
Eigenvalue Decomposition — Power Iteration, Jacobi, Francis QR

Algorithm Flow

Power Iteration

Goal: Find the dominant eigenvalue \(\lambda_1\) (largest magnitude) and corresponding eigenvector \(\mathbf{v}_1\) of matrix \(A\).

Algorithm:

  1. Initialize random vector \(\mathbf{v}^{(0)}\) (implementation uses sum of column absolute values)
  2. For \(k = 0, 1, \dots\):

  3. \(\mathbf{w}^{(k+1)} = A \cdot \mathbf{v}^{(k)}\)

  4. \(\mathbf{v}^{(k+1)} = \mathbf{w}^{(k+1)} / \|\mathbf{w}^{(k+1)}\|_2\)
  5. \(\lambda^{(k+1)} = \left(\mathbf{v}^{(k+1)}\right)^T \cdot A \cdot \mathbf{v}^{(k+1)}\) (Rayleigh quotient)
  6. Convergence: \(|\lambda^{(k+1)} - \lambda^{(k)}| < \text{tolerance} \cdot |\lambda^{(k+1)}|\)

Convergence Rate: Linear, ratio \(|\lambda_2 / \lambda_1|\). Slow when eigenvalues are close.

Limitations:

  • Returns only one eigenpair
  • Requires \(|\lambda_1| > |\lambda_2|\)
  • Initial vector must have component in \(\mathbf{v}_1\) direction

Inverse Power Iteration

Goal: Find the smallest magnitude eigenvalue \(\lambda_n\) and its eigenvector (fundamental frequency in structural dynamics).

Principle: Apply power iteration to \(A^{-1}\). The largest-magnitude eigenvalue of \(A^{-1}\) is \(1/\lambda_n\), so the iteration converges to the eigenpair of \(\lambda_n\).

Algorithm:

  1. Initialize \(\mathbf{v}^{(0)}\) (alternating signs to avoid alignment with dominant eigenvector)
  2. For \(k = 0, 1, \dots\):

  3. Solve \(A \cdot \mathbf{y} = \mathbf{v}^{(k)}\) (i.e., \(\mathbf{y} = A^{-1} \mathbf{v}^{(k)}\))

  4. \(\mathbf{v}^{(k+1)} = \mathbf{y} / \|\mathbf{y}\|_2\)
  5. \(\lambda^{(k+1)} = \left(\mathbf{v}^{(k+1)}\right)^T \cdot A \cdot \mathbf{v}^{(k+1)}\)
  6. Same convergence check as power iteration

Key Differences:

  • Each iteration solves a linear system (\(O(n^3)\)) vs matrix-vector multiply (\(O(n^2)\))
  • Requires \(A\) to be invertible (non-singular)
  • Usually converges faster (small eigenvalues are better separated in relative terms)

Jacobi Eigenvalue Decomposition (Symmetric)

Goal: Compute all real eigenvalues \(\lambda_1, \dots, \lambda_n\) and orthogonal eigenvector matrix \(V\) for symmetric \(A\).

Principle: Diagonalize \(A\) via a sequence of Givens rotations (orthogonal similarity transforms): \(V^T \cdot A \cdot V = \Lambda\).

Algorithm:

  1. Initialize \(V = I_n\), workspace \(B = A\)
  2. Sweep all upper off-diagonal pairs \((p, q)\), \(p < q\); for each pair with \(|B_{pq}| > \text{tolerance}\):

    • Compute rotation parameters: \[ \tau = \frac{B_{qq} - B_{pp}}{2 B_{pq}},\quad t = \frac{\text{sign}(\tau)}{|\tau| + \sqrt{1+\tau^2}},\quad c = \frac{1}{\sqrt{1+t^2}},\quad s = t \cdot c \]
    • Apply: \(B \leftarrow J^T \cdot B \cdot J\), \(V \leftarrow V \cdot J\)

  3. Repeat full sweeps until \(\max_{p<q} |B_{pq}| < \text{tolerance}\)
  4. \(\lambda_i = B_{ii}\), eigenvectors are the columns of \(V\)

Properties:

  • High accuracy: excellent orthogonality retention for symmetric matrices
  • Complexity: ~\(O(n^3)\) per sweep, typically \(3{-}5\) sweeps needed
  • Best for: small symmetric matrices (\(n \leq 10\), e.g. stiffness/mass matrices)

Francis QR Eigenvalue Decomposition (General)

Goal: Compute all eigenvalues (possibly complex; current implementation returns real parts only).

Algorithm (simplified: single-shift QR):

Phase 1 — Hessenberg reduction:

  • For each column \(k = 0, \dots, n-3\):

  • Build a Householder reflector on rows \(k+1..n-1\) that zeroes \(A[k+2..n-1, k]\) (everything below the subdiagonal in column \(k\))

  • Apply: \(A \leftarrow H^T \cdot A \cdot H\)
  • Result: upper Hessenberg form (zeros below first subdiagonal)

Phase 2 — QR Iteration with deflation:

For \(m = n, n-1, \dots, 2\):

  1. If \(|H_{m,m-1}| < \text{tolerance} \cdot (|H_{mm}| + |H_{m-1,m-1}|)\), treat as zero → deflate
  2. Compute Wilkinson shift \(\mu\) from \(H[m-1..m, m-1..m]\) submatrix
  3. Single QR step:

  4. \([Q, R] = \text{qr}(H[0..m-1, 0..m-1] - \mu I)\)

  5. \(H[0..m-1, 0..m-1] = R \cdot Q + \mu I\)
  6. Accumulate eigenvectors: \(V \leftarrow V \cdot Q\)

Phase 3 — Extract eigenvalues:

  • 1×1 blocks: \(\lambda = H_{ii}\) (real)
  • 2×2 blocks: compute submatrix eigenvalues (complex conjugate pairs; real part only returned)

Properties:

  • General: handles non-symmetric matrices
  • Convergence: typically quadratic with Wilkinson shift
  • Complexity: \(O(n^2)\) per iteration (exploiting Hessenberg), \(O(n^3)\) total
  • Limitation: complex eigenvalues currently return real parts only

Method Selection Guide:

  • Only largest eigenpair → power_iteration()
  • Only smallest eigenpair (fundamental freq.) → inverse_power_iteration()
  • Symmetric matrix, all eigenvalues → eigendecompose_jacobi()
  • Non-symmetric matrix, all eigenvalues → eigendecompose_qr()
  • Unknown matrix type → eigendecompose() (auto-select)

LINEAR ALGEBRA - Eigenvalues & Eigenvectors

Struct: Mat::EigenPair

Mat::EigenPair::EigenPair();
// fields:
// float eigenvalue;      // eigenvalue (largest-magnitude for power_iteration, smallest for inverse_power_iteration)
// Mat eigenvector;       // corresponding eigenvector (n x 1)
// int iterations;        // number of iterations (for iterative methods)
// tiny_error_t status;   // computation status (TINY_OK / error code)

Description:

Container for a single eigenvalue/eigenvector result and related metadata. Typically returned by power_iteration or inverse_power_iteration.

Insight:

The pair is intentionally minimal: one eigenvalue, one eigenvector, and convergence metadata. That makes it practical for iterative methods where you care about one mode, not the full spectrum.

Struct: Mat::EigenDecomposition

Mat::EigenDecomposition::EigenDecomposition();
// fields:
// Mat eigenvalues;    // n x 1 matrix storing eigenvalues
// Mat eigenvectors;   // n x n matrix, columns are eigenvectors
// int iterations;     // iterations used by the algorithm
// tiny_error_t status; // computation status

Description:

Container for a full eigendecomposition result (all eigenvalues and eigenvectors).

Insight:

This is a full spectral snapshot. It is valuable when the matrix is small enough that the cost and memory are acceptable, and when you need more than one mode or a basis change.

Power Iteration (dominant eigenpair)

Mat::EigenPair Mat::power_iteration(int max_iter, float tolerance) const;

Description:

Compute the dominant (largest-magnitude) eigenvalue and its eigenvector using the power iteration method. Fast enough for real-time SHM applications, where it quickly identifies the primary frequency.

Mathematical Principle:

Power iteration finds the eigenvalue with the largest absolute value by iteratively applying the matrix to a vector:

  1. Start with random vector v₀

  2. Iterate: vₖ₊₁ = A * vₖ / ||A * vₖ||

  3. Eigenvalue estimate: λₖ = (vₖ^T * A * vₖ) / (vₖ^T * vₖ) (Rayleigh quotient)

Convergence:

The method converges to the dominant eigenvalue if:

  • The dominant eigenvalue is unique (|λ₁| > |λ₂| ≥ ... ≥ |λₙ|)

  • The initial vector has a non-zero component in the direction of the dominant eigenvector

Parameters:

  • int max_iter : Maximum number of iterations (typical default: 1000).

  • float tolerance : Convergence tolerance (e.g. 1e-6). Convergence is checked by |λₖ - λₖ₋₁| < tolerance * |λₖ|.

Returns:

EigenPair containing eigenvalue, eigenvector, iterations, and status.

Usage Insights:

  • Real-Time Applications: Fast convergence for well-separated eigenvalues, suitable for real-time structural health monitoring.

  • Initialization: The implementation uses a smart initialization strategy (sum of column absolute values) to avoid convergence to smaller eigenvalues.

  • Convergence Rate: Convergence is linear with rate |λ₂|/|λ₁|. Slower when eigenvalues are close.

  • Limitations:

  • Only finds one eigenvalue-eigenvector pair

  • Requires |λ₁| > |λ₂| (dominant eigenvalue must be unique)

  • May converge slowly if eigenvalues are close

  • Applications:

  • Principal component analysis (first principal component)

  • PageRank algorithm

  • Structural dynamics (fundamental frequency)

Pitfalls:

  • Only one mode: It will not tell you the rest of the spectrum.

  • Dominance requirement: If the largest-magnitude eigenvalue is not clearly separated, convergence slows or becomes ambiguous.

  • Scaling matters: Poorly scaled matrices can slow convergence or worsen the eigenvector estimate.

Inverse Power Iteration (smallest eigenpair)

Mat::EigenPair Mat::inverse_power_iteration(int max_iter, float tolerance) const;

Description:

Compute the smallest-magnitude eigenvalue and its eigenvector using the inverse power iteration method. Critical for system identification: in structural dynamics the smallest eigenvalue corresponds to the fundamental frequency (lowest mode), which makes this method essential for SHM applications.

Mathematical Principle:

Inverse power iteration applies power iteration to A^(-1), which has eigenvalues 1/λᵢ. Since 1/λₙ is the largest eigenvalue of A^(-1), the method converges to the smallest eigenvalue of A:

  1. Start with vector v₀

  2. Iterate: Solve A * yₖ = vₖ, then vₖ₊₁ = yₖ / ||yₖ||

  3. Eigenvalue estimate: λₖ = (vₖ^T * A * vₖ) / (vₖ^T * vₖ) (Rayleigh quotient)

Convergence:

Converges to the smallest eigenvalue if:

  • The smallest eigenvalue is unique (|λₙ| < |λₙ₋₁| ≤ ... ≤ |λ₁|)

  • Matrix A is invertible (non-singular)

  • Initial vector has component in direction of smallest eigenvector

Parameters:

  • int max_iter : Maximum number of iterations (default: 1000).

  • float tolerance : Convergence tolerance (default: 1e-6). Uses relative tolerance: |λₖ - λₖ₋₁| < tolerance * max(|λₖ|, 1.0).

Returns:

EigenPair containing the smallest eigenvalue, eigenvector, iterations, and status.

Algorithm Steps:

  1. Initialize normalized eigenvector v (with alternating signs to avoid alignment with dominant eigenvector)
  2. Iterate: Solve A * y = v (equivalent to y = A^(-1) * v) using solve()
  3. Normalize y to get new v
  4. Compute eigenvalue estimate using Rayleigh quotient: λ = (v^T * A * v) / (v^T * v)
  5. Check convergence using relative tolerance

Usage Insights:

  • System Identification: Essential for finding fundamental frequencies in structural dynamics, where the smallest eigenvalue corresponds to the lowest natural frequency.

  • Numerical Stability: The implementation includes checks for singular matrices and handles near-singular cases gracefully.

  • Initialization Strategy: Uses alternating sign pattern to avoid convergence to larger eigenvalues, ensuring convergence to the smallest eigenvalue.

  • Performance: Each iteration requires solving a linear system (O(n³) for dense matrices), but typically converges in fewer iterations than power iteration.

  • Complementary to Power Iteration:

  • Power iteration: finds λ_max (highest frequency)

  • Inverse power iteration: finds λ_min (fundamental frequency)

  • Together they provide the frequency range of the system

  • Applications:

  • Structural health monitoring (fundamental frequency detection)

  • Modal analysis (lowest mode shape)

  • System identification

  • Stability analysis (smallest eigenvalue indicates stability margin)

Notes:

  • Requires a square matrix and non-null data pointer; returns an error status otherwise.

  • The matrix must be invertible (non-singular) for this method to work. If the matrix is singular or near-singular, the method will fail gracefully.

  • Inverse power iteration only returns the smallest eigenpair. For full spectrum, use eigendecomposition functions below.

  • This method is complementary to power iteration: power iteration finds the largest eigenvalue, while inverse power iteration finds the smallest eigenvalue.

Pitfalls:

  • Requires invertibility: Singular or near-singular matrices can make each iteration unstable or impossible.

  • Solve step dominates cost: Each iteration calls a linear solver, so the method is only attractive when you need a single extremal eigenpair.

  • Smallest magnitude is not always smallest numeric value: For signed spectra, “minimum magnitude” and “most negative” are different concepts.

Jacobi Eigendecomposition (symmetric matrices)

Mat::EigenDecomposition Mat::eigendecompose_jacobi(float tolerance, int max_iter) const;

Description:

Compute the full eigendecomposition using the Jacobi method. Recommended for symmetric matrices: robust and accurate, with the stability needed for structural dynamics matrices in SHM.

Mathematical Principle:

The Jacobi method diagonalizes a symmetric matrix through a series of orthogonal similarity transformations (Givens rotations):

  1. Find the largest off-diagonal element a_pq

  2. Compute rotation angle θ to zero this element

  3. Apply rotation: A' = J^T * A * J, where J is the rotation matrix

  4. Repeat until all off-diagonal elements are below tolerance

Convergence:

The method converges when the maximum off-diagonal element is below tolerance. Each rotation zeros one off-diagonal element, and the process continues until the matrix is diagonal.

Parameters:

  • float tolerance : Convergence threshold (e.g. 1e-6). Maximum allowed magnitude of off-diagonal elements.

  • int max_iter : Maximum iterations (e.g. 100). Typically converges in O(n²) iterations for n×n matrices.

Returns:

EigenDecomposition with eigenvalues, eigenvectors, iterations, and status.

Usage Insights:

  • Symmetric Matrices: Designed for symmetric matrices. For non-symmetric matrices, use QR method.

  • Numerical Stability: Very stable for symmetric matrices, with good preservation of orthogonality.

  • Accuracy: High accuracy, suitable for applications requiring precise eigenvalue/eigenvector pairs.

  • Performance: O(n³) per iteration, but typically requires fewer iterations than QR for symmetric matrices.

  • Applications:

  • Structural dynamics: Stiffness and mass matrices are symmetric

  • Principal Component Analysis (PCA)

  • Spectral clustering

  • Quadratic forms optimization

Notes:

If the matrix is not approximately symmetric the function will warn, though it may still run. For non-symmetric matrices prefer the QR method.

Pitfalls:

  • Symmetry is the contract: The method is designed around orthogonal similarity transforms. Non-symmetric input breaks the assumption behind the algorithm.

  • Iteration budget: Very tight tolerances may require many rotations; keep an eye on max_iter.

  • Off-diagonal noise: Floating-point cleanup can leave tiny residuals. Treat the result as approximate diagonalization.

QR Eigendecomposition (general matrices)

Mat::EigenDecomposition Mat::eigendecompose_qr(int max_iter, float tolerance) const;

Description:

Compute the eigendecomposition using the QR algorithm. Works for general (possibly non-symmetric) matrices; such matrices may have complex eigenvalues, of which only the real part is returned.

Mathematical Principle:

The QR algorithm iteratively applies QR decomposition:

  1. Start with A₀ = A

  2. For k = 0, 1, 2, ...: Compute QR decomposition: Aₖ = Qₖ * Rₖ, then update: Aₖ₊₁ = Rₖ * Qₖ

  3. Aₖ converges to upper triangular form (Schur form), with eigenvalues on the diagonal

Convergence:

The algorithm converges when Aₖ is approximately upper triangular (sub-diagonal elements < tolerance). The eigenvalues appear on the diagonal, and eigenvectors are accumulated from Q matrices.

Parameters:

  • int max_iter : Maximum number of QR iterations (default: 100).

  • float tolerance : Convergence tolerance (e.g. 1e-6). Uses relative tolerance comparing sub-diagonal elements to diagonal elements.

Returns:

EigenDecomposition containing eigenvalues, eigenvectors, iterations and status.

Usage Insights:

  • General Matrices: Can handle non-symmetric matrices, unlike Jacobi method.

  • Complex Eigenvalues: Non-symmetric matrices may have complex eigenvalues; current implementation returns real parts only.

  • Numerical Stability: Uses modified Gram-Schmidt with re-orthogonalization for improved stability.

  • Performance: O(n³) per iteration. May require many iterations for convergence, especially for ill-conditioned matrices.

  • Convergence Acceleration: The implementation could benefit from shifts (Wilkinson shift) for faster convergence, but current version uses basic QR iteration.

  • Applications:

  • General matrix eigenvalue problems

  • Dynamical systems analysis

  • Control theory (system poles)

Notes:

QR uses Gram–Schmidt for Q/R in this implementation; it can be less stable for ill-conditioned matrices. For symmetric matrices, Jacobi is preferred due to better stability and accuracy.

Pitfalls:

  • Complex spectrum caveat: The current interface keeps only the real part of eigenvalues, so it is not a full complex eigensolver.

  • Convergence can be slow: Without shifts, QR can need many iterations on difficult matrices.

  • Orthogonalization quality matters: Because Q is built through Gram-Schmidt, ill-conditioned inputs can degrade the iteration.

Automatic Eigendecomposition

Mat::EigenDecomposition Mat::eigendecompose(float tolerance = 1e-6f, int max_iter = 100) const;

Description:

Convenience interface that automatically selects an appropriate algorithm based on matrix properties: it tests symmetry with is_symmetric(tolerance * 10.0f) and uses Jacobi if the matrix is approximately symmetric, QR otherwise. Well suited to edge-computing applications where manual algorithm selection is impractical.

Algorithm Selection:

  1. Test if matrix is symmetric: is_symmetric(tolerance * 10.0f)
  2. If symmetric → use eigendecompose_jacobi(tolerance, max_iter) (more stable and accurate)
  3. If not symmetric → use eigendecompose_qr(max_iter, tolerance) (handles general matrices)

Parameters:

  • float tolerance : Used for symmetry test and decomposition convergence (default 1e-6f).
  • int max_iter : Maximum number of iterations (must be > 0, default = 100).

Returns:

EigenDecomposition containing all eigenvalues and eigenvectors.

Usage Insights:

  • Automatic Optimization: Saves the user from manually choosing the algorithm, while still providing optimal performance.

  • Edge Computing: Ideal for embedded systems where you want good performance without manual tuning.

  • Robustness: The symmetry test uses a relaxed tolerance (10×) to handle numerical errors, ensuring symmetric matrices are correctly identified.

Usage Tips:

  • Known Symmetry: If the matrix is known to be symmetric (e.g. stiffness or mass matrices), call eigendecompose_jacobi directly for best stability and slightly better performance.

  • Unknown Properties: For general matrices or unknown symmetry, use eigendecompose for automatic selection.

  • Performance Considerations:

  • Eigendecomposition is computationally expensive for large matrices on embedded platforms

  • For n > 20, consider reduced-order methods or iterative methods (power iteration) when only a few eigenvalues are needed
  • For real-time applications, use power_iteration() or inverse_power_iteration() for single eigenvalues

  • Memory Usage: Full eigendecomposition requires storing all eigenvectors (n×n matrix), which can be memory-intensive for large matrices.

Pitfalls:

  • Automatic does not mean free: The symmetry test and the decomposition both cost time. If you already know the matrix class, call the specialized method directly.

  • Tolerance is dual-purpose: It influences both symmetry detection and convergence behavior, so changing it affects algorithm selection as well as numerical stopping.

  • Large matrices: For embedded use, full eigendecomposition can dominate both compute time and RAM.