Equana


Sparse Matrices

Efficient storage and solvers for large-scale systems


Sparse matrices store only nonzero elements, making them essential for large-scale problems where most entries are zero. A 10,000 × 10,000 dense matrix of 8-byte floats requires 800 MB of memory, but if only 0.1% of entries are nonzero (100,000 values), a compressed sparse representation needs only about 1.2 MB for the values, column indices, and row pointers: a saving of more than 600×.
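The ratio is easy to check empirically with two functions covered later in this tutorial, sprand and nnz. A minimal sketch; the exact nonzero count varies from run to run because the pattern is random:

```
# 1000x1000 matrix at 0.1% density: ~1,000 stored values
S = sprand(1000, 1000, 0.001)
nnz(S)          # on the order of 1000
1000 * 1000     # a dense matrix would store all 1,000,000 entries
```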

This tutorial covers every sparse matrix function available in Equana: creation, inspection, manipulation, indexing, iterative linear solvers, eigenvalue and singular value decompositions, and structural analysis. Run the cells in order — variables persist across the notebook.

Creating Sparse Matrices

Equana provides several ways to create sparse matrices, from converting existing dense arrays to generating random sparse patterns.

Converting Dense to Sparse

The sparse(A) function converts a dense matrix into compressed sparse row (CSR) format, discarding zero entries:

Code [1]
# Convert a dense matrix to sparse
A = [1, 0, 0; 0, 2, 0; 0, 0, 3]
S = sparse(A)

# Verify it is sparse and check the nonzero count
issparse(S)    # 1
nnz(S)         # 3

Triplet Construction

Build a sparse matrix from row indices, column indices, and values. Indices are 1-based. Optionally specify the matrix dimensions:

Code [2]
# sparse(i, j, v, m, n) — 1-based row/col indices
rows = [1, 2, 3, 1]
cols = [1, 2, 3, 3]
vals = [10, 20, 30, 5]
S = sparse(rows, cols, vals, 3, 3)

# Convert back to dense to see the result
full(S)

Empty and Pre-allocated Sparse Matrices

Function            Description
sparse(m, n)        Empty m-by-n sparse matrix (all zeros)
spalloc(m, n, nz)   Same, with a hint for pre-allocated storage

Code [3]
# Create an empty 5x5 sparse matrix
S_empty = sparse(5, 5)
nnz(S_empty)    # 0

# Pre-allocate (nz is a hint; JS arrays auto-resize)
S_alloc = spalloc(100, 100, 500)
nnz(S_alloc)    # 0

Sparse Identity

speye(n) creates a sparse n-by-n identity matrix. speye(m, n) creates a rectangular matrix with ones on the main diagonal:

Code [4]
# Sparse identity matrices
I3 = speye(3)
full(I3)

# Rectangular sparse identity (3x5)
I35 = speye(3, 5)
full(I35)

Random Sparse Matrices

Function          Distribution       Symmetry
sprand(m, n, d)   Uniform on [0, 1)  No
sprandn(m, n, d)  Standard normal    No
sprandsym(n, d)   Normal-based       Symmetric

The density parameter d (0 to 1) controls the fraction of nonzero entries:

Code [5]
# Uniform random sparse (10x10, ~30% dense)
R1 = sprand(10, 10, 0.3)
nnz(R1)

# Normal-distributed random sparse (can have negative values)
R2 = sprandn(10, 10, 0.3)
nnz(R2)

# Symmetric random sparse
R3 = sprandsym(10, 0.3)
nnz(R3)

Diagonal Sparse Matrices with spdiags

spdiags(B, d, m, n) places the columns of matrix B along the diagonals specified by vector d. Diagonal offsets: 0 = main diagonal, positive = above, negative = below. This is the standard way to build banded matrices like the classic tridiagonal Laplacian:

Code [6]
# Build a 5x5 tridiagonal matrix: [-1, 2, -1]
n = 5
e = ones(n, 1)
T = spdiags([-e, 2*e, -e], [-1, 0, 1], n, n)
full(T)

Importing from Triplet Data with spconvert

spconvert(D) converts a dense matrix where each row is [i, j, v] into a sparse matrix. A final row [m, n, 0] can specify dimensions:

Code [7]
# Build sparse from [i, j, v] row data
D = [1, 1, 5; 2, 2, 10; 3, 3, 15]
S = spconvert(D)
full(S)
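The dimension row works as sketched below: appending a final [m, n, 0] triplet enlarges the result without adding a stored value. A minimal 5 × 5 example following the convention described above:

```
# Append [5, 5, 0] to force a 5x5 result
D = [1, 1, 5; 2, 2, 10; 3, 3, 15; 5, 5, 0]
S = spconvert(D)
full(S)    # 5x5, with all-zero rows and columns 4-5
```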

Inspecting Sparse Matrices

These functions let you query properties of a sparse matrix without converting it to dense form:

Function       Returns
issparse(S)    1 if sparse, 0 otherwise
nnz(S)         Number of stored nonzero elements
nonzeros(S)    Column vector of nonzero values
nzmax(S)       Allocated storage (equals nnz in JS)
full(S)        Dense matrix equivalent

Code [8]
# Create a sparse matrix for inspection
S = sparse([1, 2, 3, 1], [1, 2, 3, 3], [10, 20, 30, 5], 3, 3)

# Query properties
issparse(S)      # 1
nnz(S)           # 4
nzmax(S)         # 4
nonzeros(S)

Code [9]
# Convert to dense when needed (small matrices only!)
A = full(S)

# issparse on a dense matrix returns 0
issparse(A)      # 0

# nnz works on dense matrices too
nnz(A)           # 4

Indexing and Slicing

Sparse matrices support the same dual-indexing syntax as dense arrays:

Syntax     Base      Meaning
S(i, j)    1-based   Equana style
S[i, j]    0-based   NumPy / C style
S(:, j)    1-based   Extract column j
S(i, :)    1-based   Extract row i
S[:, j]    0-based   Extract column j
S[i, :]    0-based   Extract row i
S(:)       n/a       Flatten to column vector

Indexing a structural zero returns 0:

Code [10]
# Build a test matrix
S = sparse([1, 2, 3], [1, 2, 3], [10, 20, 30], 3, 3)

# 1-based scalar access
S(1, 1)      # 10
S(2, 2)      # 20
S(1, 2)      # 0 (structural zero)

# 0-based scalar access
S[0, 0]      # 10
S[1, 1]      # 20
S[0, 1]      # 0

Code [11]
# Extract rows and columns
S = sparse([1, 2, 3], [1, 2, 3], [10, 20, 30], 3, 3)

# Column extraction (1-based)
S(:, 2)      # [0; 20; 0]

# Row extraction (1-based)
S(2, :)      # [0, 20, 0]

# 0-based equivalents
S[:, 1]      # [0; 20; 0]
S[1, :]      # [0, 20, 0]

Code [12]
# Flatten to dense column vector
S = speye(3)
v = S(:)     # [1, 0, 0, 0, 1, 0, 0, 0, 1]

Manipulating Sparse Matrices

Sparsity Pattern Operations

Function          Description
spones(S)         Replace all nonzeros with 1 (keep sparsity pattern)
spfun(@func, S)   Apply a function to nonzero values only

These functions preserve the sparsity pattern — zeros remain zeros:

Code [13]
# spones: replace nonzeros with 1
S = sparse([1, 2, 3], [1, 2, 3], [10, 20, 30], 3, 3)
P = spones(S)
full(P)          # [1 0 0; 0 1 0; 0 0 1]
nnz(P)           # 3 (same pattern as S)

Code [14]
# spfun: apply a function handle to nonzero entries
S = sparse([1, 2, 3], [1, 2, 3], [4, 9, 16], 3, 3)

# Square root of nonzeros only
R = spfun(@sqrt, S)
full(R)          # [2 0 0; 0 3 0; 0 0 4]

Solving Sparse Linear Systems

For large sparse systems, direct solvers (like A \ b) become impractical. Iterative solvers approximate the solution through successive refinement.

Available Solvers

Solver          Best For                             Requires
gmres(A, b)     General nonsymmetric systems         Nothing special
bicgstab(A, b)  General nonsymmetric systems         Nothing special
pcg(A, b)       Symmetric positive definite (SPD)    A must be SPD
lsqr(A, b)      Least-squares / rectangular systems  Nothing special

All solvers support optional tolerance and iteration-limit parameters. Multiple outputs return convergence diagnostics:

(x, flag, relres, iter) = gmres(A, b, restart, tol, maxit)

  • flag = 0: converged; flag ~= 0: did not converge
  • relres: relative residual norm
  • iter: number of iterations used

GMRES — General Systems

GMRES (Generalized Minimum Residual) works for any square nonsingular sparse matrix. The optional restart parameter controls memory usage for very large systems:

Code [15]
# Solve a simple system with GMRES
A = speye(3)
b = [1; 2; 3]
x = gmres(A, b)

# With convergence info
(x, flag, relres, iter) = gmres(A, b)
flag     # 0 = converged
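The restart, tol, and maxit parameters from the signature above can be passed explicitly. A sketch on the tridiagonal matrix from the spdiags section; the restart value of 5 is an arbitrary choice, and iteration counts will vary:

```
# Tridiagonal test system
n = 10
e = ones(n, 1)
A = spdiags([-e, 2*e, -e], [-1, 0, 1], n, n)
b = ones(n, 1)

# restart = 5, tol = 1e-8, maxit = 100
(x, flag, relres, iter) = gmres(A, b, 5, 1e-8, 100)
flag     # 0 = converged
```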

BiCGSTAB — Nonsymmetric Systems

BiCGSTAB (Bi-Conjugate Gradient Stabilized) is another choice for general nonsymmetric systems, often with smoother convergence than GMRES:

Code [16]
# Solve with BiCGSTAB
A = speye(3)
b = [4; 5; 6]
x = bicgstab(A, b)

# With tolerance and max iterations
(x, flag, relres, iter) = bicgstab(A, b, 1e-10, 100)
flag     # 0 = converged

PCG — Symmetric Positive Definite Systems

Preconditioned Conjugate Gradient is the most efficient solver when the matrix is symmetric and positive definite (SPD). Many physical problems produce SPD systems:

Code [17]
# PCG requires symmetric positive definite matrix
# The identity matrix is trivially SPD
A = speye(3)
b = [7; 8; 9]
x = pcg(A, b)

# With diagnostics
(x, flag, relres, iter) = pcg(A, b, 1e-12, 50)
flag     # 0 = converged

LSQR — Least-Squares Problems

lsqr solves overdetermined or underdetermined systems in the least-squares sense (minimizes ||Ax - b||). It works on rectangular sparse matrices:

Code [18]
# LSQR solves min ||Ax - b|| in the least-squares sense
A = speye(3)
b = [1; 2; 3]
(x, flag) = lsqr(A, b)
flag     # 0 = converged
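Because lsqr accepts rectangular matrices, it can also handle an overdetermined system directly. A minimal sketch with a 4 × 2 matrix built from triplets; each unknown appears in two equations, so the least-squares solution averages them:

```
# Overdetermined 4x2 system: four equations, two unknowns
A = sparse([1, 2, 3, 4], [1, 1, 2, 2], [1, 1, 1, 1], 4, 2)
b = [1; 3; 2; 4]

# Minimizes ||Ax - b||; x(1) averages b(1) and b(2), x(2) averages b(3) and b(4)
(x, flag) = lsqr(A, b)
```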

Eigenvalues and Decompositions

For large sparse matrices, computing all eigenvalues or singular values is infeasible. These functions compute a small number of the largest values using iterative methods:

Function     Computes
eigs(A, k)   k largest eigenvalues
svds(A, k)   k largest singular values
ilu(A)       Incomplete LU factorization
ichol(A)     Incomplete Cholesky factorization

Eigenvalues with eigs

Single output returns eigenvalues as a vector. Two outputs return eigenvectors V and a diagonal eigenvalue matrix D:

Code [19]
# Find the 3 largest eigenvalues of a sparse identity
I = speye(10)
d = eigs(I, 3)

# With eigenvectors: (V, D) = eigs(A, k)
(V, D) = eigs(I, 3)
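The identity is a degenerate test case (every eigenvalue equals 1). A slightly richer sketch applies eigs to the tridiagonal Laplacian built with spdiags later in this tutorial, whose eigenvalues are distinct and lie strictly between 0 and 4:

```
# Three largest eigenvalues of a 10x10 tridiagonal Laplacian
n = 10
e = ones(n, 1)
T = spdiags([-e, 2*e, -e], [-1, 0, 1], n, n)
d = eigs(T, 3)    # each value below 4
```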

Singular Values with svds

Single output returns singular values. Three outputs return the full SVD factors [U, S, V]:

Code [20]
# Find the 3 largest singular values
I = speye(10)
s = svds(I, 3)

# Full SVD decomposition: (U, S, V) = svds(A, k)
(U, S, V) = svds(I, 3)

Incomplete Factorizations

Incomplete LU (ilu) and incomplete Cholesky (ichol) produce approximate factorizations used as preconditioners for iterative solvers. They are much cheaper than full factorizations:

Code [21]
# Incomplete LU factorization
A = speye(5)
(L, U) = ilu(A)

# Incomplete Cholesky (for SPD matrices)
L_chol = ichol(A)
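For the identity both incomplete factors are trivial. A sketch on a tridiagonal matrix is more telling: full LU of a tridiagonal matrix produces no fill-in, so the product of the incomplete factors should reproduce the original matrix closely (this assumes the returned sparse factors support * multiplication, which is not shown elsewhere in this tutorial):

```
# Incomplete LU of a tridiagonal matrix
n = 6
e = ones(n, 1)
T = spdiags([-e, 2*e, -e], [-1, 0, 1], n, n)
(L, U) = ilu(T)

# Tridiagonal LU has no fill-in, so L*U should match T
full(L * U)
full(T)
```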

Structural Analysis

sprank(A) computes the structural rank of a sparse matrix — an upper bound on the numerical rank determined by the sparsity pattern alone. It uses a greedy maximum bipartite matching algorithm:

Code [22]
# Structural rank of sparse identity
r1 = sprank(speye(5))    # 5 (full rank)

# Structural rank of a sparse diagonal
S = sparse([1, 2, 3], [1, 2, 3], [10, 20, 30], 5, 5)
r2 = sprank(S)            # 3 (rank-deficient)

Practical Example: 1D Laplacian

A classic application of sparse matrices is discretizing differential operators. The 1D Laplacian on a uniform grid with n interior points produces the tridiagonal matrix:

        [  2  -1   0  ...   0 ]
        [ -1   2  -1  ...   0 ]
    T = [         ...         ]
        [  0  ... -1   2  -1  ]
        [  0  ...  0  -1   2  ]

We build this with spdiags, then solve T * u = f using pcg (since T is symmetric positive definite):

Code [23]
# Build the 1D discrete Laplacian (n interior points)
n = 8
e = ones(n, 1)
T = spdiags([-e, 2*e, -e], [-1, 0, 1], n, n)

# Verify properties
issparse(T)     # 1
nnz(T)          # 22 (3*n - 2)
full(T)

Code [24]
# Right-hand side: constant source f = 1 at every grid point
f = ones(n, 1)

# Solve T * u = f using PCG (T is SPD)
(u, flag) = pcg(T, f, 1e-10, 100)
flag     # 0 = converged

Code [25]
# Inspect the solution and structural rank
sprank(T)     # n (full structural rank)

Quick Reference

Creation Functions

Function                 Description
sparse(A)                Convert dense matrix to sparse
sparse(i, j, v, m, n)    Build from triplets (1-based indices)
sparse(m, n)             Empty m-by-n sparse matrix
speye(n) / speye(m, n)   Sparse identity matrix
sprand(m, n, d)          Random uniform sparse
sprandn(m, n, d)         Random normal sparse
sprandsym(n, d)          Random symmetric sparse
spdiags(B, d, m, n)      Build from diagonals
spalloc(m, n, nz)        Pre-allocate empty sparse
spconvert(D)             Convert [i j v] rows to sparse

Inspection & Manipulation

Function        Description
issparse(S)     Test if sparse (returns 1 or 0)
nnz(S)          Count nonzero elements
nonzeros(S)     Extract nonzero values as vector
nzmax(S)        Allocated storage size
full(S)         Convert to dense matrix
spones(S)       Replace nonzeros with 1
spfun(@f, S)    Apply function to nonzeros
spparms         Compatibility stub

Iterative Solvers

Function         System Type
gmres(A, b)      General
bicgstab(A, b)   General
pcg(A, b)        Symmetric positive definite
lsqr(A, b)       Least-squares / rectangular

Decompositions & Analysis

Function      Computes
eigs(A, k)    k largest eigenvalues
svds(A, k)    k largest singular values
ilu(A)        Incomplete LU factorization
ichol(A)      Incomplete Cholesky factorization
sprank(A)     Structural rank

Summary

This tutorial covered the full sparse matrix toolkit in Equana:

  • Creating sparse matrices with sparse, speye, sprand, sprandn, sprandsym, spdiags, spalloc, and spconvert
  • Inspecting with issparse, nnz, nonzeros, nzmax, and full
  • Indexing and slicing with 1-based () and 0-based [] notation, including column/row extraction and flattening
  • Manipulating sparsity patterns with spones and spfun
  • Iterative solvers (gmres, bicgstab, pcg, and lsqr) for large linear systems
  • Eigenvalue and singular value computation with eigs and svds
  • Incomplete factorizations with ilu and ichol for preconditioning
  • Structural analysis with sprank
  • A practical 1D Laplacian example combining spdiags and pcg

Next Steps

  • Explore the Working with Matrices tutorial for dense array fundamentals
  • Try the FEM tutorial to see sparse matrices used in finite element analysis
  • Check the roadmap for upcoming features like reordering algorithms
