4.8. Faster Methods for Solving $Ax = b$ for Tridiagonal and Banded Matrices, and Strict Diagonal Dominance#
Reference:
Section 6.6 Special Types of Matrices in [Burden et al., 2016], the sub-sections on Band Matrices and Tridiagonal Matrices.
Tridiagonal systems#
Differential equations often lead to the need to solve systems of equations $T x = b$ in which the matrix $T$ is tridiagonal, in the following sense.
Definition 4.8 (Tridiagonal matrix)
A matrix $T$ is tridiagonal if its only non-zero entries lie on the main diagonal and on the two diagonals adjacent to it, so that it has the form

$$
T = \begin{bmatrix}
d_1 & u_1 \\
l_1 & d_2 & u_2 \\
    & l_2 & d_3 & u_3 \\
    &     & \ddots & \ddots & \ddots \\
    &     &        & l_{n-2} & d_{n-1} & u_{n-1} \\
    &     &        &         & l_{n-1} & d_n
\end{bmatrix},
$$

with all “missing” entries being zeros.
The notation used here suggests one efficient way to store such a matrix: as three 1D arrays, $d$ of length $n$ for the main diagonal, and $l$ and $u$, each of length $n-1$, for the sub-diagonal and super-diagonal respectively.
(Such equations also arise in other important situations, such as spline interpolation.)
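To make this concrete, here is a minimal NumPy sketch of that storage scheme, assuming 0-based arrays and reusing the names $d$, $u$ and $l$ from the definition above; assembling the full matrix with `np.diag` is only for display and checking.

```python
import numpy as np

n = 5
d = np.full(n, 2.0)       # main diagonal entries d_1, ..., d_n
u = np.full(n - 1, -1.0)  # super-diagonal entries u_1, ..., u_{n-1}
l = np.full(n - 1, -1.0)  # sub-diagonal entries l_1, ..., l_{n-1}

# Only for checking or display: build the full matrix from the three arrays.
# In practice one keeps just d, u, l, storing 3n - 2 numbers instead of n^2.
T = np.diag(d) + np.diag(u, 1) + np.diag(l, -1)
print(T)
```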
It can be verified that LU factorization preserves this pattern of zeros (every entry outside the three central diagonals stays zero), so that the Doolittle algorithm, if it succeeds without any division by zero, gives factors of the form

$$
L = \begin{bmatrix}
1 \\
L_1 & 1 \\
    & L_2 & 1 \\
    &     & \ddots & \ddots \\
    &     &        & L_{n-1} & 1
\end{bmatrix},
\qquad
U = \begin{bmatrix}
D_1 & u_1 \\
    & D_2 & u_2 \\
    &     & \ddots & \ddots \\
    &     &        & D_{n-1} & u_{n-1} \\
    &     &        &         & D_n
\end{bmatrix}.
$$
Note that the first non-zero element in each column is unchanged, as with a full matrix, but here that means the super-diagonal elements $u_1, \dots, u_{n-1}$ of $U$ are exactly the same as those of $T$.
Again, one way to describe and store this information is with just two new 1D arrays: $L$ of length $n-1$ and $D$ of length $n$, together with the unchanged array $u$.
Algorithms#
Algorithm 4.11 (LU factorization)
$D_1 = d_1$
for i from 2 to n
$\quad L_{i-1} = l_{i-1} / D_{i-1}$
$\quad D_i = d_i - L_{i-1} u_{i-1}$
end
Algorithm 4.12 (Forward substitution, solving $Lc = b$)
$c_1 = b_1$
for i from 2 to n
$\quad c_i = b_i - L_{i-1} c_{i-1}$
end
Algorithm 4.13 (Backward substitution, solving $Ux = c$)
$x_n = c_n / D_n$
for i from n-1 down to 1
$\quad x_i = (c_i - u_i x_{i+1}) / D_i$
end
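To see these three algorithms together, here is a possible NumPy sketch, using 0-based indices; the function names `lu_tridiagonal` and `solve_tridiagonal` are chosen just for this example, and no check for a zero pivot is made, so it should only be applied when the factorization is known to succeed (for instance, for strictly diagonally dominant matrices).

```python
import numpy as np

def lu_tridiagonal(d, l, u):
    """Algorithm 4.11: factor the tridiagonal matrix with diagonals (l, d, u).

    Returns L (sub-diagonal of the unit lower bidiagonal factor) and D
    (main diagonal of U); the super-diagonal of U is the unchanged u.
    """
    n = len(d)
    L = np.zeros(n - 1)
    D = np.zeros(n)
    D[0] = d[0]
    for i in range(1, n):            # i = 2, ..., n in the 1-based notation
        L[i - 1] = l[i - 1] / D[i - 1]
        D[i] = d[i] - L[i - 1] * u[i - 1]
    return L, D

def solve_tridiagonal(L, D, u, b):
    """Algorithms 4.12 and 4.13: solve L c = b, then U x = c."""
    n = len(D)
    c = np.zeros(n)
    c[0] = b[0]
    for i in range(1, n):            # forward substitution
        c[i] = b[i] - L[i - 1] * c[i - 1]
    x = np.zeros(n)
    x[-1] = c[-1] / D[-1]
    for i in range(n - 2, -1, -1):   # backward substitution
        x[i] = (c[i] - u[i] * x[i + 1]) / D[i]
    return x

# Quick check on a small, strictly diagonally dominant example.
n = 6
d = np.full(n, 4.0)
l = np.full(n - 1, -1.0)
u = np.full(n - 1, -1.0)
b = np.arange(1.0, n + 1)
L, D = lu_tridiagonal(d, l, u)
x = solve_tridiagonal(L, D, u, b)
T = np.diag(d) + np.diag(l, -1) + np.diag(u, 1)
print(np.allclose(T @ x, b))  # expect True
```

Each of the three stages costs only $O(n)$ operations, compared to roughly $n^3/3$ multiplications for LU factorization of a full matrix.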
Generalizing to banded matrices#
As we have seen, approximating derivatives to higher order of accuracy, or approximating derivatives of order greater than two, requires more than three nodes; however, the locations needed are all close to the ones where the derivative is being approximated.
For example, the simplest symmetric approximation of the fourth derivative at a node uses values at that node and at its two nearest neighbors on each side:

$$
f^{(4)}(x) \approx \frac{f(x-2h) - 4 f(x-h) + 6 f(x) - 4 f(x+h) + f(x+2h)}{h^4}.
$$

Each equation therefore couples an unknown to the four nearest unknowns, and the coefficient matrix has five non-zero diagonals: the main diagonal plus two on each side.
This is a penta-diagonal matrix, and an example of the larger class of banded matrices: ones in which all the non-zero elements have indices $i, j$ with $-p \le j - i \le q$ for some $p$ and $q$ much smaller than $n$. Here $p$ counts the non-zero diagonals below the main diagonal and $q$ counts those above; a tridiagonal matrix has $p = q = 1$ and a penta-diagonal matrix has $p = q = 2$.
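As a small illustration of these definitions (the helper name `bandwidths`, the size $n$ and the step $h$ are chosen just for this sketch), the following builds a penta-diagonal matrix from the fourth-derivative coefficients above and confirms that $p = q = 2$:

```python
import numpy as np

def bandwidths(A):
    """Return (p, q): the number of non-zero diagonals below and above the main one."""
    n = A.shape[0]
    p = q = 0
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0.0:
                p = max(p, i - j)
                q = max(q, j - i)
    return p, q

# Penta-diagonal matrix from the centered-difference coefficients 1, -4, 6, -4, 1.
n, h = 8, 0.1
coeffs = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) / h**4
A = sum(np.diag(np.full(n - abs(k), coeffs[k + 2]), k) for k in range(-2, 3))
print(bandwidths(A))  # expect (2, 2)
```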
Let us recap the general Doolittle algorithm for computing an LU factorization:
Algorithm 4.14 (Doolittle algorithm for computing an LU factorization)
The top row is unchanged:
for j from 1 to n
$\quad u_{1,j} = a_{1,j}$
end
The left column requires no sums:
for i from 2 to n
$\quad l_{i,1} = a_{i,1} / u_{1,1}$
end
The main loop:
for k from 2 to n
for j from k to n
$\quad u_{k,j} = a_{k,j} - \sum_{s=1}^{k-1} l_{k,s} u_{s,j}$
end
for i from k+1 to n
$\quad l_{i,k} = \left( a_{i,k} - \sum_{s=1}^{k-1} l_{i,s} u_{s,k} \right) \Big/ u_{k,k}$
end
end
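As a reference point for the banded versions below, here is one possible NumPy transcription of Algorithm 4.14 (0-based indices, no pivoting; the name `lu_doolittle` is only for this sketch). The two inner `np.dot` calls are the two sums where most of the work is done.

```python
import numpy as np

def lu_doolittle(A):
    """Algorithm 4.14: Doolittle LU factorization of a square matrix, no pivoting.

    Assumes that no zero arises on the main diagonal of U along the way.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    U[0, :] = A[0, :]                  # the top row is unchanged
    L[1:, 0] = A[1:, 0] / U[0, 0]      # the left column requires no sums
    for k in range(1, n):
        for j in range(k, n):          # row k of U
            U[k, j] = A[k, j] - np.dot(L[k, :k], U[:k, j])
        for i in range(k + 1, n):      # column k of L
            L[i, k] = (A[i, k] - np.dot(L[i, :k], U[:k, k])) / U[k, k]
    return L, U

A = np.array([[4.0, -1.0, 0.0, 1.0],
              [-1.0, 4.0, -1.0, 0.0],
              [0.0, -1.0, 4.0, -1.0],
              [1.0, 0.0, -1.0, 4.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))  # expect True
```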
Eliminating redundant calculation in the above#
With a banded matrix, many of the entries at right are zero, particularly in the two sums, which is where most of the operations are.
Thus we can rewrite the algorithm, exploiting the fact that all elements with indices $j - i < -p$ or $j - i > q$ are zero: the corresponding terms can be dropped from the sums, and the loops need only visit entries inside the band.
Algorithm 4.15 (LU factorization of a banded matrix)
The top row is unchanged:
for j from 1 to 1+q
$\quad u_{1,j} = a_{1,j}$
end
The top non-zero diagonal is unchanged:
for k from 1 to n - q
$\quad u_{k,k+q} = a_{k,k+q}$
end
The left column requires no sums:
for i from 2 to 1+p
$\quad l_{i,1} = a_{i,1} / u_{1,1}$
end
The main loop:
for k from 2 to n
for j from k to min(n, k+q-1)
$\quad u_{k,j} = a_{k,j} - \sum_{s=\max(1,\, k-p,\, j-q)}^{k-1} l_{k,s} u_{s,j}$
end
for i from k+1 to min(n, k+p)
$\quad l_{i,k} = \left( a_{i,k} - \sum_{s=\max(1,\, i-p,\, k-q)}^{k-1} l_{i,s} u_{s,k} \right) \Big/ u_{k,k}$
end
end
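A possible NumPy rendering of Algorithm 4.15 is sketched below (0-based indices, no pivoting; the name `lu_banded` and the test matrix are just for this example). For simplicity it still allocates full $n \times n$ arrays for $L$ and $U$, but the loops and the sums are restricted to the band, as in the algorithm.

```python
import numpy as np

def lu_banded(A, p, q):
    """Algorithm 4.15: LU factorization of a matrix with lower bandwidth p
    and upper bandwidth q, exploiting the zeros outside the band."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    U[0, :min(n, q + 1)] = A[0, :min(n, q + 1)]               # top row is unchanged
    for k in range(n - q):                                    # top non-zero diagonal is unchanged
        U[k, k + q] = A[k, k + q]
    L[1:min(n, p + 1), 0] = A[1:min(n, p + 1), 0] / U[0, 0]   # left column: no sums
    for k in range(1, n):
        for j in range(k, min(n, k + q)):                     # j = k, ..., min(n, k+q-1) in 1-based terms
            s0 = max(0, k - p, j - q)                         # only indices inside the band contribute
            U[k, j] = A[k, j] - np.dot(L[k, s0:k], U[s0:k, j])
        for i in range(k + 1, min(n, k + p + 1)):             # i = k+1, ..., min(n, k+p) in 1-based terms
            s0 = max(0, i - p, k - q)
            L[i, k] = (A[i, k] - np.dot(L[i, s0:k], U[s0:k, k])) / U[k, k]
    return L, U

# Quick check on a random matrix with lower bandwidth p = 1, upper bandwidth q = 2,
# made strictly diagonally dominant so that no zero pivot can occur.
rng = np.random.default_rng(0)
n, p, q = 7, 1, 2
A = np.triu(np.tril(rng.uniform(1.0, 2.0, (n, n)), q), -p) + 10.0 * np.eye(n)
L, U = lu_banded(A, p, q)
print(np.allclose(L @ U, A))  # expect True
```

A production implementation would also store only the diagonals rather than full arrays; the point here is simply that each sum now has at most $\min(p, q)$ terms instead of up to $n - 1$.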
It is common for a banded matrix to have equal band-width on either side, $p = q$, as with tridiagonal ($p = q = 1$) and penta-diagonal ($p = q = 2$) matrices; the algorithm then simplifies a little:
Algorithm 4.16 (LU factorization of a banded matrix with $p = q$)
The top row is unchanged:
for j from 1 to 1+p
$\quad u_{1,j} = a_{1,j}$
end
The top non-zero diagonal is unchanged:
for k from 1 to n - p
$\quad u_{k,k+p} = a_{k,k+p}$
end
The left column requires no sums:
for i from 2 to 1+p
$\quad l_{i,1} = a_{i,1} / u_{1,1}$
end
The main loop:
for k from 2 to n
for j from k to min(n, k+p-1)
$\quad u_{k,j} = a_{k,j} - \sum_{s=\max(1,\, j-p)}^{k-1} l_{k,s} u_{s,j}$
end
for i from k+1 to min(n, k+p)
$\quad l_{i,k} = \left( a_{i,k} - \sum_{s=\max(1,\, i-p)}^{k-1} l_{i,s} u_{s,k} \right) \Big/ u_{k,k}$
end
end
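For completeness, here is the same sketch specialized to $p = q$ (again 0-based and without pivoting; the function name and the penta-diagonal test matrix, whose main diagonal is boosted to make it strictly diagonally dominant, are chosen only for illustration). Note how the lower limits of the sums simplify.

```python
import numpy as np

def lu_banded_pq(A, p):
    """Algorithm 4.16: LU factorization when lower and upper bandwidths are both p."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    U[0, :min(n, p + 1)] = A[0, :min(n, p + 1)]               # top row is unchanged
    for k in range(n - p):                                    # top non-zero diagonal is unchanged
        U[k, k + p] = A[k, k + p]
    L[1:min(n, p + 1), 0] = A[1:min(n, p + 1), 0] / U[0, 0]   # left column: no sums
    for k in range(1, n):
        for j in range(k, min(n, k + p)):
            s0 = max(0, j - p)                                # with q = p the limit simplifies
            U[k, j] = A[k, j] - np.dot(L[k, s0:k], U[s0:k, j])
        for i in range(k + 1, min(n, k + p + 1)):
            s0 = max(0, i - p)
            L[i, k] = (A[i, k] - np.dot(L[i, s0:k], U[s0:k, k])) / U[k, k]
    return L, U

# Penta-diagonal test case (p = q = 2) with coefficients 1, -4, 12, -4, 1.
n, p = 8, 2
coeffs = [1.0, -4.0, 12.0, -4.0, 1.0]
A = sum(np.diag(np.full(n - abs(k), coeffs[k + 2]), k) for k in range(-2, 3))
L, U = lu_banded_pq(A, p)
print(np.allclose(L @ U, A))  # expect True
```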
Strict diagonal dominance helps again#
These algorithms for banded matrices do no pivoting, and that is highly desirable, because pivoting creates non-zero elements outside the “band” and so can force one back to the general algorithm. Fortunately, we have seen one case where this is fine: the matrix being either row-wise or column-wise strictly diagonally dominant.
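As a small safeguard, one can test that property before committing to a factorization without pivoting; here is a sketch (the function name is chosen just for this example).

```python
import numpy as np

def is_strictly_diagonally_dominant(A, by_rows=True):
    """Check strict diagonal dominance, row-wise (default) or column-wise."""
    M = np.asarray(A, dtype=float)
    if not by_rows:
        M = M.T                        # column-wise dominance of A = row-wise dominance of A^T
    diag = np.abs(np.diag(M))
    off_diag = np.sum(np.abs(M), axis=1) - diag
    return bool(np.all(diag > off_diag))

# The tridiagonal example used earlier: 4 on the main diagonal, -1 on either side.
n = 6
T = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
print(is_strictly_diagonally_dominant(T),
      is_strictly_diagonally_dominant(T, by_rows=False))  # expect True True
```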