3.4. Solving \(Ax = b\) with LU factorization#

Avoiding repeated calculation, excessive rounding and messy notation: LU factorization#

Putting aside pivoting for a while, there is another direction in which the algorithm for solving linear systems \(Ax=b\) can be improved. It starts with the idea of being more efficient when solving multiple systems with the same matrix \(A\) but different right-hand sides: \(A x^{(m)} = b^{(m)}, \; m=1,2, \dots\).

However it has several other benefits:

  • allowing a strategy to reduce rounding error, and

  • a simpler, more elegant mathematical statement.

We will see how to merge this with partial pivoting in Solving Ax = b With Both Pivoting and LU Factorization.

Some useful jargon:

Definition 3.5 (Triangular matrix)

A matrix is triangular if all its non-zero elements are either on the main diagonal or to one side of it. There are two possibilities:

  • Matrix \(U\) is upper triangular if \(u_{ij} = 0\) for all \(i > j\).

  • Matrix \(L\) is lower triangular if \(l_{ij} = 0\) for all \(j > i\).

One important example of an upper triangular matrix is \(U\) formed by row reduction; note well that it is much quicker and easier to solve \(Ux = c\) than the original system \(Ax=b\) exactly because of its triangular form.

We will soon see that the multipliers \(l_{ij}\), \(i > j\) for row reduction that were introduced in Row Reduction/Gaussian Elimination help to form a very useful lower triangular matrix \(L\).

The key to the LU factorization idea is finding a lower triangular matrix \(L\) and an upper triangular matrix \(U\) such that \(L U = A\), and then using the fact that it is far quicker to solve a linear system when the corresponding matrix is triangular.

Indeed we will see that, if naive Gaussian elimination for \(Ax=b\) succeeds, giving row-reduced form \(Ux = c\):

  1. The matrix \(A\) can be factorized as \(A = LU\) with \(U\) an \(n \times n\) upper triangular matrix and \(L\) an \(n \times n\) lower triangular matrix.

  2. There is a unique such factorization with the further condition that \(L\) is unit lower triangular, which means the extra requirement that the values on its main diagonal are all unity: \(l_{k,k} = 1\). This is called the Doolittle Factorization of \(A\).

  3. In the Doolittle factorization, the matrix \(U\) is the one given by naive Gaussian elimination, and the elements of \(L\) below its main diagonal are the multipliers arising in naive Gaussian elimination. (The other elements of \(L\), on and above the main diagonal, are the ones and zeros dictated by it being unit lower triangular: the same as for those elements in the \(n \times n\) identity matrix.)

  4. The transformed right-hand side \(c\) arising from naive Gaussian elimination is the solution of the system \(Lc = b\), and this is solvable by a procedure called forward substitution, very similar to the backward substitution used to solve \(Ux = c\).

Putting all this together: if naive Gaussian elimination works for \(A\), we can introduce the name \(c\) for \(Ux\), and note that \(Ax = (LU)x = L(Ux) = Lc = b\). Then solving the system \(Ax = b\) can be done in three steps:

  1. Using \(A\), find the Doolittle factors, \(L\) and \(U\).

  2. Using \(L\) and \(b\), solve \(Lc = b\) to get \(c\). (Forward substitution)

  3. Using \(U\) and \(c\), solve \(Ux = c\) to get \(x\). (Backward substitution)
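
Once all three pieces are implemented, the whole process can be wrapped up in a few lines. Here is a minimal sketch, assuming the functions lu_factorize (defined below) and forwardsubstitution and backwardsubstitution (from module NumericalMethods, used later in this section); the wrapper name solve_by_lu is just for illustration.

# A minimal sketch of the three-step solution process; solve_by_lu is an
# illustrative name, not a function from module NumericalMethods.
function solve_by_lu(A, b)
    (L, U) = lu_factorize(A)        # Step 1: Doolittle factorization A = LU
    c = forwardsubstitution(L, b)   # Step 2: solve Lc = b for c
    x = backwardsubstitution(U, c)  # Step 3: solve Ux = c for x
    return x
end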

The direct method for the Doolittle LU factorization#

If you believe the above claims, we already have one algorithm for finding an LU factorization; basically, do naive Gaussian elimination, but ignore the right-hand side \(b\) until later. However, there is another “direct” method, which does not rely on anything we have seen before about Gaussian elimination, and has other advantages as we will see.

(If I were to teach linear algebra, I would be tempted to start here and skip Gaussian Elimination!)

This method starts by considering the apparently daunting task of solving the \(n^2\) simultaneous and nonlinear equations for the initially unknown elements of \(L\) and \(U\):

\[ \sum_{k=1}^n l_{i,k} u_{k,j} = a_{i,j}, \quad 1 \leq i \leq n,\;1 \leq j \leq n. \]

The first step is to insert the known information; the already-known values of elements of \(L\) and \(U\). For one thing, the sums above stop when either \(k=i\) or \(k=j\), whichever comes first, due to all the zeros in \(L\) and \(U\):

\[ \sum_{k=1}^{\min(i,j)} l_{i,k} u_{k,j} = a_{i,j}, \quad 1 \leq i \leq n,\;1 \leq j \leq n. \]

Next, when \(i \leq j\) — so that the sum ends at \(k=i\) and involves \(l_{i,i}\) — we can use \(l_{i,i} = 1\).

So break up into two cases:

On and above the main diagonal (\(i \leq j\), so \(\min(i,j) = i\)):

\[\sum_{k=1}^{i-1}l_{i,k} u_{k,j}+u_{i,j}=a_{i,j} \quad 1\leq i \leq n,\; i \leq j \leq n.\]

Below the main diagonal (\(i > j\), so \(\min(i,j) = j\)):

\[\sum_{k=1}^{j-1} l_{i,k} u_{k,j} + l_{i,j} u_{j,j}= a_{i,j} \quad 2 \leq i \leq n,\;1 \leq j < i.\]

In each equation, the last term in the sum has been separated, so that we can use them to “solve” for an unknown:

\[\begin{split} u_{i,j} &= a_{i,j} - \sum_{k=1}^{i-1} l_{i,k} u_{k,j}, \quad 1 \leq i \leq n,\;i \leq j \leq n, \\ l_{i,j} &= \frac{a_{i,j} - \sum_{k=1}^{j-1} l_{i,k} u_{k,j}}{u_{j,j}}, \quad 2 \leq i \leq n,\;1 \leq j < i. \end{split}\]

Here comes the characteristic step that gets us from valid equations to a useful algorithm: we can arrange these equations in an order such that all the values at right are determined by an earlier equation!

First look at what they say for the first row and first column.

With \(i=1\) in the first equation, there is no sum, and so:

\[u_{1,j}=a_{1,j}, \quad 1 \leq j \leq n,\]

which is the familiar fact that the first row is unchanged in naive Gaussian elimination.

Next, with \(j=1\) in the second equation, there is again no sum:

\[l_{i,1} = \frac{a_{i,1}}{u_{1,1}}, = \frac{a_{i,1}}{a_{1,1}}, \quad 2 \leq i \leq n,\]

which are indeed the multipliers from the first step of naive Gaussian elimination.
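
For instance, with \(n=2\) these formulas give the whole factorization in one pass:

\[ u_{1,1} = a_{1,1}, \quad u_{1,2} = a_{1,2}, \quad l_{2,1} = \frac{a_{2,1}}{u_{1,1}}, \quad u_{2,2} = a_{2,2} - l_{2,1} u_{1,2}, \]

and that last value is exactly the element left in position \((2,2)\) after the first step of row reduction.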

Remember that one way to think of Gaussian elimination is recursively: after step \(k\), one just applies the same process recursively to the smaller \((n-k) \times (n-k)\) matrix in the bottom-right-hand corner. We can do something similar here; at stage \(k\):

  1. First, use the first of the above equations to solve for row \(k\) of \(U\), meaning just \(u_{k,j}, j \geq k\),

  2. Then use the second equation to solve for column \(k\) of \(L\): \(l_{i,k}, i > k\).

Algorithm 3.5 (Doolittle factorization)

Stage \(k=1\) is handled by the simpler special equations above, and for the rest:

for k from 2 to n
\(\quad\) for j from k to n \(\qquad\) Get the non-zero elements in row \(k\) of \(U\)
\(\quad\quad\) \(u_{k,j}=a_{k,j} - \sum_{s=1}^{k-1}l_{k,s}u_{s,j}\)
\(\quad\) end
\(\quad\) for i from k+1 to n \(\qquad\) Get the non-zero elements in column \(k\) of \(L\) (except the 1’s on its diagonal)
\(\quad\quad\) \(l_{i,k}=\displaystyle\frac{a_{i,k}-\sum_{s=1}^{k-1}l_{i,s}u_{s,k}}{u_{k,k}}\)
\(\quad\) end
end

Note well that in the formulas to evaluate at the right,

  1. The terms \(l_{k,s}\) are for \(s < k\), so from a column \(s\) that has already been computed for a previous \(k\) value.

  2. The terms \(u_{s,j}\) are for \(s < k\), so from a row \(s\) that has already been computed for a previous \(k\) value.

  3. The denominator \(u_{k,k}\) in the second inner loop is computed just in time, in the first inner loop for the same \(k\) value.

So the only thing that can go wrong is the same as with Gaussian elimination: a zero pivot element \(u_{k,k}\). For example, any matrix with \(a_{1,1} = 0\) but some \(a_{i,1} \neq 0\) gives \(u_{1,1} = 0\) at the very first step, so the formula for \(l_{i,1}\) fails; as with naive Gaussian elimination, such cases call for pivoting.

Remark 3.14 (On this algorithm)

  1. For \(k=n\), the second inner loop is redundant, so could be eliminated. Indeed it might need to be eliminated in actual code, where “empty loops” might not be allowed. On the other hand, allowing empty loops makes the above correct also for \(k=1\); then the for k loop encompasses the entire factorization algorithm.

  2. This direct factorization algorithm avoids any intermediate modification of arrays, and thus eliminates all those superscripts like \(a_{i,j}^{(k)}\). This is not only nicer mathematically, but can help to avoid mistakes like code that inadvertently modifies the array containing the matrix \(A\) and then uses it to compute the residual, \(b - Ax\). More generally, such purely mathematical statements of algorithms can help to avoid coding errors; this is part of the philosophy of the functional programming approach.

  3. Careful examination shows that the product \(l_{k,s} u_{s,j}\) that is part of what is subtracted at location \((k,j)\) is the same as what is subtracted there at stage \(k\) of Gaussian elimination, just with different names. More generally, every piece of arithmetic is the same as before, except arranged in a different order, so that the \(k-1\) changes made to an element in row \(k\) are done together, via those sums.

include("NumericalMethods.jl")
using .NumericalMethods: printmatrix
function lu_factorize(A; demomode=false)
    # Compute the Doolittle LU factorization of A.
    # Sums like $\sum_{s=1}^{k-1} l_{k,s} u_{s,j}$ are done as matrix products;
    # in the above case, row matrix L[k, 1:k-1] by column matrix U[1:k-1,j] gives the sum for a given j,
    # and row matrix L[k, 1:k-1] by matrix U[1:k-1,k:n] gives the relevant row vector.
    
    n = size(A)[1]  # First component of the array's size; size(A) returns "(rows, columns)"
    # Initialize U as a zero matrix;
    # correct below the main diagonal, with the other entries to be computed and filled below.
    U = zeros(n,n)
    
    # Initialize L as a zero matrix;
    # correct above the main diagonal, with the other entries to be computed and filled in below.
    L = zeros(n,n)
    
    # The first row and column are special:
    U[1,:] = A[1,:]
    L[1,1] = 1.0
    L[2:end,1] = A[2:end,1]/U[1,1]
    if demomode
        println("After step k=1")
        println("U="); printmatrix(U)
        println("L="); printmatrix(L)
    end;
    for k in 2:n-1
        # Julia note: it is necessary to use indices "[k]" and so on to get one-row matrices instead of vectors.
        U[[k],k:end] = A[[k],k:end] - L[[k],1:k] * U[1:k,k:end]
        L[k,k] = 1.0
        L[k+1:end,k] = (A[k+1:end,k] - L[k+1:end,1:k] * U[1:k,k]) / U[k,k]
        if demomode
            println("After step k=$k")
            println("U="); printmatrix(U)
            println("L="); printmatrix(L)
        end;
    end;
    # The last row is also special: not much to do for L.
    L[end,end] = 1.0
    U[end,end] = A[end,end] - sum(L[[n],1:end-1] * U[1:end-1,end])
    if demomode
        println("After step k=$n")
        println("U="); printmatrix(U)
    end;
    return L, U
end;

A test case on LU factorization#

It will be useful to compute vector and matrix norms as a measure of error; in particular the “maximum” or “infinity” norm of a vector v is given by norm(v, Inf).

using LinearAlgebra: norm
A = [4.0 2.0 7.0; 3.0 5.0 -6.0; 1.0 -3.0 2.0]
printmatrix(A)
[ 4.0 2.0 7.0 
  3.0 5.0 -6.0 
  1.0 -3.0 2.0 ]
(L, U) = lu_factorize(A, demomode=true);
After step k=1
U=
[ 4.0 2.0 7.0 
  0.0 0.0 0.0 
  0.0 0.0 0.0 ]
L=
[ 1.0 0.0 0.0 
  0.75 0.0 0.0 
  0.25 0.0 0.0 ]
After step k=2
U=
[ 4.0 2.0 7.0 
  0.0 3.5 -11.25 
  0.0 0.0 0.0 ]
L=
[ 1.0 0.0 0.0 
  0.75 1.0 0.0 
  0.25 -1.0 0.0 ]
After step k=3
U=
[ 4.0 2.0 7.0 
  0.0 3.5 -11.25 
  0.0 0.0 -11.0 ]
println("A is"); printmatrix(A)
println("L is"); printmatrix(L)
println("U is"); printmatrix(U)
println("L times U is"); printmatrix(L*U)
println("The 'residual' or 'backward error' A - LU is"); printmatrix(A - L*U)
A is
[ 4.0 2.0 7.0 
  3.0 5.0 -6.0 
  1.0 -3.0 2.0 ]
L is
[ 1.0 0.0 0.0 
  0.75 1.0 0.0 
  0.25 -1.0 1.0 ]
U is
[ 4.0 2.0 7.0 
  0.0 3.5 -11.25 
  0.0 0.0 -11.0 ]
L times U is
[ 4.0 2.0 7.0 
  3.0 5.0 -6.0 
  1.0 -3.0 2.0 ]
The 'residual' or 'backward error' A - LU is
[ 0.0 0.0 0.0 
  0.0 0.0 0.0 
  0.0 0.0 0.0 ]

Forward substitution: solving \(Lc = b\) for \(c\)#

This is the last piece missing. The strategy is very similar to backward substitution, but slightly simplified by the ones on the main diagonal of \(L\). The equations \(L c = b\) can be written much as above, separating off the last term in the sum:

\[ \sum_{j=1}^{n} l_{i,j} c_j = b_i,\; 1 \leq i \leq n \]
\[ \sum_{j=1}^{i} l_{i,j} c_j = b_i,\; 1 \leq i \leq n \]
\[ \sum_{j=1}^{i-1} l_{i,j} c_j + c_i = b_i,\; 1 \leq i \leq n \]

Then solve for \(c_i\):

\[ c_i = b_i - \sum_{j=1}^{i-1} l_{i,j} c_j \]

These are already in a usable order: the right-hand side in the equation for \(c_i\) involves only the \(c_j\) values with \(j < i\), which are determined by earlier equations if we run through the index \(i\) in increasing order.

First, \(i=1\)

\[ c_1 = b_1 - \sum_{j=1}^{0} l_{1,j} c_j, = b_1 \]

Next, \(i=2\)

\[ c_2 = b_2 - \sum_{j=1}^{1} l_{2,j} c_j, = b_2 - l_{2,1}c_1 \]

Next, \(i=3\)

\[ c_3 = b_3 - \sum_{j=1}^{2} l_{3,j} c_j, = b_3 - l_{3,1} c_1 - l_{3,2} c_2 \]

See Exercise 1.
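
Meanwhile, here is a minimal sketch of the above strategy for unit lower triangular \(L\); it is deliberately not in the explicit-sum form that Exercise 1 asks for (the sums are done with vectorized operations), and the name forwardsubstitution_sketch is just illustrative.

# A minimal sketch of forward substitution for unit lower triangular L;
# the name forwardsubstitution_sketch is illustrative only, and the sums are
# vectorized rather than written as the explicit loops asked for in Exercise 1.
function forwardsubstitution_sketch(L, b)
    n = length(b)
    c = zeros(n)
    c[1] = b[1]
    for i in 2:n
        c[i] = b[i] - sum(L[i,1:i-1] .* c[1:i-1])  # c_i = b_i - sum over j<i of l_{i,j} c_j
    end
    return c
end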

As usual, there is also an implementation available from module NumericalMethods, as forwardsubstitution, and that is used here. (It is not in the form asked for in the above exercise!)

using .NumericalMethods: forwardsubstitution

A test case on forward substitution#

b = [2.0, 3.0, 4.0];
c = forwardsubstitution(L, b)
print(c)
[2.0, 1.5, 5.0]
println("c = $c")
println("The residual b - Lc is $(b - L*c)")
println("\t with maximum norm $(norm(b - L*c,  Inf))")
c = [2.0, 1.5, 5.0]
The residual b - Lc is [0.0, 0.0, 0.0]
	 with maximum norm 0.0

Completing the test case, with backward substitution#

As this step is unchanged, we can just import the version seen in Row Reduction/Gaussian Elimination

using .NumericalMethods: backwardsubstitution
x = backwardsubstitution(U, c);
residual_cUx = c - U*x
println("The residual c - Ux for the backward substitution step is $residual_cUx")
println("\t with maximum norm $(norm(residual_cUx, Inf))")
residual_bAx = b - A*x
println("The residual b - Ax for the whole solving process is $residual_bAx")
println("\t with maximum norm $(norm(residual_bAx, Inf))")
The residual c - Ux for the backward substitution step is [0.0, -2.220446049250313e-16, 0.0]
	 with maximum norm 2.220446049250313e-16
The residual b - Ax for the whole solving process is [0.0, 0.0, 8.881784197001252e-16]
	 with maximum norm 8.881784197001252e-16

See Exercise 2.

As an example of creating and using a module, I am creating one for this course, NumericalMethods.jl; see Module NumericalMethods. For now these two modules will overlap, but your version will contain code that you create in exercises that is not in NumericalMethods.

When does LU factorization work?#

It was seen in the section Partial Pivoting that naive Gaussian elimination works for strictly diagonally dominant (SDD) matrices (in the sense of avoiding division by zero), so one good result is that:

Theorem 3.4

Any SDD matrix has a Doolittle factorization \(A=LU\), with the diagonal elements of \(U\) all non-zero, so backward substitution also works.

For any column-wise SDD matrix, this LU factorization exists and is also “optimal”, in the sense that it follows what you would do with maximal element partial pivoting.
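
(As a quick aside, strict diagonal dominance is easy to check in Julia; this small sketch uses the illustrative names is_sdd and is_columnwise_sdd, which are not in module NumericalMethods.)

# A small sketch checking strict diagonal dominance by rows, and by columns via the transpose.
# The names is_sdd and is_columnwise_sdd are illustrative only.
function is_sdd(A)
    n = size(A)[1]
    all(abs(A[i,i]) > sum(abs.(A[i,1:n])) - abs(A[i,i]) for i in 1:n)
end
is_columnwise_sdd(A) = is_sdd(transpose(A))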

This nice second property can be got for SDD matrices via a twist, or actually a transpose.

For an SDD matrix, its transpose \(B = A^T\) is column-wise SDD and so has the nice Doolittle factorization described above: \(B = L_B U_B\), with \(L_B\) being column-wise diagonally dominant and having ones on the main diagonal.

Transposing back, \(A = B^T = (L_B U_B)^T = U_B^T L_B^T\), and defining \(L = U_B^T\) and \(U = L_B^T\),

  • \(L\) is lower triangular

  • \(U\) is upper triangular, row-wise diagonally dominant and with ones on its main diagonal: it is “unit upper triangular”.

  • Thus \(LU\) is another LU factorization of \(A\), with \(U\) rather than \(L\) being the factor with ones on its main diagonal.

Crout decomposition#

This sort of \(LU\) factorization is called the Crout decomposition; as with the Doolittle version, if such a factorization exists, it is unique.
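
Since the construction above uses only the Doolittle factorization of the transpose, it can be carried out with the lu_factorize function from earlier in this section. Here is a minimal sketch; the matrix A_sdd and the names L_crout, U_crout are just illustrative.

# A minimal sketch: the Crout decomposition of an SDD matrix, obtained from the
# Doolittle factorization of its transpose using lu_factorize and printmatrix from above.
# The matrix A_sdd is an illustrative SDD example, not one from the text.
A_sdd = [4.0 -1.0 2.0; 1.0 5.0 -1.0; 0.0 2.0 6.0]
(L_B, U_B) = lu_factorize(Matrix(transpose(A_sdd)))
L_crout = Matrix(transpose(U_B))  # lower triangular
U_crout = Matrix(transpose(L_B))  # unit upper triangular
println("The residual A - LU for the Crout decomposition is"); printmatrix(A_sdd - L_crout*U_crout)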

Theorem 3.5

Every SDD matrix has a Crout decomposition, and the factor \(U\) is SDD.

Remark 3.15

As was mentioned at the end of the section Row Reduction/Gaussian Elimination, naive Gaussian elimination also works for positive definite matrices, and thus so does the Doolittle LU factorization. However, there is another LU factorization that works even better in that case, the Cholesky factorization; this topic might be returned to later.


Exercises#

Exercise 1#

A) Express the forward substitution strategy as pseudo-code; spell out all the sums explicitly rather than using ‘\(\Sigma\)’ notation or any matrix multiplication short-cut.

B) Then implement it “directly” in a Julia function, with format:

function forwardSubstitution(L, b)
    . . .
    return c
end

Again do this with explicit evaluation of each sum rather than using the function sum or any matrix multiplication short-cut.

C) Test it, using this often-useful “reverse-engineering” tactic:

  1. Create suitable test arrays L and c. (Use \(n\) at least three, and preferably larger.)

  2. Compute their product, with b = L * c

  3. Check if c_solution = forwardSubstitution(L, b) gives the correct value (within rounding error.)

Exercise 2#

(An ongoing activity.)

Start building a Julia module — I suggest the name MyNumericalMethods — in a file named by adding the suffix “.jl” to the module name (e.g. MyNumericalMethods.jl). Put into it all the functions that you create as you work through this book; for now, just your version of forwardSubstitution(L, b), along with backwardSubstitution from a previous section and luFactorize from above.

The syntax of the module file is like this:

module NumericalMethods
function rowReduce(A, b)
    ...
end;
function forwardSubstitution(L, b)
    ...
end;
function backwardSubstitution(U, c)
    ...
end;
end