 Suppose that v_{1} and v_{2} are linearly independent eigenvectors of the iteration matrix B, with corresponding eigenvalues λ_{1} and λ_{2}, and suppose that the initial error can be written as a linear combination of these two vectors: e^{(0)} = c_{1}v_{1} + c_{2}v_{2}. Recall that the error after iteration k is e^{(k)} = Be^{(k−1)}, so that

e^{(1)} = Be^{(0)} = B(c_{1}v_{1} + c_{2}v_{2}) = c_{1}λ_{1}v_{1} + c_{2}λ_{2}v_{2}.
 Prove (by induction) that e^{(k)} = c_{1}λ_{1}^{k}v_{1} + c_{2}λ_{2}^{k}v_{2}.
 Show that if |λ_{1}| < 1 and |λ_{2}| < 1, then e^{(k)} → 0 as k → ∞. Use the fact that

||e^{(k)}|| = ||c_{1}λ_{1}^{k}v_{1} + c_{2}λ_{2}^{k}v_{2}|| ≤ |c_{1}| |λ_{1}|^{k} ||v_{1}|| + |c_{2}| |λ_{2}|^{k} ||v_{2}||.
This shows that the initial error, and thus the initial guess, don't matter that much. What matters most is the eigenvalues of B, which tell you both whether the error will go to 0 and, if so, how quickly. More generally, what matters is the size of ||B||, which is directly related to the eigenvalues.
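The bound above can be checked numerically. The sketch below uses illustrative eigenvalues, eigenvectors, and coefficients (they are assumptions, not values from any exercise here): when both eigenvalues have magnitude less than 1, both terms of the bound shrink geometrically, so the error norm goes to 0.

```python
# Sketch of the bound ||e^(k)|| <= |c1||l1|^k ||v1|| + |c2||l2|^k ||v2||.
# The eigenvalues, eigenvectors, and coefficients are illustrative choices.
import math

def norm(v):
    return math.sqrt(v[0]**2 + v[1]**2)

l1, l2 = 0.8, -0.4                     # hypothetical, both of magnitude < 1
v1, v2 = [1.0, -1.0], [1.0, 1.0]       # hypothetical eigenvectors of B
c1, c2 = 2.0, -3.0                     # so that e^(0) = c1 v1 + c2 v2

for k in range(1, 30):
    ek = [c1 * l1**k * v1[i] + c2 * l2**k * v2[i] for i in range(2)]
    bound = abs(c1) * abs(l1)**k * norm(v1) + abs(c2) * abs(l2)**k * norm(v2)
    assert norm(ek) <= bound + 1e-12   # triangle inequality
```

The error norm decays roughly like the larger |eigenvalue|^k, here 0.8^k.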
 In this problem, you will see that the order of the equations in a system can affect the performance of an iterative solution method.
 For the system of equations
use the applet to compute five iterations with both the Jacobi and Gauss-Seidel Methods, using initial value x^{(0)} = (0.5, 0.5), and record your results. For each method, complete a table like the one below, which shows the first set of entries for Jacobi's Method. To better see the results, click the Default Axes button and then Zoom out a few times.
k    x^{(k)}                  error^{(k)}
1    (0.000000, 3.000000)     3.162278
2
3
4
5
Also, find the eigenvalues of the iteration matrix B for each method, and use this information to explain why the approximations do not converge for either method.
 Now rearrange (swap) the two equations, so that the system has the form
,
and use the applet to compute five iterations, again making a table for each method. This time the approximations produced by the iterations should converge. Find the eigenvalues of the new iteration matrix B for each method, and use this information to explain why the approximations converge in this case. To better see the results, click the Default Axes button and then Zoom in a few times.
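The two methods the applet runs can be sketched in a few lines of code. The system A, b below is a hypothetical stand-in, not the system from this exercise (which is not reproduced here); the point is the difference between the two update rules.

```python
# Minimal Jacobi and Gauss-Seidel sketches for a 2x2 system Ax = b.
# The system below is a hypothetical stand-in, not this exercise's system.

def jacobi_step(A, b, x):
    """One Jacobi sweep: both components use the OLD iterate x."""
    return [(b[0] - A[0][1] * x[1]) / A[0][0],
            (b[1] - A[1][0] * x[0]) / A[1][1]]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: the second component uses the NEW first one."""
    x0 = (b[0] - A[0][1] * x[1]) / A[0][0]
    x1 = (b[1] - A[1][0] * x0) / A[1][1]
    return [x0, x1]

A = [[2.0, 1.0], [1.0, 2.0]]    # hypothetical, strictly diagonally dominant
b = [3.0, 0.0]                  # exact solution x = (2, -1)
xj = [0.5, 0.5]                 # Jacobi iterate
xg = [0.5, 0.5]                 # Gauss-Seidel iterate
for k in range(5):
    xj = jacobi_step(A, b, xj)
    xg = gauss_seidel_step(A, b, xg)
# both sequences approach (2, -1); Gauss-Seidel gets there faster here
```

The only difference is whether the second equation is solved using the old or the newly updated first component.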
 In this problem you will investigate how strict diagonal dominance (or the lack of it) in the coefficient matrix affects whether the Jacobi and/or Gauss-Seidel Methods work.
 Show that if a 2 x 2 coefficient matrix
is strictly diagonally dominant, then the eigenvalues of the iteration matrices B corresponding to the Jacobi and Gauss-Seidel Methods are of magnitude less than 1 (which guarantees convergence, as you found in Exercise 7).
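Before doing the algebra, the claim can be sanity-checked numerically. For a 2 x 2 matrix, the Jacobi iteration matrix has eigenvalues ±sqrt(a_{12}a_{21}/(a_{11}a_{22})) and the Gauss-Seidel iteration matrix has eigenvalues 0 and a_{12}a_{21}/(a_{11}a_{22}); the matrix A below is a hypothetical strictly diagonally dominant example.

```python
# Closed-form eigenvalues of the 2x2 Jacobi and Gauss-Seidel iteration
# matrices, checked on a hypothetical strictly diagonally dominant A.
import cmath

def jacobi_gs_eigs(A):
    mu = A[0][1] * A[1][0] / (A[0][0] * A[1][1])
    r = cmath.sqrt(mu)                 # complex sqrt, in case mu < 0
    return [r, -r], [0.0, mu]          # (Jacobi eigs, Gauss-Seidel eigs)

A = [[5.0, 2.0], [3.0, 4.0]]           # |5| > |2| and |4| > |3|: s.d.d.
jac, gs = jacobi_gs_eigs(A)
assert all(abs(l) < 1 for l in jac)    # magnitudes < 1: Jacobi converges
assert all(abs(l) < 1 for l in gs)     # magnitudes < 1: Gauss-Seidel converges
```

Strict diagonal dominance forces |a_{12}a_{21}| < |a_{11}a_{22}|, which is exactly what makes both sets of magnitudes less than 1.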
 In Exercise 8 you have two coefficient matrices A, one for each ordering of the equations. Comment on how the diagonal dominance (or not) of each matrix A relates to the convergence (or not) results you found in Exercise 8.
 Give a 2 x 2 system of equations in which the coefficient matrix A is strictly diagonally dominant, and compute the first five iterations to help verify that both methods produce approximations that are converging to the exact solution. Make a table similar to the one in Exercise 8. It doesn’t matter what you choose for b or for the initial guess x^{(0)}.
 Consider Jacobi’s Method for solving the system Ax = b, where
 Verify that the iteration matrix B corresponding to the Jacobi Method for this system is

B = [   0    − 0.5 ]
    [ − 0.5    0   ],
and verify that the eigenvalues for this matrix are λ_{1} = 0.5 and λ_{2} = − 0.5, with corresponding eigenvectors v_{1} = (1, − 1) and v_{2} = (1, 1). (All you need to show is that Bv_{1} = λ_{1}v_{1} and Bv_{2} = λ_{2}v_{2}.)
 Suppose that b = (0, 0), so that the true solution is x = (0, 0). If we choose x^{(0)} = (5, − 1), then e^{(0)} = x − x^{(0)} = (− 5, 1). Verify that e^{(0)} = c_{1}v_{1} + c_{2}v_{2}, where c_{1} = − 3 and c_{2} = − 2, and where v_{1} = (1, − 1) and v_{2} = (1, 1) are the eigenvectors of B.
 Use the fact that e^{(k)} = c_{1}λ_{1}^{k} v_{1} + c_{2}λ_{2}^{k} v_{2} to show that

e^{(k)} = − 3(0.5)^{k}(1, − 1) − 2(− 0.5)^{k}(1, 1).
Compute the errors e^{(1)}, e^{(2)}, e^{(3)}, e^{(4)}, and e^{(5)} in five iterations of the Jacobi Method starting from x^{(0)} = (5, − 1), as in (b). Check your work by using the applet to find the first five iterations when using Jacobi's Method for this system and initial guess, and sketch or print the applet's graph of the five approximations.
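A quick cross-check of (b) and (c): iterate e^{(k)} = Be^{(k−1)} directly and compare with the closed form. The entries of B below are reconstructed from the eigenpairs stated in (a) (Bv_{1} = 0.5v_{1}, Bv_{2} = − 0.5v_{2}), so treat them as inferred rather than quoted.

```python
# Iterate e^(k) = B e^(k-1) and compare with the closed form
# e^(k) = c1*l1^k*v1 + c2*l2^k*v2.  B is reconstructed from the stated
# eigenpairs (B v1 = 0.5 v1, B v2 = -0.5 v2): an inferred value.

B = [[0.0, -0.5],
     [-0.5, 0.0]]
c1, c2 = -3.0, -2.0
l1, l2 = 0.5, -0.5
v1, v2 = [1.0, -1.0], [1.0, 1.0]

e = [-5.0, 1.0]                        # e^(0) = x - x^(0)
errors = []
for k in range(1, 6):
    e = [B[0][0] * e[0] + B[0][1] * e[1],      # e^(k) = B e^(k-1)
         B[1][0] * e[0] + B[1][1] * e[1]]
    closed = [c1 * l1**k * v1[i] + c2 * l2**k * v2[i] for i in range(2)]
    assert all(abs(e[i] - closed[i]) < 1e-12 for i in range(2))
    errors.append(tuple(e))
# errors[0] is e^(1) = (-0.5, 2.5); errors[4] is e^(5) = (-0.03125, 0.15625)
```

Notice how the components flip sign in alternating fashion because λ_{2} is negative.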
 Now suppose that b = (3, 0), so that the true solution is x = (2, − 1). Suppose also that x^{(0)} = (7, − 2), so that we happen to have the same initial error, e^{(0)} = x − x^{(0)} = (− 5, 1), as in (b) and (c). Use the applet to find the first five iterations and corresponding errors e^{(1)}, e^{(2)}, e^{(3)}, e^{(4)}, and e^{(5)} for this new system. (The coefficient matrix A is the same, and thus the iteration matrix B is also the same; only the right-hand side b is changed.) Again sketch or print the graph that shows the approximations. Compare your results here to your results in (c), both the actual errors e^{(1)}, e^{(2)}, e^{(3)}, e^{(4)}, and e^{(5)} and the graph of the results.
 For this same coefficient matrix A, suppose that any b and x^{(0)} are chosen so that the initial error is e^{(0)} = x − x^{(0)} = (− 5, 1), as in (b)-(d). What would you predict the errors e^{(1)}, e^{(2)}, e^{(3)}, e^{(4)}, and e^{(5)} to be?
 Support your answer in (e) by finding any b and x^{(0)} such that e^{(0)} = (− 5, 1) (you choose b and x^{(0)} from the infinite number of possibilities), and then use the applet to compute the errors e^{(1)}, e^{(2)}, e^{(3)}, e^{(4)}, and e^{(5)}. Hint: First choose x (which will give you b, since Ax = b), and then choose x^{(0)} so that e^{(0)} = x − x^{(0)}, that is, x^{(0)} = x − (− 5, 1). Sketch or print the applet's graph of the five approximations, and compare this to your graphs from (c) and (d).
 Do Exercise 10 with the Gauss-Seidel Method instead of the Jacobi Method. You will need to find B and its eigenvalues and eigenvectors, as well as c_{1} and c_{2}.
 Each of the following systems has the same solution.
Consider solving each using Jacobi’s Method. Since the coefficient matrices A are different, the iteration matrices B in each case are different, and thus the convergence properties of Jacobi's Method in each case are different.
 Using x^{(0)} = (4, 1) for each system, use the applet to do five iterations. (You'll probably need to Zoom out to see the results for the third system.) For all three systems, record the error after each iteration, and describe what the graph in the applet is showing. (You may sketch or print the applet's graph and comment on what you see.)
 For each of the three systems, find the iteration matrix B and its eigenvalues, and use the eigenvalues to explain the results in (a). Note: the "circular" behavior of the Jacobi Method for the second system occurs because the eigenvalues of B for this system are complex.
 Do Exercise 12 with the Gauss-Seidel Method instead of Jacobi’s Method.
 The rate of convergence corresponds to how much the error is reduced by performing another iteration. For this problem we will think of the rate of convergence as how much the error shrinks with each iteration; in other words, it is the ratio of the new error to the old error: ||e^{(k)}|| / ||e^{(k−1)}||. So, for example, if r = ||e^{(k)}|| / ||e^{(k−1)}|| = 0.7, then ||e^{(k)}|| = 0.7||e^{(k−1)}||. Obviously we would always want r < 1 in order for the error to go to 0.
We will consider the convergence rate of Jacobi’s Method for solving
Recall that ||e^{(k)}|| = ||Be^{(k−1)}|| ≈ |λ_{max}| ||e^{(k−1)}||, where λ_{max} is the largest (in magnitude) eigenvalue of the iteration matrix B. In general (for larger systems), this approximate relationship often does not hold for the first few iterations, but the approximation often becomes more accurate in later iterations. It turns out that for 2 x 2 systems, if both eigenvalues are equal in magnitude, |λ_{1}| = |λ_{2}|, then ||e^{(k)}|| = |λ_{max}| ||e^{(k−1)}|| exactly and immediately, which also means that the rate of convergence at every step would be ||e^{(k)}|| / ||e^{(k−1)}|| = |λ_{max}|. For
A = [ a_{11}  a_{12} ]
    [ a_{21}  a_{22} ],

if |a_{11}| = |a_{22}| and |a_{12}| = |a_{21}|, it turns out that the two eigenvalues of the iteration matrix B for Jacobi's Method will be of the same magnitude, as can be seen on page 7, Analysis of Jacobi and GS Methods.
 Find the two eigenvalues of the iteration matrix B corresponding to Jacobi’s Method for the 2 x 2 system described above.
 Verify the relationship ||e^{(k)}|| / ||e^{(k−1)}|| = |λ_{max}| by completing a table like the one below (use the applet to do the work) and then comparing these results to the eigenvalues found in (a). Use x^{(0)} = (0, 0).
k    ||e^{(k)}||    ||e^{(k)}|| / ||e^{(k−1)}||
0    1.414214       −
1    0.942809       0.666666
2
3
4
In general, what would ||e^{(k)}|| / ||e^{(k−1)}|| be for

A = [ a  b ]
    [ b  a ] ?

What conditions on a and b would guarantee convergence (that is, what would guarantee that ||e^{(k)}|| / ||e^{(k−1)}|| < 1)? How does this relate to Exercise 8(a)?
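The constant step-to-step ratio can be seen concretely in code. The exercise's system is not reproduced above; the stand-in A = [[3, 2], [2, 3]], b = (5, 5) below (exact solution (1, 1)) is an assumption chosen only because it is consistent with the table's values ||e^{(0)}|| = 1.414214 and ratio 0.666666.

```python
# Convergence-rate sketch: for a 2x2 system with |a11| = |a22| and
# |a12| = |a21|, the Jacobi error ratio ||e^(k)|| / ||e^(k-1)|| equals
# |a12/a11| at every step.  A and b are an assumed stand-in consistent
# with the table's values; the exact solution is x = (1, 1).
import math

A = [[3.0, 2.0], [2.0, 3.0]]
b = [5.0, 5.0]
x_true = [1.0, 1.0]

x = [0.0, 0.0]                                   # x^(0)
prev = math.hypot(x_true[0] - x[0], x_true[1] - x[1])   # ||e^(0)||
for k in range(1, 5):
    x = [(b[0] - A[0][1] * x[1]) / A[0][0],      # one Jacobi sweep
         (b[1] - A[1][0] * x[0]) / A[1][1]]
    err = math.hypot(x_true[0] - x[0], x_true[1] - x[1])
    assert abs(err / prev - 2/3) < 1e-12         # ratio = |a12/a11| = 2/3
    prev = err
```

Here the initial error happens to be an eigenvector of B, but the constant ratio holds for any starting error because both eigenvalues have the same magnitude.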

Do Exercise 14 with the Gauss-Seidel Method instead of Jacobi’s Method.
 In this problem you will use the SOR Method to solve the 2 x 2 system
 Generally we choose a value for ω such that 1 < ω < 2. For each value of ω = 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, and 1.9, solve the system with x^{(0)} = (0, 0) as the initial guess, and record the error after k = 10 iterations. Use the applet to perform the iterations. (Be sure to click the Initial guess button after entering each new value of ω.) You do not need to write down the results at each step — this time we are interested only in the final result for each value of ω — complete a table like the one below. Based on your results, which value of ω seems to result in the fastest convergence? That is, which one gives the smallest error after 10 iterations? Note: this is not necessarily the optimal value of ω — it is simply the best choice among the nine values that we considered.
ω      error^{(10)}
1.1    0.000664
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
 Repeat (a), except use x^{(0)} = (−20, 10) as your initial guess.
 Explain why the best value of ω is the same in both (a) and (b).
 For each initial guess, x^{(0)} = (0, 0) and x^{(0)} = (−20, 10), solve the system using the Gauss-Seidel Method (which is equivalent to the SOR Method for ω = 1), and for both cases record the error after 10 iterations. For which values of ω in part (a) does the SOR Method produce faster convergence (that is, a smaller error after 10 iterations) than the Gauss-Seidel Method, and for which values is the convergence worse?
 For this system, it turns out that for ω ≥ 2, the SOR Method will actually diverge. Again using the applet, do 50 iterations of the SOR Method using ω = 2 and ω = 2.05 and describe (or sketch/print and comment on) the results.
 In this problem you will continue exploring the SOR Method for solving the 2 x 2 system
 Find the B matrix for the SOR Method for this system (as a function of ω).
 The characteristic equation for this B matrix is
One could use the quadratic formula to find the two eigenvalues (as functions of ω) that satisfy det(B − λI) = 0. It turns out that if the two eigenvalues are the same, then ||B|| = |λ_{max}| is minimized, which results in the fastest possible convergence for the SOR Method. The two eigenvalues will be the same when the discriminant b^{2} − 4ac = 0 in the characteristic equation. Ignoring the factor of 1/16 in front, we want to examine the discriminant of aλ^{2} + bλ + c, where a = 1, b = −(9ω^{2} − 32ω + 32), and c = 256(ω − 1)^{2}. In particular, we want to find where b^{2} = 4ac, that is,
(9ω^{2} − 32ω + 32)^{2} = 4(256)(ω − 1)^{2} = 1024(ω − 1)^{2}.

We can simplify this last equation to 9ω^{2} − 64ω + 64 = 0. Find the value of ω that results in the two eigenvalues being the same by solving for ω in 9ω^{2} − 64ω + 64 = 0, keeping in mind that we want the value that satisfies 1 < ω < 2.
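The quadratic formula does the rest. A short check (only the stated equation 9ω^{2} − 64ω + 64 = 0 is used; nothing new is assumed):

```python
# Solve 9w^2 - 64w + 64 = 0 with the quadratic formula; only one of the
# two roots lies in the required interval 1 < w < 2.
import math

a, b, c = 9.0, -64.0, 64.0
disc = math.sqrt(b*b - 4*a*c)          # sqrt(4096 - 2304) = sqrt(1792)
roots = [(-b - disc) / (2*a), (-b + disc) / (2*a)]
w_opt = [r for r in roots if 1 < r < 2][0]
# w_opt = (32 - 8*sqrt(7))/9, approximately 1.2038
```

The other root, approximately 5.91, falls outside 1 < ω < 2 and is discarded.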
 If Exercise 16 was assigned, compare the value you just found in 17(b) to the result you found in Exercise 16 for an optimal value of ω [of the nine ω values in 16(a)].
 Warning: this problem is for the determined, patient, and strong-hearted student. Consider the SOR Method for solving the general 2 x 2 system
 Find the B matrix and its eigenvalues (as functions of a_{11}, a_{12}, a_{21}, and a_{22} ).
 Find the value of ω that results in the two eigenvalues being the same. (This is the value of ω that minimizes ||B|| = |λ_{max}|.) This ω is a function of a_{11}, a_{12}, a_{21}, and a_{22}. What is the single eigenvalue (with algebraic multiplicity 2) in this case?
 For the ω found in (b), what conditions on a_{11}, a_{12}, a_{21}, and a_{22} guarantee that ||B|| = |λ_{max}| < 1?
David M. Strong, "Iterative Methods for Solving Ax = b - Exercises, Part 2: All Methods," Convergence (July 2005)