### Sample Chapter

**COMPLETE TEXTBOOK SOLUTION WITH ANSWERS**

**ORDERS WILL BE DELIVERED WITHIN A FEW HOURS**

**Solution Manual for Linear Algebra with Applications, 9th Edition, by Steven J. Leon**

**SAMPLE QUESTIONS**

**Linear Algebra with Applications**

**Ninth Edition**

**Steven J. Leon**

**University of Massachusetts, Dartmouth**

**Chapter 1: Matrices and Systems of Equations**

SECTION 1.1 EXERCISES

- Use back substitution to solve each of the following systems of equations:

**(a) ***x*1 − 3*x*2 = 2

2*x*2 = 6

**(b) ***x*1 + *x*2 + *x*3 = 8

2*x*2 + *x*3 = 5

3*x*3 = 9

**(c) ***x*1 + 2*x*2 + 2*x*3 + *x*4 = 5

3*x*2 + *x*3 − 2*x*4 = 1

−*x*3 + 2*x*4 = −1

4*x*4 = 4

**(d) ***x*1 + *x*2 + *x*3 + *x*4 + *x*5 = 5

2*x*2 + *x*3 − 2*x*4 + *x*5 = 1

4*x*3 + *x*4 − 2*x*5 = 1

*x*4 − 3*x*5 = 0

2*x*5 = 2

- Write out the coefficient matrix for each of the

systems in Exercise 1.

- In each of the following systems, interpret each

equation as a line in the plane. For each system,

graph the lines and determine geometrically the

number of solutions.

**(a) ***x*1 + *x*2 = 4

*x*1 − *x*2 = 2

**(b) ***x*1 + 2*x*2 = 4

−2*x*1 − 4*x*2 = 4

**(c) **2*x*1 − *x*2 = 3

−4*x*1 + 2*x*2 = −6

**(d) ***x*1 + *x*2 = 1

*x*1 − *x*2 = 1

−*x*1 + 3*x*2 = 3

- Write an augmented matrix for each of the systems

in Exercise 3.

- Write out the system of equations that corresponds

to each of the following augmented matrices:

**(a)**
⎡ 3 2 8 ⎤
⎣ 1 5 7 ⎦

**(b)**
⎡ 5 −2 1 3 ⎤
⎣ 2 3 −4 0 ⎦

**(c)**
⎡ 2 1 4 −1 ⎤
⎢ 4 −2 3 4 ⎥
⎣ 5 2 6 −1 ⎦

**(d)**
⎡ 4 −3 1 2 4 ⎤
⎢ 3 1 −5 6 5 ⎥
⎢ 1 1 2 4 8 ⎥
⎣ 5 1 3 −2 7 ⎦

- Solve each of the following systems.

**(a)** *x*1 − 2*x*2 = 5
3*x*1 + *x*2 = 1

**(b)** 2*x*1 + *x*2 = 8
4*x*1 − 3*x*2 = 6

**(c)** 4*x*1 + 3*x*2 = 4
(2/3)*x*1 + 4*x*2 = 3

**(d)** *x*1 + 2*x*2 − *x*3 = 1
2*x*1 − *x*2 + *x*3 = 3
−*x*1 + 2*x*2 + 3*x*3 = 7

**(e)** 2*x*1 + *x*2 + 3*x*3 = 1
4*x*1 + 3*x*2 + 5*x*3 = 1
6*x*1 + 5*x*2 + 5*x*3 = −3

**(f)** 3*x*1 + 2*x*2 + *x*3 = 0
−2*x*1 + *x*2 − *x*3 = 2
2*x*1 − *x*2 + 2*x*3 = −1

**(g)** (1/3)*x*1 + (2/3)*x*2 + 2*x*3 = −1
*x*1 + 2*x*2 + (3/2)*x*3 = 3/2
(1/2)*x*1 + 2*x*2 + (12/5)*x*3 = 1/10

**(h)** *x*2 + *x*3 + *x*4 = 0
3*x*1 + 3*x*3 − 4*x*4 = 7
*x*1 + *x*2 + *x*3 + 2*x*4 = 6
2*x*1 + 3*x*2 + *x*3 + 3*x*4 = 6

- The two systems

2*x*1 + *x*2 = 3
4*x*1 + 3*x*2 = 5

and

2*x*1 + *x*2 = −1
4*x*1 + 3*x*2 = 1

have the same coefficient matrix but different right-hand sides. Solve both systems simultaneously by eliminating the first entry in the second row of the augmented matrix

⎡ 2 1 3 −1 ⎤
⎣ 4 3 5 1 ⎦

and then performing back substitutions for each of the columns corresponding to the right-hand sides.

- Solve the two systems

*x*1 + 2*x*2 − 2*x*3 = 1

2*x*1 + 5*x*2 + *x*3 = 9

*x*1 + 3*x*2 + 4*x*3 = 9

*x*1 + 2*x*2 − 2*x*3 = 9

2*x*1 + 5*x*2 + *x*3 = 9

*x*1 + 3*x*2 + 4*x*3 = −2

by doing elimination on a 3 × 5 augmented matrix

and then performing two back substitutions.

- Given a system of the form

−*m*1*x*1 + *x*2 = *b*1

−*m*2*x*1 + *x*2 = *b*2

where *m*1, *m*2, *b*1, and *b*2 are constants:

**(a)** Show that the system will have a unique solution if *m*1 ≠ *m*2.

**(b) **Show that if *m*1 = *m*2, then the system will be

consistent only if *b*1 = *b*2.

**(c) **Give a geometric interpretation of parts (a) and

(b).

- Consider a system of the form

*a*11*x*1 + *a*12*x*2 = 0

*a*21*x*1 + *a*22*x*2 = 0

where *a*11, *a*12, *a*21, and *a*22 are constants. Explain

why a system of this form must be consistent.

- Give a geometrical interpretation of a linear equation

in three unknowns. Give a geometrical description

of the possible solution sets for a 3 × 3 linear

system.
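Back substitution, used throughout the exercises above, can be sketched in a few lines of Python. This is a minimal illustration, not code from the text; `back_substitute` is a name of our own, and the sample data is the upper triangular system of Exercise 1(b).

```python
# Back substitution for an upper triangular system Ux = b.
# Sample system (Exercise 1(b) above):
#   x1 + x2 + x3 = 8,  2x2 + x3 = 5,  3x3 = 9.

def back_substitute(U, b):
    """Solve Ux = b where U is upper triangular with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-solved unknowns, then divide by the pivot
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[1, 1, 1],
     [0, 2, 1],
     [0, 0, 3]]
b = [8, 5, 9]
print(back_substitute(U, b))  # [4.0, 1.0, 3.0]
```

The loop runs from the last equation upward, exactly mirroring the hand procedure the exercise asks for.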

SECTION 1.2 EXERCISES

- Which of the matrices that follow are in row echelon

form? Which are in reduced row echelon

form?

**(a)**
⎡ 1 2 3 4 ⎤
⎣ 0 0 1 2 ⎦

**(b)**
⎡ 1 0 0 ⎤
⎢ 0 0 0 ⎥
⎣ 0 0 1 ⎦

**(c)**
⎡ 1 3 0 ⎤
⎢ 0 0 1 ⎥
⎣ 0 0 0 ⎦

**(d)**
⎡ 0 1 ⎤
⎢ 0 0 ⎥
⎣ 0 0 ⎦

**(e)**
⎡ 1 1 1 ⎤
⎢ 0 1 2 ⎥
⎣ 0 0 3 ⎦

**(f)**
⎡ 1 4 6 ⎤
⎢ 0 0 1 ⎥
⎣ 0 1 3 ⎦

**(g)**
⎡ 1 0 0 1 2 ⎤
⎢ 0 1 0 2 4 ⎥
⎣ 0 0 1 3 6 ⎦

**(h)**
⎡ 0 1 3 4 ⎤
⎢ 0 0 1 3 ⎥
⎣ 0 0 0 0 ⎦

- The augmented matrices that follow are in row

echelon form. For each case, indicate whether the

corresponding linear system is consistent. If the

system has a unique solution, find it.

**(a)**
⎡ 1 2 4 ⎤
⎢ 0 1 3 ⎥
⎣ 0 0 1 ⎦

**(b)**
⎡ 1 3 1 ⎤
⎢ 0 1 −1 ⎥
⎣ 0 0 0 ⎦

**(c)**
⎡ 1 −2 4 1 ⎤
⎢ 0 0 1 3 ⎥
⎣ 0 0 0 0 ⎦

**(d)**
⎡ 1 −2 2 −2 ⎤
⎢ 0 1 −1 3 ⎥
⎣ 0 0 1 2 ⎦

**(e)**
⎡ 1 3 2 −2 ⎤
⎢ 0 0 1 4 ⎥
⎣ 0 0 0 1 ⎦


**(f)**
⎡ 1 −1 3 8 ⎤
⎢ 0 1 2 7 ⎥
⎢ 0 0 1 2 ⎥
⎣ 0 0 0 0 ⎦

- The augmented matrices that follow are in reduced

row echelon form. In each case, find the solution

set to the corresponding linear system.

**(a)**
⎡ 1 0 0 −2 ⎤
⎢ 0 1 0 5 ⎥
⎣ 0 0 1 3 ⎦

**(b)**
⎡ 1 4 0 2 ⎤
⎢ 0 0 1 3 ⎥
⎣ 0 0 0 1 ⎦

**(c)**
⎡ 1 −3 0 2 ⎤
⎢ 0 0 1 −2 ⎥
⎣ 0 0 0 0 ⎦

**(d)**
⎡ 1 2 0 1 5 ⎤
⎣ 0 0 1 3 4 ⎦

**(e)**
⎡ 1 5 −2 0 3 ⎤
⎢ 0 0 0 1 6 ⎥
⎢ 0 0 0 0 0 ⎥
⎣ 0 0 0 0 0 ⎦

**(f)**
⎡ 0 1 0 2 ⎤
⎢ 0 0 1 −1 ⎥
⎣ 0 0 0 0 ⎦

- For each of the systems in Exercise 3, make a list

of the lead variables and a second list of the free

variables.

- For each of the systems of equations that follow,

use Gaussian elimination to obtain an equivalent

system whose coefficient matrix is in row echelon

form. Indicate whether the system is consistent.

If the system is consistent and involves no free

variables, use back substitution to find the unique

solution. If the system is consistent and there are

free variables, transform it to reduced row echelon

form and find all solutions.

**(a) ***x*1 − 2*x*2 = 3

2*x*1 − *x*2 = 9

**(b) **2*x*1 − 3*x*2 = 5

−4*x*1 + 6*x*2 = 8

**(c) ***x*1 + *x*2 = 0

2*x*1 + 3*x*2 = 0

3*x*1 − 2*x*2 = 0

**(d)** 3*x*1 + 2*x*2 − *x*3 = 4
*x*1 − 2*x*2 + 2*x*3 = 1
11*x*1 + 2*x*2 + *x*3 = 14

**(e) **2*x*1 + 3*x*2 + *x*3 = 1

*x*1 + *x*2 + *x*3 = 3

3*x*1 + 4*x*2 + 2*x*3 = 4

**(f) ***x*1 − *x*2 + 2*x*3 = 4

2*x*1 + 3*x*2 − *x*3 = 1

7*x*1 + 3*x*2 + 4*x*3 = 7

**(g)** *x*1 + *x*2 + *x*3 + *x*4 = 0
2*x*1 + 3*x*2 − *x*3 − *x*4 = 2
3*x*1 + 2*x*2 + *x*3 + *x*4 = 5
3*x*1 + 6*x*2 − *x*3 − *x*4 = 4

**(h) ***x*1 − 2*x*2 = 3

2*x*1 + *x*2 = 1

−5*x*1 + 8*x*2 = 4

**(i) **−*x*1 + 2*x*2 − *x*3 = 2

−2*x*1 + 2*x*2 + *x*3 = 4

3*x*1 + 2*x*2 + 2*x*3 = 5

−3*x*1 + 8*x*2 + 5*x*3 = 17

**(j)** *x*1 + 2*x*2 − 3*x*3 + *x*4 = 1
−*x*1 − *x*2 + 4*x*3 − *x*4 = 6
−2*x*1 − 4*x*2 + 7*x*3 − *x*4 = 1

**(k)** *x*1 + 3*x*2 + *x*3 + *x*4 = 3
2*x*1 − 2*x*2 + *x*3 + 2*x*4 = 8
*x*1 − 5*x*2 + *x*4 = 5

**(l) ***x*1 − 3*x*2 + *x*3 = 1

2*x*1 + *x*2 − *x*3 = 2

*x*1 + 4*x*2 − 2*x*3 = 1

5*x*1 − 8*x*2 + 2*x*3 = 5

- Use Gauss–Jordan reduction to solve each of the

following systems.

**(a) ***x*1 + *x*2 = −1

4*x*1 − 3*x*2 = 3

**(b) ***x*1 + 3*x*2 + *x*3 + *x*4 = 3

2*x*1 − 2*x*2 + *x*3 + 2*x*4 = 8

3*x*1 + *x*2 + 2*x*3 − *x*4 = −1

**(c) ***x*1 + *x*2 + *x*3 = 0

*x*1 − *x*2 − *x*3 = 0

**(d) ***x*1 + *x*2 + *x*3 + *x*4 = 0

2*x*1 + *x*2 − *x*3 + 3*x*4 = 0

*x*1 − 2*x*2 + *x*3 + *x*4 = 0

- Give a geometric explanation of why a homogeneous

linear system consisting of two equations in

three unknowns must have infinitely many solutions.

What are the possible numbers of solutions

of a nonhomogeneous 2 × 3 linear system? Give a

geometric explanation of your answer.

- Consider a linear system whose augmented matrix

is of the form

⎡ 1 2 1 1 ⎤
⎢ −1 4 3 2 ⎥
⎣ 2 −2 *a* 3 ⎦

For what values of *a *will the system have a unique

solution?

- Consider a linear system whose augmented matrix

is of the form

⎡ 1 2 1 0 ⎤
⎢ 2 5 3 0 ⎥
⎣ −1 1 *β* 0 ⎦

**(a) **Is it possible for the system to be inconsistent?

Explain.

**(b) **For what values of *β *will the system have

infinitely many solutions?

- Consider a linear system whose augmented matrix

is of the form

⎡ 1 1 3 2 ⎤
⎢ 1 2 4 3 ⎥
⎣ 1 3 *a* *b* ⎦

**(a) **For what values of *a *and *b *will the system have

infinitely many solutions?

**(b) **For what values of *a *and *b *will the system be

inconsistent?

- Given the linear systems

**(i) ***x*1 + 2*x*2 = 2

3*x*1 + 7*x*2 = 8

**(ii) ***x*1 + 2*x*2 = 1

3*x*1 + 7*x*2 = 7

solve both systems by incorporating the right-hand

sides into a 2 × 2 matrix *B *and computing the

reduced row echelon form of

(*A*|*B*) =
⎡ 1 2 2 1 ⎤
⎣ 3 7 8 7 ⎦

- Given the linear systems

**(i) ***x*1 + 2*x*2 + *x*3 = 2

−*x*1 − *x*2 + 2*x*3 = 3

2*x*1 + 3*x*2 = 0

**(ii) ***x*1 + 2*x*2 + *x*3 = −1

−*x*1 − *x*2 + 2*x*3 = 2

2*x*1 + 3*x*2 = −2

solve both systems by computing the row echelon

form of an augmented matrix (*A*|*B*) and performing

back substitution twice.

- Given a homogeneous system of linear equations,

if the system is overdetermined, what are

the possibilities as to the number of solutions?

Explain.

- Given a nonhomogeneous system of linear equations,

if the system is underdetermined, what are

the possibilities as to the number of solutions?

Explain.

- Determine the values of *x*1, *x*2, *x*3, *x*4 for the following traffic flow diagram.

*(Figure: a traffic network with unknown branch flows *x*1, *x*2, *x*3, *x*4 and fixed flows of 380, 400, 420, 420, 430, 450, 470, and 540; the diagram is not reproduced here.)*

- Consider the traffic flow diagram that follows,

where *a*1, *a*2, *a*3, *a*4, *b*1, *b*2, *b*3, *b*4 are fixed positive

integers. Set up a linear system in the unknowns

*x*1, *x*2, *x*3, *x*4 and show that the system will be

consistent if and only if

*a*1 + *a*2 + *a*3 + *a*4 = *b*1 + *b*2 + *b*3 + *b*4

What can you conclude about the number of

automobiles entering and leaving the traffic

network?

- Let (*c*1, *c*2) be a solution of the 2 × 2 system

*a*11*x*1 + *a*12*x*2 = 0

*a*21*x*1 + *a*22*x*2 = 0

Show that for any real number *α *the ordered pair

(*α**c*1, *α**c*2) is also a solution.

- In Application 3 the solution (6, 6, 6, 1) was obtained

by setting the free variable *x*4 = 1.

**(a)** Determine the solution corresponding to *x*4 = 0. What information, if any, does this solution

give about the chemical reaction? Is the term

“trivial solution” appropriate in this case?

**(b) **Choose some other values of *x*4, such as 2, 4, or

5, and determine the corresponding solutions.

How are these nontrivial solutions related?

- Liquid benzene burns in the atmosphere. If a cold

object is placed directly over the benzene, water

will condense on the object and a deposit of soot

(carbon) will also form on the object. The chemical

equation for this reaction is of the form

*x*1C6H6 + *x*2O2 → *x*3C + *x*4H2O

Determine values of *x*1, *x*2, *x*3, and *x*4 to balance the

equation.

- Nitric acid is prepared commercially by a series of

three chemical reactions. In the first reaction, nitrogen

(N2) is combined with hydrogen (H2) to form

ammonia (NH3). Next the ammonia is combined

with oxygen (O2) to form nitrogen dioxide (NO2)

and water. Finally, the NO2 reacts with some of the

water to form nitric acid (HNO3) and nitric oxide

(NO). The amounts of each of the components of

these reactions are measured in moles (a standard

unit of measurement for chemical reactions). How

many moles of nitrogen, hydrogen, and oxygen are

necessary to produce 8 moles of nitric acid?

- In Application 4, determine the relative values of

*x*1, *x*2, and *x*3 if the distribution of goods is as

described in the following table.

|       | *F* | *M* | *C* |
|-------|-----|-----|-----|
| *F*   | 1/3 | 1/3 | 1/3 |
| *M*   | 1/3 | 1/2 | 1/6 |
| *C*   | 1/3 | 1/6 | 1/2 |

- Determine the amount of each current for the following

networks:

**(a)** *(Network with nodes A and B, currents *i*1, *i*2, *i*3, resistors of 3 ohms, 2 ohms, and 2 ohms, and a 16-volt battery; diagram not reproduced here.)*

**(b)** *(Network with nodes A and B, currents *i*1, *i*2, *i*3, resistors of 2 ohms, 4 ohms, and 2 ohms, and a 20-volt battery; diagram not reproduced here.)*

**(c)** *(Network with nodes A, B, C, D, currents *i*1 through *i*6, resistors of 4, 4, 5, and 2 ohms, and batteries of 8 volts and 10 volts; diagram not reproduced here.)*

SECTION 1.3 EXERCISES

- If

*A* =
⎡ 3 1 4 ⎤
⎢ −2 0 1 ⎥
⎣ 1 2 2 ⎦

and *B* =
⎡ 1 0 2 ⎤
⎢ −3 1 1 ⎥
⎣ 2 −4 1 ⎦

compute

**(a)** 2*A*  **(b)** *A* + *B*

**(c)** 2*A* − 3*B*  **(d)** (2*A*)ᵀ − (3*B*)ᵀ

**(e)** *AB*  **(f)** *BA*

**(g)** *A*ᵀ*B*ᵀ  **(h)** (*BA*)ᵀ

- For each of the pairs of matrices that follow, determine

whether it is possible to multiply the first

matrix times the second. If it is possible, perform

the multiplication.

**(a)**
⎡ 3 5 1 ⎤
⎣ −2 0 2 ⎦
·
⎡ 2 1 ⎤
⎢ 1 3 ⎥
⎣ 4 1 ⎦

**(b)**
⎡ 4 −2 ⎤
⎢ 6 −4 ⎥
⎣ 8 −6 ⎦
·
( 1 2 3 )

**(c)**
⎡ 1 4 3 ⎤
⎢ 0 1 4 ⎥
⎣ 0 0 2 ⎦
·
⎡ 3 2 ⎤
⎢ 1 1 ⎥
⎣ 4 5 ⎦

**(d)**
⎡ 4 6 ⎤
⎣ 2 1 ⎦
·
⎡ 3 1 5 ⎤
⎣ 4 1 6 ⎦

**(e)**
⎡ 4 6 1 ⎤
⎣ 2 1 1 ⎦
·
⎡ 3 1 5 ⎤
⎣ 4 1 6 ⎦

**(f)**
⎡ 2 ⎤
⎢ −1 ⎥
⎣ 3 ⎦
·
( 3 2 4 5 )

- For which of the pairs in Exercise 2 is it possible to

multiply the second matrix times the first, and what

would the dimension of the product matrix be?

- Write each of the following systems of equations

as a matrix equation:

**(a) **3*x*1 + 2*x*2 = 1

2*x*1 − 3*x*2 = 5

**(b) ***x*1 + *x*2 = 5

2*x*1 + *x*2 − *x*3 = 6

3*x*1 − 2*x*2 + 2*x*3 = 7

**(c) **2*x*1 + *x*2 + *x*3 = 4

*x*1 − *x*2 + 2*x*3 = 2

3*x*1 − 2*x*2 − *x*3 = 0

- If

*A* =
⎡ 3 4 ⎤
⎢ 1 1 ⎥
⎣ 2 7 ⎦

verify that

**(a) **5*A *= 3*A *+ 2*A ***(b) **6*A *= 3(2*A*)

**(c) **(*A**T *)*T *= *A*

- If

*A* =
⎡ 4 1 6 ⎤
⎣ 2 3 5 ⎦

and *B* =
⎡ 1 3 0 ⎤
⎣ −2 2 −4 ⎦

verify that

**(a) ***A *+ *B *= *B *+ *A*

**(b) **3(*A *+ *B*) = 3*A *+ 3*B*

**(c) **(*A *+ *B*)*T *= *A**T *+ *B**T*

- If

*A* =
⎡ 2 1 ⎤
⎢ 6 3 ⎥
⎣ −2 4 ⎦

and *B* =
⎡ 2 4 ⎤
⎣ 1 6 ⎦

verify that

**(a) **3(*AB*) = (3*A*)*B *= *A*(3*B*),

**(b) **(*AB*)*T *= *B**T**A**T*

- If

*A* =
⎡ 2 4 ⎤
⎣ 1 3 ⎦
, *B* =
⎡ −2 1 ⎤
⎣ 0 4 ⎦
, *C* =
⎡ 3 1 ⎤
⎣ 2 1 ⎦

verify that

**(a) **(*A *+ *B*) + *C *= *A *+ (*B *+ *C*)

**(b) **(*AB*)*C *= *A*(*BC*)

**(c) ***A*(*B *+ *C*) = *AB *+ *AC*

**(d) **(*A *+ *B*)*C *= *AC *+ *BC*

- Let

*A* =
⎡ 1 2 ⎤
⎣ 1 −2 ⎦
, **b** =
⎡ 4 ⎤
⎣ 0 ⎦
, **c** =
⎡ −3 ⎤
⎣ −2 ⎦

**(a) **Write **b **as a linear combination of the column

vectors **a**1 and **a**2.

**(b) **Use the result from part (a) to determine a solution

of the linear system *A***x **= **b**. Does the

system have any other solutions? Explain.

**(c) **Write **c **as a linear combination of the column

vectors **a**1 and **a**2.

- For each of the choices of *A* and **b** that follow, determine whether the system *A***x** = **b** is consistent by examining how **b** relates to the column vectors of *A*. Explain your answers in each case.

**(a)** *A* =
⎡ 2 1 ⎤
⎣ −2 −1 ⎦
, **b** =
⎡ 3 ⎤
⎣ 1 ⎦

**(b)** *A* =
⎡ 1 4 ⎤
⎣ 2 3 ⎦
, **b** =
⎡ 5 ⎤
⎣ 5 ⎦

**(c)** *A* =
⎡ 3 2 1 ⎤
⎢ 3 2 1 ⎥
⎣ 3 2 1 ⎦
, **b** =
⎡ 1 ⎤
⎢ 0 ⎥
⎣ −1 ⎦

- Let *A* be a 5 × 3 matrix. If

**b **= **a**1 + **a**2 = **a**2 + **a**3

then what can you conclude about the number of

solutions of the linear system *A***x **= **b**? Explain.

- Let *A* be a 3 × 4 matrix. If

**b **= **a**1 + **a**2 + **a**3 + **a**4

then what can you conclude about the number of

solutions to the linear system *A***x **= **b**? Explain.

- Let *A***x** = **b** be a linear system whose augmented matrix (*A*|**b**) has reduced row echelon form

⎡ 1 2 0 3 1 −2 ⎤
⎢ 0 0 1 2 4 5 ⎥
⎢ 0 0 0 0 0 0 ⎥
⎣ 0 0 0 0 0 0 ⎦

**(a)** Find all solutions to the system.

**(b)** If

**a**1 =
⎡ 1 ⎤
⎢ 1 ⎥
⎢ 3 ⎥
⎣ 4 ⎦
and **a**3 =
⎡ 2 ⎤
⎢ −1 ⎥
⎢ 1 ⎥
⎣ 3 ⎦

determine **b**.

- Suppose in the search and screen example in

Application 2 the committee decides that research

is actually 1.5 times as important as teaching and 3

times as important as professional activities. The

committee still rates teaching twice as important

as professional activities. Determine a new weight

vector **w **that reflects these revised priorities. Determine

also a new rating vector **r**. Will the new

weights have any effect on the overall rankings of

the candidates?

- Let *A* be an *m* × *n* matrix. Explain why the matrix multiplications *A*ᵀ*A* and *AA*ᵀ are possible.

- A matrix *A* is said to be *skew symmetric* if *A*ᵀ = −*A*. Show that if a matrix is skew symmetric, then its diagonal entries must all be 0.

- In Application 3, suppose that we are searching

the database of seven linear algebra books for the

search words *elementary, matrix, algebra*. Form a

search vector **x**, and then compute a vector **y **that

represents the results of the search. Explain the

significance of the entries of the vector **y**.

- Let *A* be a 2 × 2 matrix with *a*11 ≠ 0 and let *α* = *a*21/*a*11. Show that *A* can be factored into a product of the form

⎡ 1 0 ⎤ ⎡ *a*11 *a*12 ⎤
⎣ *α* 1 ⎦ ⎣ 0 *b* ⎦

What is the value of *b*?
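Several exercises in this section verify identities such as (*AB*)ᵀ = *B*ᵀ*A*ᵀ numerically. A small sketch with hand-rolled helpers (`matmul` and `transpose` are names of our own, not from the text), using the matrices of Exercise 7:

```python
# Checking the transpose identity (AB)^T = B^T A^T on a concrete pair,
# with matrices stored as plain lists of rows.

def matmul(A, B):
    """Product of an m x n and an n x p matrix stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[2, 1], [6, 3], [-2, 4]]   # the 3 x 2 matrix A of Exercise 7
B = [[2, 4], [1, 6]]            # the 2 x 2 matrix B of Exercise 7

lhs = transpose(matmul(A, B))                 # (AB)^T, a 2 x 3 matrix
rhs = matmul(transpose(B), transpose(A))      # B^T A^T, also 2 x 3
print(lhs == rhs)  # True
```

Note the dimension bookkeeping: *AB* is 3 × 2, so both sides of the identity are 2 × 3, which is why the factors must reverse order.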

SECTION 1.4 EXERCISES

- Explain why each of the following algebraic rules will not work in general when the real numbers *a* and *b* are replaced by *n* × *n* matrices *A* and *B*.

**(a)** (*a* + *b*)² = *a*² + 2*ab* + *b*²

**(b)** (*a* + *b*)(*a* − *b*) = *a*² − *b*²

- Will the rules in Exercise 1 work if *a* is replaced by an *n* × *n* matrix *A* and *b* is replaced by the *n* × *n* identity matrix *I*?

- Find nonzero 2 × 2 matrices *A* and *B* such that *AB* = *O*.

- Find nonzero matrices *A*, *B*, and *C* such that *AC* = *BC* and *A* ≠ *B*.

- The matrix

*A* =
⎡ 1 −1 ⎤
⎣ 1 −1 ⎦

has the property that *A*² = *O*. Is it possible for a nonzero symmetric 2 × 2 matrix to have this property? Prove your answer.

- Prove the associative law of multiplication for 2 × 2 matrices; that is, let

*A* =
⎡ *a*11 *a*12 ⎤
⎣ *a*21 *a*22 ⎦
, *B* =
⎡ *b*11 *b*12 ⎤
⎣ *b*21 *b*22 ⎦
, *C* =
⎡ *c*11 *c*12 ⎤
⎣ *c*21 *c*22 ⎦

and show that

(*AB*)*C* = *A*(*BC*)

- Let

*A* =
⎡ 1/2 −1/2 ⎤
⎣ −1/2 1/2 ⎦

Compute *A*² and *A*³. What will *A*ⁿ turn out to be?

- Let

*A* =
⎡ 1/2 −1/2 −1/2 −1/2 ⎤
⎢ −1/2 1/2 −1/2 −1/2 ⎥
⎢ −1/2 −1/2 1/2 −1/2 ⎥
⎣ −1/2 −1/2 −1/2 1/2 ⎦

Compute *A*² and *A*³. What will *A*²ⁿ and *A*²ⁿ⁺¹ turn out to be?

- Let

*A* =
⎡ 0 1 0 0 ⎤
⎢ 0 0 1 0 ⎥
⎢ 0 0 0 1 ⎥
⎣ 0 0 0 0 ⎦

Show that *A*ⁿ = *O* for *n* ≥ 4.

- Let *A* and *B* be symmetric *n* × *n* matrices. For each of the following, determine whether the given matrix must be symmetric or could be nonsymmetric:

**(a)** *C* = *A* + *B*  **(b)** *D* = *A*²

**(c)** *E* = *AB*  **(d)** *F* = *ABA*

**(e)** *G* = *AB* + *BA*  **(f)** *H* = *AB* − *BA*

- Let *C* be a nonsymmetric *n* × *n* matrix. For each of the following, determine whether the given matrix must necessarily be symmetric or could possibly be nonsymmetric:

**(a)** *A* = *C* + *C*ᵀ  **(b)** *B* = *C* − *C*ᵀ

**(c)** *D* = *C*ᵀ*C*  **(d)** *E* = *C*ᵀ*C* − *CC*ᵀ

**(e)** *F* = (*I* + *C*)(*I* + *C*ᵀ)

**(f)** *G* = (*I* + *C*)(*I* − *C*ᵀ)

- Let

*A* =
⎡ *a*11 *a*12 ⎤
⎣ *a*21 *a*22 ⎦

Show that if *d* = *a*11*a*22 − *a*21*a*12 ≠ 0, then

*A*−1 = (1/*d*)
⎡ *a*22 −*a*12 ⎤
⎣ −*a*21 *a*11 ⎦

- Use the result from Exercise 12 to find the inverse of each of the following matrices:

**(a)**
⎡ 7 2 ⎤
⎣ 3 1 ⎦

**(b)**
⎡ 3 5 ⎤
⎣ 2 3 ⎦

**(c)**
⎡ 4 3 ⎤
⎣ 2 2 ⎦

- Let *A* and *B* be *n* × *n* matrices. Show that if

*AB* = *A* and *B* ≠ *I*

then *A* must be singular.

- Let *A* be a nonsingular matrix. Show that *A*−1 is also nonsingular and (*A*−1)−1 = *A*.

- Prove that if *A* is nonsingular, then *A*ᵀ is nonsingular and

(*A*ᵀ)−1 = (*A*−1)ᵀ

*Hint*: (*AB*)ᵀ = *B*ᵀ*A*ᵀ.

- Let *A* be an *n* × *n* matrix and let **x** and **y** be vectors in Rⁿ. Show that if *A***x** = *A***y** and **x** ≠ **y**, then the matrix *A* must be singular.

- Let *A* be a nonsingular *n* × *n* matrix. Use mathematical induction to prove that *A*ᵐ is nonsingular and

(*A*ᵐ)−1 = (*A*−1)ᵐ

for *m* = 1, 2, 3, . . . .

- Let *A* be an *n* × *n* matrix. Show that if *A*² = *O*, then *I* − *A* is nonsingular and (*I* − *A*)−1 = *I* + *A*.

- Let *A* be an *n* × *n* matrix. Show that if *A*ᵏ⁺¹ = *O*, then *I* − *A* is nonsingular and

(*I* − *A*)−1 = *I* + *A* + *A*² + · · · + *A*ᵏ

- Given

*R* =
⎡ cos *θ* −sin *θ* ⎤
⎣ sin *θ* cos *θ* ⎦

show that *R* is nonsingular and *R*−1 = *R*ᵀ.

- An *n* × *n* matrix *A* is said to be an *involution* if *A*² = *I*. Show that if *G* is any matrix of the form

*G* =
⎡ cos *θ* sin *θ* ⎤
⎣ sin *θ* −cos *θ* ⎦

then *G* is an involution.

- Let **u** be a unit vector in Rⁿ (i.e., **u**ᵀ**u** = 1) and let *H* = *I* − 2**uu**ᵀ. Show that *H* is an involution.

- A matrix *A* is said to be *idempotent* if *A*² = *A*. Show that each of the following matrices is idempotent.

**(a)**
⎡ 1 0 ⎤
⎣ 1 0 ⎦

**(b)**
⎡ 2/3 1/3 ⎤
⎣ 2/3 1/3 ⎦

**(c)**
⎡ 1/4 1/4 1/4 ⎤
⎢ 1/4 1/4 1/4 ⎥
⎣ 1/2 1/2 1/2 ⎦

- Let *A* be an idempotent matrix.

**(a)** Show that *I* − *A* is also idempotent.

**(b)** Show that *I* + *A* is nonsingular and (*I* + *A*)−1 = *I* − (1/2)*A*

- Let *D* be an *n* × *n* diagonal matrix whose diagonal entries are either 0 or 1.

**(a)** Show that *D* is idempotent.

**(b)** Show that if *X* is a nonsingular matrix and *A* = *XDX*−1, then *A* is idempotent.

- Let *A* be an involution matrix and let

*B* = (1/2)(*I* + *A*) and *C* = (1/2)(*I* − *A*)

Show that *B* and *C* are both idempotent and *BC* = *O*.

- Let *A* be an *m* × *n* matrix. Show that *A*ᵀ*A* and *AA*ᵀ are both symmetric.

- Let *A* and *B* be symmetric *n* × *n* matrices. Prove that *AB* = *BA* if and only if *AB* is also symmetric.

- Let *A* be an *n* × *n* matrix and let

*B* = *A* + *A*ᵀ and *C* = *A* − *A*ᵀ

**(a) **Show that *B *is symmetric and *C *is skew

symmetric.

**(b) **Show that every *n *× *n *matrix can be represented

as a sum of a symmetric matrix and a

skew-symmetric matrix.

- In Application 1, how many married women and how many single women will there be after 3 years?

- Consider the matrix

*A* =
⎡ 0 1 0 1 1 ⎤
⎢ 1 0 1 1 0 ⎥
⎢ 0 1 0 0 1 ⎥
⎢ 1 1 0 0 1 ⎥
⎣ 1 0 1 1 0 ⎦

**(a)** Draw a graph that has *A* as its adjacency matrix. Be sure to label the vertices of the graph.

**(b) **By inspecting the graph, determine the number

of walks of length 2 from *V*2 to *V*3 and from *V*2

to *V*5.

**(c) **Compute the second row of *A*3 and use it to determine

the number of walks of length 3 from

*V*2 to *V*3 and from *V*2 to *V*5.

- Consider the graph

*(Figure: a graph on the five vertices *V*1, *V*2, *V*3, *V*4, *V*5; diagram not reproduced here.)*

**(a) **Determine the adjacency matrix *A *of the

graph.

**(b) **Compute *A*2. What do the entries in the first

row of *A*2 tell you about walks of length 2 that

start from *V*1?

**(c) **Compute *A*3. How many walks of length 3

are there from *V*2 to *V*4? How many walks of

length less than or equal to 3 are there from *V*2

to *V*4?

*For each of the conditional statements that follow,*

*answer *true *if the statement is always true and*

*answer *false *otherwise. In the case of a true statement,*

*explain or prove your answer. In the case of*

*a false statement, give an example to show that the*

*statement is not always true.*

- If *A***x** = *B***x** for some nonzero vector **x**, then the matrices *A* and *B* must be equal.

- If *A* and *B* are singular *n* × *n* matrices, then *A* + *B* is also singular.

- If *A* and *B* are nonsingular matrices, then (*AB*)ᵀ is nonsingular and

((*AB*)ᵀ)−1 = (*A*−1)ᵀ(*B*−1)ᵀ
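The shift matrix in one of the exercises above (ones on the superdiagonal) is nilpotent: each multiplication pushes the band of ones one diagonal higher, so the fourth power is the zero matrix. A quick numerical check, a sketch of our own using plain list-of-rows matrices:

```python
# Verifying A^n = O for n >= 4 for the 4 x 4 shift matrix.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(A, n):
    """Naive repeated multiplication A^n for n >= 1."""
    P = A
    for _ in range(n - 1):
        P = matmul(P, A)
    return P

A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]

O = [[0] * 4 for _ in range(4)]
print(mat_pow(A, 3) == O, mat_pow(A, 4) == O)  # False True
```

*A*³ still has a single 1 in its upper-right corner, which is why the exponent 4 in the exercise is sharp.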

SECTION 1.5 EXERCISES

- Which of the matrices that follow are elementary

matrices? Classify each elementary matrix by type.

**(a)**
⎡ 0 1 ⎤
⎣ 1 0 ⎦

**(b)**
⎡ 2 0 ⎤
⎣ 0 3 ⎦

**(c)**
⎡ 1 0 0 ⎤
⎢ 0 1 0 ⎥
⎣ 5 0 1 ⎦

**(d)**
⎡ 1 0 0 ⎤
⎢ 0 5 0 ⎥
⎣ 0 0 1 ⎦

- Find the inverse of each matrix in Exercise 1. For

each elementary matrix, verify that its inverse is an

elementary matrix of the same type.

- For each of the following pairs of matrices, find an

elementary matrix *E *such that *EA *= *B*.

**(a)** *A* =
⎡ 2 −1 ⎤
⎣ 5 3 ⎦
, *B* =
⎡ −4 2 ⎤
⎣ 5 3 ⎦

**(b)** *A* =
⎡ 2 1 3 ⎤
⎢ −2 4 5 ⎥
⎣ 3 1 4 ⎦
, *B* =
⎡ 2 1 3 ⎤
⎢ 3 1 4 ⎥
⎣ −2 4 5 ⎦

**(c)** *A* =
⎡ 4 −2 3 ⎤
⎢ 1 0 2 ⎥
⎣ −2 3 1 ⎦
, *B* =
⎡ 4 −2 3 ⎤
⎢ 1 0 2 ⎥
⎣ 0 3 5 ⎦

- For each of the following pairs of matrices, find an

elementary matrix *E *such that *AE *= *B*.

**(a)** *A* =
⎡ 4 1 3 ⎤
⎢ 2 1 4 ⎥
⎣ 1 3 2 ⎦
, *B* =
⎡ 3 1 4 ⎤
⎢ 4 1 2 ⎥
⎣ 2 3 1 ⎦

**(b)** *A* =
⎡ 2 4 ⎤
⎣ 1 6 ⎦
, *B* =
⎡ 2 −2 ⎤
⎣ 1 3 ⎦

**(c)** *A* =
⎡ 4 −2 3 ⎤
⎢ −2 4 2 ⎥
⎣ 6 1 −2 ⎦
, *B* =
⎡ 2 −2 3 ⎤
⎢ −1 4 2 ⎥
⎣ 3 1 −2 ⎦

- Let

*A* =
⎡ 1 2 4 ⎤
⎢ 2 1 3 ⎥
⎣ 1 0 2 ⎦
, *B* =
⎡ 1 2 4 ⎤
⎢ 2 1 3 ⎥
⎣ 2 2 6 ⎦
, *C* =
⎡ 1 2 4 ⎤
⎢ 0 −1 −3 ⎥
⎣ 2 2 6 ⎦

**(a) **Find an elementary matrix *E *such that

*EA *= *B*.

**(b) **Find an elementary matrix *F *such that

*FB *= *C*.

**(c) **Is *C *row equivalent to *A*? Explain.

- Let

*A* =
⎡ 2 1 1 ⎤
⎢ 6 4 5 ⎥
⎣ 4 1 3 ⎦

**(a)** Find elementary matrices *E*1, *E*2, *E*3 such that

*E*3*E*2*E*1*A* = *U*

where *U* is an upper triangular matrix.

**(b)** Determine the inverses of *E*1, *E*2, *E*3 and set *L* = *E*1−1*E*2−1*E*3−1. What type of matrix is *L*? Verify that *A* = *LU*.

- Let

*A* =
⎡ 2 1 ⎤
⎣ 6 4 ⎦

**(a)** Express *A*−1 as a product of elementary matrices.

**(b)** Express *A* as a product of elementary matrices.

- Compute the LU factorization of each of the following

matrices.

**(a)**
⎡ 3 1 ⎤
⎣ 9 5 ⎦

**(b)**
⎡ 2 4 ⎤
⎣ −2 1 ⎦

**(c)**
⎡ 1 1 1 ⎤
⎢ 3 5 6 ⎥
⎣ −2 2 7 ⎦

**(d)**
⎡ −2 1 2 ⎤
⎢ 4 1 −2 ⎥
⎣ −6 −3 4 ⎦

- Let

*A* =
⎡ 1 0 1 ⎤
⎢ 3 3 4 ⎥
⎣ 2 2 3 ⎦

**(a)** Verify that

*A*−1 =
⎡ 1 2 −3 ⎤
⎢ −1 1 −1 ⎥
⎣ 0 −2 3 ⎦

**(b)** Use *A*−1 to solve *A***x** = **b** for the following choices of **b**.

**(i)** **b** = (1, 1, 1)ᵀ  **(ii)** **b** = (1, 2, 3)ᵀ  **(iii)** **b** = (−2, 1, 0)ᵀ

- Find the inverse of each of the following matrices.

**(a)**
⎡ −1 1 ⎤
⎣ 1 0 ⎦

**(b)**
⎡ 2 5 ⎤
⎣ 1 3 ⎦

**(c)**
⎡ 2 6 ⎤
⎣ 3 8 ⎦

**(d)**
⎡ 3 0 ⎤
⎣ 9 3 ⎦

**(e)**
⎡ 1 1 1 ⎤
⎢ 0 1 1 ⎥
⎣ 0 0 1 ⎦

**(f)**
⎡ 2 0 5 ⎤
⎢ 0 3 0 ⎥
⎣ 1 0 3 ⎦

**(g)**
⎡ −1 −3 −3 ⎤
⎢ 2 6 1 ⎥
⎣ 3 8 3 ⎦

**(h)**
⎡ 1 0 1 ⎤
⎢ −1 1 1 ⎥
⎣ −1 −2 −3 ⎦

- Given

*A* =
⎡ 3 1 ⎤
⎣ 5 2 ⎦

and *B* =
⎡ 1 2 ⎤
⎣ 3 4 ⎦

compute *A*−1 and use it to:

**(a) **Find a 2 × 2 matrix *X *such that *AX *= *B*.

**(b) **Find a 2 × 2 matrix *Y *such that *YA *= *B*.

- Let

*A* =
⎡ 5 3 ⎤
⎣ 3 2 ⎦
, *B* =
⎡ 6 2 ⎤
⎣ 2 4 ⎦
, *C* =
⎡ 4 −2 ⎤
⎣ −6 3 ⎦

Solve each of the following matrix equations.

**(a) ***AX *+ *B *= *C ***(b) ***XA *+ *B *= *C*

**(c) ***AX *+ *B *= *X ***(d) ***XA *+ *C *= *X*

- Is the transpose of an elementary matrix an elementary

matrix of the same type? Is the product of

two elementary matrices an elementary matrix?

- Let *U* and *R* be *n* × *n* upper triangular matrices and set *T* = *UR*. Show that *T* is also upper triangular and that *t*jj = *u*jj*r*jj for *j* = 1, . . . , *n*.

- Let *A* be a 3 × 3 matrix and suppose that

2**a**1 + **a**2 − 4**a**3 = **0**

How many solutions will the system *A***x **= **0 **have?

Explain. Is *A *nonsingular? Explain.

- Let *A* be a 3 × 3 matrix and suppose that

**a**1 = 3**a**2 − 2**a**3

Will the system *A***x **= **0 **have a nontrivial solution?

Is *A *nonsingular? Explain your answers.

- Let *A* and *B* be *n* × *n* matrices and let *C* = *A* − *B*. Show that if *A***x**0 = *B***x**0 and **x**0 ≠ **0**, then *C* must be singular.

- Let *A* and *B* be *n* × *n* matrices and let *C* = *AB*. Prove that if *B* is singular, then *C* must be singular. *Hint*: Use Theorem 1.5.2.

- Let *U* be an *n* × *n* upper triangular matrix with nonzero diagonal entries.

**(a) **Explain why *U *must be nonsingular.

**(b) **Explain why *U*−1 must be upper triangular.

- Let *A* be a nonsingular *n* × *n* matrix and let *B* be an *n* × *r* matrix. Show that the reduced row echelon form of (*A*|*B*) is (*I*|*C*), where *C* = *A*−1*B*.

- In general, matrix multiplication is not commutative (i.e., *AB* ≠ *BA*). However, in certain special cases the commutative property does hold. Show that

**(a) **if *D*1 and *D*2 are *n *× *n *diagonal matrices, then

*D*1*D*2 = *D*2*D*1.

**(b) **if *A *is an *n *× *n *matrix and

*B *= *a*0*I *+ *a*1*A *+ *a*2*A*2 +· · ·+*a**k**A**k*

where *a*0, *a*1, *. . . *, *a**k *are scalars, then *AB *= *BA*.

- Show that if *A* is a symmetric nonsingular matrix, then *A*−1 is also symmetric.

- Prove that if *A* is row equivalent to *B*, then *B* is row equivalent to *A*.

- **(a)** Prove that if *A* is row equivalent to *B* and *B* is row equivalent to *C*, then *A* is row equivalent to *C*.

**(b)** Prove that any two nonsingular *n* × *n* matrices are row equivalent.

- Let *A* and *B* be *m* × *n* matrices. Prove that if *B* is row equivalent to *A* and *U* is any row echelon form of *A*, then *B* is row equivalent to *U*.

- Prove that *B* is row equivalent to *A* if and only if there exists a nonsingular matrix *M* such that *B* = *MA*.

- Is it possible for a singular matrix *B* to be row equivalent to a nonsingular matrix *A*? Explain.

- Given a vector **x** ∈ Rⁿ⁺¹, the (*n* + 1) × (*n* + 1) matrix *V* defined by

*v*ij = 1 if *j* = 1, and *v*ij = *x*i^(*j*−1) for *j* = 2, . . . , *n* + 1

is called the Vandermonde matrix.

**(a)** Show that if

*V***c** = **y**

and

*p*(*x*) = *c*1 + *c*2*x* + · · · + *c*n+1*x*ⁿ

then

*p*(*x*i) = *y*i, *i* = 1, 2, . . . , *n* + 1

**(b)** Suppose that *x*1, *x*2, . . . , *x*n+1 are all distinct. Show that if **c** is a solution of *V***x** = **0**, then the coefficients *c*1, *c*2, . . . , *c*n+1 must all be zero, and hence *V* must be nonsingular.

*For each of following, answer *true *if the statement*

*is always true and answer *false *otherwise. In the*

*case of a true statement, explain or prove your*

*answer. In the case of a false statement, give an*

*example to show that the statement is not always*

*true.*

- If *A* is row equivalent to *I* and *AB* = *AC*, then *B* must equal *C*.

- If *E* and *F* are elementary matrices and *G* = *EF*, then *G* is nonsingular.

- If *A* is a 4 × 4 matrix and **a**1 + **a**2 = **a**3 + 2**a**4, then *A* must be singular.

- If *A* is row equivalent to both *B* and *C*, then *A* is row equivalent to *B* + *C*.
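The LU factorization asked for in this section can be computed by recording the multipliers of Gaussian elimination as the subdiagonal entries of *L*. A minimal Doolittle-style sketch without pivoting (it assumes every pivot it meets is nonzero; `lu` is a name of our own, not from the text), run on matrix (a) of the LU exercise above:

```python
# LU factorization by elimination: L is unit lower triangular and holds
# the row multipliers; U is the resulting upper triangular matrix.

def lu(A):
    """Return (L, U) with A = LU; no pivoting, so pivots must be nonzero."""
    n = len(A)
    U = [list(map(float, row)) for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # multiplier used to zero U[i][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

L, U = lu([[3, 1], [9, 5]])
print(L)  # [[1.0, 0.0], [3.0, 1.0]]
print(U)  # [[3.0, 1.0], [0.0, 2.0]]
```

Here 9/3 = 3 is the single multiplier, and row 2 of *U* becomes (9, 5) − 3·(3, 1) = (0, 2), so *A* = *LU* as the exercise requires.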

SECTION 1.6 EXERCISES

- Let *A* be a nonsingular *n* × *n* matrix. Perform the following multiplications:

**(a)** *A*−1 ( *A* *I* )

**(b)**
⎡ *A* ⎤ *A*−1
⎣ *I* ⎦

**(c)** ( *A* *I* )ᵀ ( *A* *I* )

**(d)** ( *A* *I* ) ( *A* *I* )ᵀ

**(e)**
⎡ *A*−1 ⎤ ( *A* *I* )
⎣ *I* ⎦

- Let *B* = *A*ᵀ*A*. Show that *b*ij = **a**iᵀ**a**j.

- Let

*A* =
⎡ 1 1 ⎤
⎣ 2 −1 ⎦

and *B* =
⎡ 2 1 ⎤
⎣ 1 3 ⎦

**(a)** Calculate *A***b**1 and *A***b**2.

**(b)** Calculate **a**1*B* and **a**2*B* (where **a**i denotes the *i*th row vector of *A*).

**(c)** Multiply *AB* and verify that its column vectors are the vectors in part (a) and its row vectors are the vectors in part (b).

- Let

*I* =
⎡ 1 0 ⎤
⎣ 0 1 ⎦
, *E* =
⎡ 0 1 ⎤
⎣ 1 0 ⎦
, *O* =
⎡ 0 0 ⎤
⎣ 0 0 ⎦

*C* =
⎡ 1 0 ⎤
⎣ −1 1 ⎦
, *D* =
⎡ 2 0 ⎤
⎣ 0 2 ⎦

and

*B* =
⎡ *B*11 *B*12 ⎤
⎣ *B*21 *B*22 ⎦
=
⎡ 1 1 1 1 ⎤
⎢ 1 2 1 1 ⎥
⎢ 3 1 1 1 ⎥
⎣ 3 2 1 2 ⎦

Perform each of the following block multiplications.

**(a)**
⎡ *O* *I* ⎤ ⎡ *B*11 *B*12 ⎤
⎣ *I* *O* ⎦ ⎣ *B*21 *B*22 ⎦

**(b)**
⎡ *C* *O* ⎤ ⎡ *B*11 *B*12 ⎤
⎣ *O* *C* ⎦ ⎣ *B*21 *B*22 ⎦

**(c)**
⎡ *D* *O* ⎤ ⎡ *B*11 *B*12 ⎤
⎣ *O* *I* ⎦ ⎣ *B*21 *B*22 ⎦

- Perform each of the following block multiplications:

**(a)**
[ 1  1  1 −1 ]
[ 2  1  2 −1 ]
×
[ 4 −2  1 ]
[ 2  3  1 ]
[ 1  1  2 ]
[ 1  2  3 ]

**(b)**
[ 4 −2 ]
[ 2  3 ]
[ 1  1 ]
[ 1  2 ]
×
[ 1  1  1 −1 ]
[ 2  1  2 −1 ]

**(c)**
[ 3/5  −4/5  0  0 ]
[ 4/5   3/5  0  0 ]
[ 0     0    1  0 ]
×
[ 3/5   4/5  0 ]
[ −4/5  3/5  0 ]
[ 0     0    1 ]
[ 0     0    0 ]

**(d)**
[ 0  0  1  0  0 ]
[ 0  1  0  0  0 ]
[ 1  0  0  0  0 ]
[ 0  0  0  0  1 ]
[ 0  0  0  1  0 ]
×
[ 1 −1 ]
[ 2 −2 ]
[ 3 −3 ]
[ 4 −4 ]
[ 5 −5 ]
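Products like these can be sanity-checked in a few lines of code. The sketch below is not from the text (the book's computing exercises use MATLAB); it is a pure-Python illustration, with helper names of my own choosing, that a partitioned product of the matrices in part (a) agrees with the ordinary product:

```python
# Verify that multiplying in blocks gives the same result as the
# ordinary matrix product, using the 2x4 and 4x3 matrices of part (a).

def matmul(A, B):
    """Ordinary matrix product of lists-of-rows A and B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 1, 1, -1],
     [2, 1, 2, -1]]
B = [[4, -2, 1],
     [2,  3, 1],
     [1,  1, 2],
     [1,  2, 3]]

# Partition A = [A1 A2] (columns 0-1 / 2-3) and B = [B1; B2] (rows 0-1 / 2-3).
A1 = [row[:2] for row in A]; A2 = [row[2:] for row in A]
B1 = B[:2];                  B2 = B[2:]

# Block rule for conforming partitions: AB = A1*B1 + A2*B2.
block_product = add(matmul(A1, B1), matmul(A2, B2))
full_product = matmul(A, B)
print(block_product == full_product)
```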

- Given

*X* =
[ 2  1  5 ]
[ 4  2  3 ]

, *Y* =
[ 1  2  4 ]
[ 2  3  1 ]

**(a)** Compute the outer product expansion of *XY*ᵀ.

**(b)** Compute the outer product expansion of *YX*ᵀ. How is the outer product expansion of *YX*ᵀ related to the outer product expansion of *XY*ᵀ?

- Let

*A* =
[ *A*₁₁  *A*₁₂ ]
[ *A*₂₁  *A*₂₂ ]

and *A*ᵀ =
[ *A*₁₁ᵀ  *A*₂₁ᵀ ]
[ *A*₁₂ᵀ  *A*₂₂ᵀ ]

Is it possible to perform the block multiplications of *AA*ᵀ and *A*ᵀ*A*? Explain.

- Let *A* be an *m* × *n* matrix, *X* an *n* × *r* matrix, and *B* an *m* × *r* matrix. Show that

*AX* = *B*

if and only if

*A***x**ⱼ = **b**ⱼ, *j* = 1, …, *r*

- Let *A* be an *n* × *n* matrix and let *D* be an *n* × *n* diagonal matrix.

**(a)** Show that *D* = (*d*₁₁**e**₁, *d*₂₂**e**₂, …, *d*ₙₙ**e**ₙ).

**(b)** Show that *AD* = (*d*₁₁**a**₁, *d*₂₂**a**₂, …, *d*ₙₙ**a**ₙ).
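Part (b) says that right-multiplying by a diagonal matrix scales the columns of *A*. A minimal pure-Python check (the helper names are mine, not the book's):

```python
# Check that column j of AD equals d_jj times column j of A.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
d = [5, 7]          # diagonal entries d11, d22 (arbitrary test values)
D = [[5, 0],
     [0, 7]]

AD = matmul(A, D)
scaled_cols = [[d[j] * A[i][j] for j in range(2)] for i in range(2)]
print(AD == scaled_cols)
```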

- Let *U* be an *m* × *m* matrix, let *V* be an *n* × *n* matrix, and let

Σ = [ Σ₁ ; *O* ]

where Σ₁ is an *n* × *n* diagonal matrix with diagonal entries *σ*₁, *σ*₂, …, *σ*ₙ and *O* is the (*m* − *n*) × *n* zero matrix.

**(a)** Show that if *U* = (*U*₁, *U*₂), where *U*₁ has *n* columns, then

*U*Σ = *U*₁Σ₁

**(b)** Show that if *A* = *U*Σ*V*ᵀ, then *A* can be expressed as an outer product expansion of the form

*A* = *σ*₁**u**₁**v**₁ᵀ + *σ*₂**u**₂**v**₂ᵀ + ⋯ + *σ*ₙ**u**ₙ**v**ₙᵀ
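The outer product expansion in part (b) can be checked on a tiny 3 × 2 example. A pure-Python sketch; the permutation matrices chosen for *U* and *V* are arbitrary test data, picked only because they are orthogonal:

```python
# Check A = s1*u1*v1' + s2*u2*v2' for a small 3x2 example.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

U = [[0, 1, 0],            # 3x3 orthogonal (a permutation)
     [1, 0, 0],
     [0, 0, 1]]
sigma = [2, 3]
S = [[2, 0],               # Sigma: 2x2 diagonal block over a 1x2 zero block
     [0, 3],
     [0, 0]]
V = [[0, 1],               # 2x2 orthogonal
     [1, 0]]
Vt = [[V[j][i] for j in range(2)] for i in range(2)]

A = matmul(matmul(U, S), Vt)

# Expansion: entry (i, j) is the sum of sigma_k * U[i][k] * Vt[k][j].
expansion = [[sum(sigma[k] * U[i][k] * Vt[k][j] for k in range(2))
              for j in range(2)] for i in range(3)]
print(A == expansion)
```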

- Let

*A* =
[ *A*₁₁  *A*₁₂ ]
[ *O*    *A*₂₂ ]

where all four blocks are *n* × *n* matrices.

**(a)** If *A*₁₁ and *A*₂₂ are nonsingular, show that *A* must also be nonsingular and that *A*⁻¹ must be of the form

[ *A*₁₁⁻¹  *C*     ]
[ *O*      *A*₂₂⁻¹ ]

**(b)** Determine *C*.
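For part (b), one consistent choice is *C* = −*A*₁₁⁻¹*A*₁₂*A*₂₂⁻¹, and with 1 × 1 blocks (plain numbers) that claim is easy to check directly. A sketch under that assumption, with arbitrary scalar test values chosen so the arithmetic is exact in floating point:

```python
# With 1x1 blocks, C = -a12/(a11*a22), and the claimed inverse of
# [[a11, a12], [0, a22]] can be verified by multiplying it out.

a11, a12, a22 = 2.0, 4.0, 4.0

C = -a12 / (a11 * a22)                 # here C = -0.5
A    = [[a11, a12], [0.0, a22]]
Ainv = [[1 / a11, C], [0.0, 1 / a22]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

product = matmul(A, Ainv)
print(product)   # should be the 2x2 identity
```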

- Let *A* and *B* be *n* × *n* matrices and let *M* be a block matrix of the form

*M* =
[ *A*  *O* ]
[ *O*  *B* ]

Use condition (b) of Theorem 1.5.2 to show that if either *A* or *B* is singular, then *M* must be singular.

- Let

*A* =
[ *O*  *I* ]
[ *B*  *O* ]

where all four submatrices are *k* × *k*. Determine *A*² and *A*⁴.

- Let *I* denote the *n* × *n* identity matrix. Find a block form for the inverse of each of the following 2*n* × 2*n* matrices.

**(a)**
[ *O*  *I* ]
[ *I*  *O* ]

**(b)**
[ *I*  *O* ]
[ *B*  *I* ]

- Let *O* be the *k* × *k* matrix whose entries are all 0, *I* be the *k* × *k* identity matrix, and *B* be a *k* × *k* matrix with the property that *B*² = *O*. If

*A* =
[ *O*  *I* ]
[ *I*  *B* ]

determine the block form of *A*⁻¹ + *A*² + *A*³.

- Let *A* and *B* be *n* × *n* matrices and define 2*n* × 2*n* matrices *S* and *M* by

*S* =
[ *I*  *A* ]
[ *O*  *I* ]

, *M* =
[ *AB*  *O* ]
[ *B*   *O* ]

Determine the block form of *S*⁻¹ and use it to compute the block form of the product *S*⁻¹*MS*.

- Let

*A* =
[ *A*₁₁  *A*₁₂ ]
[ *A*₂₁  *A*₂₂ ]

where *A*₁₁ is a *k* × *k* nonsingular matrix. Show that *A* can be factored into a product

[ *I*  *O* ] [ *A*₁₁  *A*₁₂ ]
[ *B*  *I* ] [ *O*    *C*   ]

where

*B* = *A*₂₁*A*₁₁⁻¹ and *C* = *A*₂₂ − *A*₂₁*A*₁₁⁻¹*A*₁₂

(Note that this problem gives a block matrix version of the factorization in Exercise 18 of Section 1.3.)
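The factorization can be verified with scalar (1 × 1) blocks, where *B* and *C* reduce to ordinary quotients. A pure-Python sketch with arbitrary test values; `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction as F

# Verify the factorization with 1x1 blocks: for A = [[a11, a12], [a21, a22]],
# B = a21/a11 and C = a22 - a21*a12/a11 (the Schur complement of a11).

a11, a12, a21, a22 = F(2), F(3), F(4), F(5)

B = a21 / a11
C = a22 - a21 * a12 / a11

lower = [[F(1), F(0)], [B, F(1)]]
upper = [[a11, a12], [F(0), C]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(lower, upper) == [[a11, a12], [a21, a22]])
```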

- Let *A*, *B*, *L*, *M*, *S*, and *T* be *n* × *n* matrices with *A*, *B*, and *M* nonsingular and *L*, *S*, and *T* singular. Determine whether it is possible to find matrices *X* and *Y* such that

[ *O*  *I*  *O*  *O*  *O*  *O* ] [ *M* ]   [ *A* ]
[ *O*  *O*  *I*  *O*  *O*  *O* ] [ *A* ]   [ *T* ]
[ *O*  *O*  *O*  *I*  *O*  *O* ] [ *T* ]   [ *L* ]
[ *O*  *O*  *O*  *O*  *I*  *O* ] [ *L* ] = [ *A* ]
[ *O*  *O*  *O*  *O*  *O*  *X* ] [ *A* ]   [ *S* ]
[ *Y*  *O*  *O*  *O*  *O*  *O* ] [ *B* ]   [ *T* ]

If so, show how; if not, explain why.

- Let *A* be an *n* × *n* matrix and **x** ∈ Rⁿ.

**(a)** A scalar *c* can also be considered as a 1 × 1 matrix *C* = (*c*), and a vector **b** ∈ Rⁿ can be considered as an *n* × 1 matrix *B*. Although the matrix multiplication *CB* is not defined, show that the matrix product *BC* is equal to *c***b**, the scalar multiplication of *c* times **b**.

**(b)** Partition *A* into columns and **x** into rows and perform the block multiplication of *A* times **x**.

**(c)** Show that

*A***x** = *x*₁**a**₁ + *x*₂**a**₂ + ⋯ + *x*ₙ**a**ₙ

- If *A* is an *n* × *n* matrix with the property that *A***x** = **0** for all **x** ∈ Rⁿ, show that *A* = *O*. *Hint*: Let **x** = **e**ⱼ for *j* = 1, …, *n*.

- Let *B* and *C* be *n* × *n* matrices with the property that *B***x** = *C***x** for all **x** ∈ Rⁿ. Show that *B* = *C*.

- Consider a system of the form

[ *A*     **a** ] [ **x**    ]   [ **b**    ]
[ **c**ᵀ  *β*  ] [ *x*ₙ₊₁ ] = [ *b*ₙ₊₁ ]

where *A* is a nonsingular *n* × *n* matrix and **a**, **b**, and **c** are vectors in Rⁿ.

**(a)** Multiply both sides of the system by

[ *A*⁻¹         **0** ]
[ −**c**ᵀ*A*⁻¹  1    ]

to obtain an equivalent triangular system.

**(b)** Set **y** = *A*⁻¹**a** and **z** = *A*⁻¹**b**. Show that if *β* − **c**ᵀ**y** ≠ 0, then the solution of the system can be determined by letting

*x*ₙ₊₁ = (*b*ₙ₊₁ − **c**ᵀ**z**) / (*β* − **c**ᵀ**y**)

and then setting

**x** = **z** − *x*ₙ₊₁**y**
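The solution formula in part (b) can be exercised with a 1 × 1 block *A*, where *A*⁻¹ is just division. The numbers below are arbitrary test data, and the variable names are mine; `Fraction` keeps the check exact:

```python
from fractions import Fraction as F

# Solve a bordered 2x2 system with the formula of part (b), taking A to be
# the 1x1 matrix (a).

a, avec, c, beta = F(2), F(3), F(4), F(5)   # A = (a), a-vector = (avec), c^T = (c)
b, b_last = F(7), F(8)                      # right-hand side (b, b_{n+1})

y = avec / a                                # y = A^{-1} a
z = b / a                                   # z = A^{-1} b
x_last = (b_last - c * z) / (beta - c * y)  # requires beta - c^T y != 0
x = z - x_last * y

# Check against the original equations:
#   a*x + avec*x_last = b   and   c*x + beta*x_last = b_last
print(a * x + avec * x_last == b, c * x + beta * x_last == b_last)
```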

**CHAPTER TEST A ****True or False**

This chapter test consists of true-or-false questions. In

each case, answer *true *if the statement is always true

and *false *otherwise. In the case of a true statement,

explain or prove your answer. In the case of a false

statement, give an example to show that the statement

is not always true. For example, consider the following

statements about *n *× *n *matrices *A *and *B*:

**(i)** *A* + *B* = *B* + *A*

**(ii)** *AB* = *BA*

Statement **(i)** is always *true*. Explanation: The (*i*, *j*) entry of *A* + *B* is *a*ᵢⱼ + *b*ᵢⱼ and the (*i*, *j*) entry of *B* + *A* is *b*ᵢⱼ + *a*ᵢⱼ. Since *a*ᵢⱼ + *b*ᵢⱼ = *b*ᵢⱼ + *a*ᵢⱼ for each *i* and *j*, it follows that *A* + *B* = *B* + *A*.

The answer to statement **(ii)** is *false*. Although the statement may be true in some cases, it is not always true. To show this, we need only exhibit one instance in which equality fails to hold. For example, if

*A* =
[ 1  2 ]
[ 3  1 ]

and *B* =
[ 2  3 ]
[ 1  1 ]

then

*AB* =
[ 4   5 ]
[ 7  10 ]

and *BA* =
[ 11  7 ]
[ 4   3 ]

This proves that statement **(ii)** is false.
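The counterexample can be replayed in code. A pure-Python sketch (the `matmul` helper is mine) that reproduces the two products:

```python
# Recompute AB and BA for the counterexample above and confirm they differ.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 1]]
B = [[2, 3], [1, 1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)          # [[4, 5], [7, 10]]
print(BA)          # [[11, 7], [4, 3]]
print(AB != BA)    # True
```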

- If the row reduced echelon form of *A* involves free variables, then the system *A***x** = **b** will have infinitely many solutions.
- Every homogeneous linear system is consistent.
- An *n* × *n* matrix *A* is nonsingular if and only if the reduced row echelon form of *A* is *I* (the identity matrix).
- If *A* is nonsingular, then *A* can be factored into a product of elementary matrices.
- If *A* and *B* are nonsingular *n* × *n* matrices, then *A* + *B* is also nonsingular and (*A* + *B*)⁻¹ = *A*⁻¹ + *B*⁻¹.
- If *A* = *A*⁻¹, then *A* must be equal to either *I* or −*I*.
- If *A* and *B* are *n* × *n* matrices, then (*A* − *B*)² = *A*² − 2*AB* + *B*².
- If *AB* = *AC* and *A* ≠ *O* (the zero matrix), then *B* = *C*.
- If *AB* = *O*, then *BA* = *O*.
- If *A* is a 3 × 3 matrix and **a**₁ + 2**a**₂ − **a**₃ = **0**, then *A* must be singular.
- If *A* is a 4 × 3 matrix and **b** = **a**₁ + **a**₃, then the system *A***x** = **b** must be consistent.
- Let *A* be a 4 × 3 matrix with **a**₂ = **a**₃. If **b** = **a**₁ + **a**₂ + **a**₃, then the system *A***x** = **b** will have infinitely many solutions.
- If *E* is an elementary matrix, then *E*ᵀ is also an elementary matrix.
- The product of two elementary matrices is an elementary matrix.
- If **x** and **y** are nonzero vectors in Rⁿ and *A* = **xy**ᵀ, then the row echelon form of *A* will have exactly one nonzero row.

**CHAPTER TEST B**

- Find all solutions of the linear system

*x*₁ − *x*₂ + 3*x*₃ + 2*x*₄ = 1
−*x*₁ + *x*₂ − 2*x*₃ + *x*₄ = −2
2*x*₁ − 2*x*₂ + 7*x*₃ + 7*x*₄ = 1

- **(a)** A linear equation in two unknowns corresponds to a line in the plane. Give a similar geometric interpretation of a linear equation in three unknowns.

**(b)** Given a linear system consisting of two equations in three unknowns, what is the possible number of solutions? Give a geometric explanation of your answer.

**(c)** Given a homogeneous linear system consisting of two equations in three unknowns, how many solutions will it have? Explain.

- Let *A***x** = **b** be a system of *n* linear equations in *n* unknowns and suppose that **x**₁ and **x**₂ are both solutions and **x**₁ ≠ **x**₂.

**(a)** How many solutions will the system have? Explain.

**(b)** Is the matrix *A* nonsingular? Explain.

- Let *A* be a matrix of the form

*A* =
[ *α*   *β*  ]
[ 2*α*  2*β* ]

where *α* and *β* are fixed scalars not both equal to 0.

**(a)** Explain why the system

*A***x** =
[ 3 ]
[ 1 ]

must be inconsistent.

**(b)** How can one choose a nonzero vector **b** so that the system *A***x** = **b** will be consistent? Explain.

- Let

*A* =
[ 2  1  3 ]
[ 4  2  7 ]
[ 1  3  5 ]

, *B* =
[ 2  1  3 ]
[ 1  3  5 ]
[ 4  2  7 ]

, *C* =
[ 0   1  3 ]
[ 0   2  7 ]
[ −5  3  5 ]

**(a)** Find an elementary matrix *E* such that *EA* = *B*.

**(b)** Find an elementary matrix *F* such that *AF* = *C*.

- Let *A* be a 3 × 3 matrix and let

**b** = 3**a**₁ + **a**₂ + 4**a**₃

Will the system *A***x** = **b** be consistent? Explain.

- Let *A* be a 3 × 3 matrix and suppose that

**a**₁ − 3**a**₂ + 2**a**₃ = **0** (the zero vector)

Is *A* nonsingular? Explain.

- Given the vector

**x**₀ =
[ 1 ]
[ 1 ]

is it possible to find 2 × 2 matrices *A* and *B* so that *A* ≠ *B* and *A***x**₀ = *B***x**₀? Explain.

- Let *A* and *B* be symmetric *n* × *n* matrices and let *C* = *AB*. Is *C* symmetric? Explain.
- Let *E* and *F* be *n* × *n* elementary matrices and let *C* = *EF*. Is *C* nonsingular? Explain.

- Given

*A* =
[ *I*  *O*  *O* ]
[ *O*  *I*  *O* ]
[ *O*  *B*  *I* ]

where all of the submatrices are *n* × *n*, determine the block form of *A*⁻¹.

- Let *A* and *B* be 10 × 10 matrices that are partitioned into submatrices as follows:

*A* =
[ *A*₁₁  *A*₁₂ ]
[ *A*₂₁  *A*₂₂ ]

, *B* =
[ *B*₁₁  *B*₁₂ ]
[ *B*₂₁  *B*₂₂ ]

**(a)** If *A*₁₁ is a 6 × 5 matrix, and *B*₁₁ is a *k* × *r* matrix, what conditions, if any, must *k* and *r* satisfy in order to make the block multiplication of *A* times *B* possible?

**(b)** Assuming that the block multiplication is possible, how would the (2, 2) block of the product be determined?

**Chapter 2 ****Determinants**

SECTION 2.1 EXERCISES

- Let

*A* =
[ 3   2  4 ]
[ 1  −2  3 ]
[ 2   3  2 ]

**(a)** Find the values of det(*M*₂₁), det(*M*₂₂), and det(*M*₂₃).

**(b)** Find the values of *A*₂₁, *A*₂₂, and *A*₂₃.

**(c)** Use your answers from part (b) to compute det(*A*).
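The cofactor computations in this exercise can be cross-checked with a small recursive determinant. This sketch (pure Python, not from the text) expands along the first row rather than the second row used in parts (a)-(c), but the value of det(*A*) is the same either way:

```python
# Determinant by cofactor expansion along the first row, applied to the
# matrix A of this exercise.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # drop row 0, column j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[3, 2, 4],
     [1, -2, 3],
     [2, 3, 2]]
print(det(A))
```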

- Use determinants to determine whether the following 2 × 2 matrices are nonsingular:

**(a)**
[ 3  5 ]
[ 2  4 ]

**(b)**
[ 3  6 ]
[ 2  4 ]

**(c)**
[ 3  −6 ]
[ 2   4 ]

- Evaluate the following determinants:

**(a)**
| 3   5 |
| −2  −3 |

**(b)**
| 5  −2 |
| −8  4 |

**(c)**
| 3  1  2 |
| 2  4  5 |
| 2  4  5 |

**(d)**
| 4   3   0 |
| 3   1   2 |
| 5  −1  −4 |

**(e)**
| 1  3   2 |
| 4  1  −2 |
| 2  1   3 |

**(f)**
| 2  −1  2 |
| 1   3  2 |
| 5   1  6 |

**(g)**
| 2  0   0  1 |
| 0  1   0  0 |
| 1  6   2  0 |
| 1  1  −2  3 |

**(h)**
| 2   1   2  1 |
| 3   0   1  1 |
| −1  2  −2  1 |
| −3  2   3  1 |

- Evaluate the following determinants by inspection:

**(a)**
| 3  5 |
| 2  4 |

**(b)**
| 2  0   0 |
| 4  1   0 |
| 7  3  −2 |

**(c)**
| 3  0  0 |
| 2  1  1 |
| 1  2  2 |

**(d)**
| 4  0  2  1 |
| 5  0  4  2 |
| 2  0  3  4 |
| 1  0  2  3 |

- Evaluate the following determinant. Write your answer as a polynomial in *x*:

| *a* − *x*   *b*   *c* |
| 1          −*x*   0   |
| 0           1    −*x* |

- Find all values of *λ* for which the following determinant will equal 0:

| 2 − *λ*   4       |
| 3         3 − *λ* |

- Let *A* be a 3 × 3 matrix with *a*₁₁ = 0 and *a*₂₁ ≠ 0. Show that *A* is row equivalent to *I* if and only if

−*a*₁₂*a*₂₁*a*₃₃ + *a*₁₂*a*₃₁*a*₂₃ + *a*₁₃*a*₂₁*a*₃₂ − *a*₁₃*a*₃₁*a*₂₂ ≠ 0

- Write out the details of the proof of Theorem 2.1.3.
- Prove that if a row or a column of an *n* × *n* matrix *A* consists entirely of zeros, then det(*A*) = 0.
- Use mathematical induction to prove that if *A* is an (*n* + 1) × (*n* + 1) matrix with two identical rows, then det(*A*) = 0.

- Let *A* and *B* be 2 × 2 matrices.

**(a)** Does det(*A* + *B*) = det(*A*) + det(*B*)?

**(b)** Does det(*AB*) = det(*A*) det(*B*)?

**(c)** Does det(*AB*) = det(*BA*)?

Justify your answers.

- Let *A* and *B* be 2 × 2 matrices and let

*C* =
[ *a*₁₁  *a*₁₂ ]
[ *b*₂₁  *b*₂₂ ]

, *D* =
[ *b*₁₁  *b*₁₂ ]
[ *a*₂₁  *a*₂₂ ]

, *E* =
[ 0    *α* ]
[ *β*  0   ]

**(a)** Show that det(*A* + *B*) = det(*A*) + det(*B*) + det(*C*) + det(*D*).

**(b)** Show that if *B* = *EA*, then det(*A* + *B*) = det(*A*) + det(*B*).

- Let *A* be a symmetric tridiagonal matrix (i.e., *A* is symmetric and *a*ᵢⱼ = 0 whenever |*i* − *j*| > 1). Let *B* be the matrix formed from *A* by deleting the first two rows and columns. Show that

det(*A*) = *a*₁₁ det(*M*₁₁) − *a*₁₂² det(*B*)

SECTION 2.2 EXERCISES

- Evaluate each of the following determinants by inspection.

**(a)**
| 0  0  3 |
| 0  4  1 |
| 2  3  1 |

**(b)**
| 1   1   1   3 |
| 0   3   1   1 |
| 0   0   2   2 |
| −1  −1  −1  2 |

**(c)**
| 0  0  0  1 |
| 1  0  0  0 |
| 0  1  0  0 |
| 0  0  1  0 |

- Let

*A* =
[ 0   1   2   3 ]
[ 1   1   1   1 ]
[ −2  −2  3   3 ]
[ 1   2  −2  −3 ]

**(a)** Use the elimination method to evaluate det(*A*).

**(b)** Use the value of det(*A*) to evaluate

| 0   1   2   3 |   | 0   1   2   3 |
| −2  −2  3   3 |   | 1   1   1   1 |
| 1   2  −2  −3 | + | −1  −1  4   4 |
| 1   1   1   1 |   | 2   3  −1  −2 |

- For each of the following, compute the determinant and state whether the matrix is singular or nonsingular:

**(a)**
[ 3  1 ]
[ 6  2 ]

**(b)**
[ 3  1 ]
[ 4  2 ]

**(c)**
[ 3  3  1 ]
[ 0  1  2 ]
[ 0  2  3 ]

**(d)**
[ 2  1  1 ]
[ 4  3  5 ]
[ 2  1  2 ]

**(e)**
[ 2   −1   3 ]
[ −1   2  −2 ]
[ 1    4   0 ]

**(f)**
[ 1   1  1  1 ]
[ 2  −1  3  2 ]
[ 0   1  2  1 ]
[ 0   0  7  3 ]

- Find all possible choices of *c* that would make the following matrix singular:

[ 1  1   1  ]
[ 1  9   *c* ]
[ 1  *c*  3  ]

- Let *A* be an *n* × *n* matrix and *α* a scalar. Show that

det(*α**A*) = *α*ⁿ det(*A*)

- Let *A* be a nonsingular matrix. Show that

det(*A*⁻¹) = 1 / det(*A*)

- Let *A* and *B* be 3 × 3 matrices with det(*A*) = 4 and det(*B*) = 5. Find the value of

**(a)** det(*AB*)  **(b)** det(3*A*)

**(c)** det(2*AB*)  **(d)** det(*A*⁻¹*B*)

- Show that if *E* is an elementary matrix, then *E*ᵀ is an elementary matrix of the same type as *E*.
- Let *E*₁, *E*₂, and *E*₃ be 3 × 3 elementary matrices of types I, II, and III, respectively, and let *A* be a 3 × 3 matrix with det(*A*) = 6. Assume, additionally, that *E*₂ was formed from *I* by multiplying its second row by 3. Find the values of each of the following:

**(a)** det(*E*₁*A*)  **(b)** det(*E*₂*A*)

**(c)** det(*E*₃*A*)  **(d)** det(*AE*₁)

**(e)** det(*E*₁²)  **(f)** det(*E*₁*E*₂*E*₃)

- Let *A* and *B* be row equivalent matrices, and suppose that *B* can be obtained from *A* by using only row operations I and III. How do the values of det(*A*) and det(*B*) compare? How will the values compare if *B* can be obtained from *A* using only row operation III? Explain your answers.
- Let *A* be an *n* × *n* matrix. Is it possible for *A*² + *I* = *O* in the case where *n* is odd? Answer the same question in the case where *n* is even.

- Consider the 3 × 3 Vandermonde matrix

*V* =
[ 1  *x*₁  *x*₁² ]
[ 1  *x*₂  *x*₂² ]
[ 1  *x*₃  *x*₃² ]

**(a)** Show that det(*V*) = (*x*₂ − *x*₁)(*x*₃ − *x*₁)(*x*₃ − *x*₂). *Hint*: Make use of row operation III.

**(b)** What conditions must the scalars *x*₁, *x*₂, and *x*₃ satisfy in order for *V* to be nonsingular?
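Part (a)'s formula is easy to spot-check numerically for one choice of *x*₁, *x*₂, *x*₃. A pure-Python sketch (the cofactor `det` helper is mine, not from the text):

```python
# Numerical check of the 3x3 Vandermonde determinant formula.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

x1, x2, x3 = 2, 3, 7            # arbitrary distinct test values
V = [[1, x, x * x] for x in (x1, x2, x3)]

lhs = det(V)
rhs = (x2 - x1) * (x3 - x1) * (x3 - x2)
print(lhs == rhs)
```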

- Suppose that a 3 × 3 matrix *A* factors into a product

[ 1     0     0 ] [ *u*₁₁  *u*₁₂  *u*₁₃ ]
[ *l*₂₁  1     0 ] [ 0     *u*₂₂  *u*₂₃ ]
[ *l*₃₁  *l*₃₂  1 ] [ 0     0     *u*₃₃ ]

Determine the value of det(*A*).

- Let *A* and *B* be *n* × *n* matrices. Prove that the product *AB* is nonsingular if and only if *A* and *B* are both nonsingular.
- Let *A* and *B* be *n* × *n* matrices. Prove that if *AB* = *I*, then *BA* = *I*. What is the significance of this result in terms of the definition of a nonsingular matrix?

- A matrix *A* is said to be *skew symmetric* if *A*ᵀ = −*A*. For example,

*A* =
[ 0   1 ]
[ −1  0 ]

is skew symmetric, since

*A*ᵀ =
[ 0  −1 ]
[ 1   0 ]
= −*A*

If *A* is an *n* × *n* skew-symmetric matrix and *n* is odd, show that *A* must be singular.

- Let *A* be a nonsingular *n* × *n* matrix with a nonzero cofactor *A*ₙₙ, and set

*c* = det(*A*) / *A*ₙₙ

Show that if we subtract *c* from *a*ₙₙ, then the resulting matrix will be singular.

- Let *A* be a *k* × *k* matrix and let *B* be an (*n* − *k*) × (*n* − *k*) matrix. Let

*E* =
[ *I*ₖ  *O* ]
[ *O*   *B* ]

, *F* =
[ *A*  *O*    ]
[ *O*  *I*ₙ₋ₖ ]

, *C* =
[ *A*  *O* ]
[ *O*  *B* ]

where *I*ₖ and *I*ₙ₋ₖ are the *k* × *k* and (*n* − *k*) × (*n* − *k*) identity matrices.

**(a)** Show that det(*E*) = det(*B*).

**(b)** Show that det(*F*) = det(*A*).

**(c)** Show that det(*C*) = det(*A*) det(*B*).

- Let *A* and *B* be *k* × *k* matrices and let

*M* =
[ *O*  *B* ]
[ *A*  *O* ]

Show that det(*M*) = (−1)ᵏ det(*A*) det(*B*).

- Show that evaluating the determinant of an *n* × *n* matrix by cofactors involves (*n*! − 1) additions and Σₖ₌₁ⁿ⁻¹ *n*!/*k*! multiplications.

- Show that the elimination method of computing the value of the determinant of an *n* × *n* matrix involves [*n*(*n* − 1)(2*n* − 1)]/6 additions and [(*n* − 1)(*n*² + *n* + 3)]/3 multiplications and divisions. *Hint*: At the *i*th step of the reduction process, it takes *n* − *i* divisions to calculate the multiples of the *i*th row that are to be subtracted from the remaining rows below the pivot. We must then calculate new values for the (*n* − *i*)² entries in rows *i* + 1 through *n* and columns *i* + 1 through *n*.
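The elimination method whose operations are being counted here can be sketched as code: reduce to triangular form with row operations, track row interchanges, and multiply the pivots. A pure-Python version (my own helper, not the book's MATLAB code) using exact rational arithmetic:

```python
from fractions import Fraction as F

# Determinant by elimination: reduce to upper triangular form, flipping the
# sign for each row interchange, then multiply the diagonal pivots.

def det_elim(M):
    A = [[F(x) for x in row] for row in M]   # exact arithmetic
    n = len(A)
    sign = 1
    for i in range(n):
        # find a nonzero pivot in column i (a row interchange flips the sign)
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return F(0)
        if p != i:
            A[i], A[p] = A[p], A[i]
            sign = -sign
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
    result = F(sign)
    for i in range(n):
        result *= A[i][i]
    return result

print(det_elim([[3, 2, 4], [1, -2, 3], [2, 3, 2]]))
```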

SECTION 2.3 EXERCISES

- For each of the following, compute (i) det(*A*), (ii) adj *A*, and (iii) *A*⁻¹:

**(a)** *A* =
[ 1   2 ]
[ 3  −1 ]

**(b)** *A* =
[ 3  1 ]
[ 2  4 ]

**(c)** *A* =
[ 1   3   1 ]
[ 2   1   1 ]
[ −2  2  −1 ]

**(d)** *A* =
[ 1  1  1 ]
[ 0  1  1 ]
[ 0  0  1 ]

- Use Cramer’s rule to solve each of the following

systems:

**(a) ***x*1 + 2*x*2 = 3

3*x*1 − *x*2 = 1

**(b) **2*x*1 + 3*x*2 = 2

3*x*1 + 2*x*2 = 5

**(c) **2*x*1 + *x*2 − 3*x*3 = 0

4*x*1 + 5*x*2 + *x*3 = 8

−2*x*1 − *x*2 + 4*x*3 = 2

**(d) ***x*1 + 3*x*2 + *x*3 = 1

2*x*1 + *x*2 + *x*3 = 5

−2*x*1 + 2*x*2 − *x*3 = −8

**(e) ***x*1 + *x*2 = 0

*x*2 + *x*3 − 2*x*4 = 1

*x*1 + 2*x*3 + *x*4 = 0

*x*1 + *x*2 + *x*4 = 0
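Cramer's rule on a 2 × 2 system can be sketched in a few lines. This is not from the text; it applies the rule to system (a), with `Fraction` keeping the arithmetic exact:

```python
from fractions import Fraction as F

# Cramer's rule for the 2x2 system of part (a):
#    x1 + 2*x2 = 3
#  3*x1 -   x2 = 1

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[F(1), F(2)], [F(3), F(-1)]]
b = [F(3), F(1)]

d = det2(A)
x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d   # replace column 1 by b
x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d   # replace column 2 by b
print(x1, x2)
```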

- Given

*A* =
[ 1  2  1 ]
[ 0  4  3 ]
[ 1  2  2 ]

determine the (2, 3) entry of *A*⁻¹ by computing a quotient of two determinants.

- Let *A* be the matrix in Exercise 3. Compute the third column of *A*⁻¹ by using Cramer's rule to solve *A***x** = **e**₃.

- Let

*A* =
[ 1  2  3 ]
[ 2  3  4 ]
[ 3  4  5 ]

**(a)** Compute the determinant of *A*. Is *A* nonsingular?

**(b)** Compute adj *A* and the product *A* adj *A*.

- If *A* is singular, what can you say about the product *A* adj *A*?

- Let *B*ⱼ denote the matrix obtained by replacing the *j*th column of the identity matrix with a vector **b** = (*b*₁, …, *b*ₙ)ᵀ. Use Cramer's rule to show that

*b*ⱼ = det(*B*ⱼ) for *j* = 1, …, *n*

- Let *A* be a nonsingular *n* × *n* matrix with *n* > 1. Show that

det(adj *A*) = (det(*A*))ⁿ⁻¹

- Let *A* be a 4 × 4 matrix. If

adj *A* =
[ 2   0   0  0 ]
[ 0   2   1  0 ]
[ 0   4   3  2 ]
[ 0  −2  −1  2 ]

**(a)** calculate the value of det(adj *A*). What should the value of det(*A*) be? *Hint*: Use the result from Exercise 8.

**(b)** find *A*.

- Show that if *A* is nonsingular, then adj *A* is nonsingular and

(adj *A*)⁻¹ = det(*A*⁻¹)*A* = adj *A*⁻¹

- Show that if *A* is singular, then adj *A* is also singular.
- Show that if det(*A*) = 1, then

adj(adj *A*) = *A*

- Suppose that *Q* is a matrix with the property *Q*⁻¹ = *Q*ᵀ. Show that

*q*ᵢⱼ = *Q*ᵢⱼ / det(*Q*)

- In coding a message, a blank space was represented by 0, an A by 1, a B by 2, a C by 3, and so on. The message was transformed using the matrix

*A* =
[ −1  −1   2   0 ]
[ 1    1  −1   0 ]
[ 0    0  −1   1 ]
[ 1    0   0  −1 ]

and sent as

−19, 19, 25, −21, 0, 18, −18, 15, 3, 10, −8, 3, −2, 20, −7, 12

What was the message?

- Let **x**, **y**, and **z** be vectors in R³. Show each of the following:

**(a)** **x** × **x** = **0**  **(b)** **y** × **x** = −(**x** × **y**)

**(c)** **x** × (**y** + **z**) = (**x** × **y**) + (**x** × **z**)

**(d)** **z**ᵀ(**x** × **y**) =
| *x*₁  *x*₂  *x*₃ |
| *y*₁  *y*₂  *y*₃ |
| *z*₁  *z*₂  *z*₃ |

- Let **x** and **y** be vectors in R³ and define the skew-symmetric matrix *A*ₓ by

*A*ₓ =
[ 0     −*x*₃   *x*₂ ]
[ *x*₃   0     −*x*₁ ]
[ −*x*₂  *x*₁   0    ]

**(a)** Show that **x** × **y** = *A*ₓ**y**.

**(b)** Show that **y** × **x** = *A*ₓᵀ**y**.

Chapter Two Exercises

**MATLAB EXERCISES**

*The first four exercises that follow involve integer*

*matrices and illustrate some of the properties of determinants*

*that were covered in this chapter. The last*

*two exercises illustrate some of the differences that may*

*arise when we work with determinants in floating-point*

*arithmetic.*

*In theory, the value of the determinant should tell*

*us whether the matrix is nonsingular. However, if the*

*matrix is singular and its determinant is computed*

*using finite-precision arithmetic, then, because of roundoff*

*errors, the computed value of the determinant may*

*not equal zero. A computed value near zero does not*

*necessarily mean that the matrix is singular or even*

*close to being singular. Furthermore, a matrix may be*

*nearly singular and have a determinant that is not even*

*close to zero (see Exercise 6).*

- Generate random 5×5 matrices with integer entries

by setting

*A *= **round**(10 ∗ **rand**(5))

and

*B *= **round**(20 ∗ **rand**(5)) – 10

Use MATLAB to compute each of the pairs of

numbers that follow. In each case, check whether

the first number is equal to the second.

**(a) **det(*A*) det(*A**T *)

**(b) **det(*A *+ *B*) det(*A*) + det(*B*)

**(c) **det(*AB*) det(*A*) det(*B*)

**(d) **det(*A**T**B**T *) det(*A**T*) det(*B**T *)

**(e) **det(*A*−1) 1*/ *det(*A*)

**(f) **det(*AB*−1) det(*A*)*/ *det(*B*)

- Are *n* × *n* magic squares nonsingular? Use the MATLAB command **det**(**magic**(*n*)) to compute the determinants of the magic squares matrices in the cases *n* = 3, 4, …, 10. What seems to be happening? Check the cases *n* = 24 and 25 to see if the pattern still holds.

- Set *A* = **round**(10 ∗ **rand**(6)). In each of the following, use MATLAB to compute a second matrix as indicated. State how the second matrix is related to *A* and compute the determinants of both matrices. How are the determinants related?

**(a)** *B* = *A*; *B*(2, :) = *A*(1, :); *B*(1, :) = *A*(2, :)

**(b)** *C* = *A*; *C*(3, :) = 4 ∗ *A*(3, :)

**(c)** *D* = *A*; *D*(5, :) = *A*(5, :) + 2 ∗ *A*(4, :)

- We can generate a random 6 × 6 matrix *A* whose entries consist entirely of 0's and 1's by setting

*A* = **round**(**rand**(6))

**(a) **What percentage of these random 0–1 matrices

are singular? You can estimate the percentage

in MATLAB by setting

**y **= **zeros**(1, 100);

and then generating 100 test matrices and setting

*y*(*j*) = 1 if the *j*th matrix is singular and

0 otherwise. The easy way to do this in MATLAB

is to use a *for loop*. Generate the loop as

follows:

**for ***j *= 1 : 100

*A *= **round**(**rand**(6));

*y*(*j*) = (**det**(*A*) == 0);

**end**

(*Note*: A semicolon at the end of a line suppresses

printout. It is recommended that you

include one at the end of each line of calculation

that occurs inside a for loop.) To determine

how many singular matrices were generated,

use the MATLAB command **sum**(**y**). What

percentage of the matrices generated were

singular?

**(b) **For any positive integer *n*, we can generate

a random 6 ×6 matrix *A *whose entries are

integers from 0 to *n *by setting

*A *= **round**(*n *∗ **rand**(6))

What percentage of random integer matrices

generated in this manner will be singular if

*n *= 3? If *n *= 6? If *n *= 10? We can estimate

the answers to these questions by using MATLAB.

In each case, generate 100 test matrices

and determine how many of the matrices are

singular.

- If a matrix is sensitive to roundoff errors, the computed value of its determinant may differ drastically from the exact value. For an example of this, set

*U* = **round**(100 ∗ **rand**(10));
*U* = **triu**(*U*, 1) + 0.1 ∗ **eye**(10)

In theory,

det(*U*) = det(*U*ᵀ) = 10⁻¹⁰

and

det(*UU*ᵀ) = det(*U*) det(*U*ᵀ) = 10⁻²⁰

Compute det(*U*), det(*U*ᵀ), and det(*U* ∗ *U*ᵀ) with MATLAB. Do the computed values match the theoretical values?

- Use MATLAB to construct a matrix *A* by setting

*A* = **vander**(1 : 6); *A* = *A* − **diag**(**sum**(*A*′))

**(a) **By construction, the entries in each row of *A*

should all add up to zero. To check this, set

**x **= **ones**(6, 1) and use MATLAB to compute

the product *A***x**. The matrix *A *should be

singular. Why? Explain. Use the MATLAB

functions **det **and **inv **to compute the values

of det(*A*) and *A*−1. Which MATLAB function

is a more reliable indicator of singularity?

**(b) **Use MATLAB to compute det(*A**T *). Are the

computed values of det(*A*) and det(*A**T *) equal?

Another way to check if a matrix is singular is

to compute its reduced row echelon form. Use

MATLAB to compute the reduced row echelon

forms of *A *and *A**T *.

**(c) **To see what is going wrong, it helps to know

how MATLAB computes determinants. The

MATLAB routine for determinants first computes

a form of the LU factorization of the

matrix. The determinant of the matrix *L *is

±1, depending on whether an even or odd

number of row interchanges were used in the

computation. The computed value of the determinant

of *A *is the product of the diagonal

entries of *U *and det(*L*) = ±1. To see what is

happening with our original matrix, use the following

commands to compute and display the

factor *U*.

**format short e**

[*L*, *U*] = **lu**(*A*); *U*

In exact arithmetic, *U *should be singular.

Is the computed matrix *U *singular? If

not, what goes wrong? Use the following commands

to see the rest of the computation of

*d *= det(*A*):

**format short**

*d *= **prod**(**diag**(*U*))

**CHAPTER TEST A ****True or False**

For each statement that follows, answer *true *if the statement

is always true and *false *otherwise. In the case of a

true statement, explain or prove your answer. In the case

of a false statement, give an example to show that the

statement is not always true. Assume that all the given

matrices are *n *× *n*.

- det(*AB*) = det(*BA*)
- det(*A* + *B*) = det(*A*) + det(*B*)
- det(*cA*) = *c* det(*A*)
- det((*AB*)ᵀ) = det(*A*) det(*B*)
- det(*A*) = det(*B*) implies *A* = *B*.
- det(*A*ᵏ) = det(*A*)ᵏ
- A triangular matrix is nonsingular if and only if its diagonal entries are all nonzero.
- If **x** is a nonzero vector in Rⁿ and *A***x** = **0**, then det(*A*) = 0.
- If *A* and *B* are row equivalent matrices, then their determinants are equal.
- If *A* ≠ *O*, but *A*ᵏ = *O* (where *O* denotes the zero matrix) for some positive integer *k*, then *A* must be singular.

**CHAPTER TEST B**

- Let *A* and *B* be 3 × 3 matrices with det(*A*) = 4 and det(*B*) = 6, and let *E* be an elementary matrix of type I. Determine the value of each of the following:

**(a)** det(½*A*)  **(b)** det(*B*⁻¹*A*ᵀ)  **(c)** det(*EA*²)

- Let

*A* =
[ *x*   1    1  ]
[ 1    *x*  −1  ]
[ −1   −1   *x* ]

**(a)** Compute the value of det(*A*). (Your answer should be a function of *x*.)

**(b)** For what values of *x* will the matrix be singular? Explain.

- Let

*A* =
[ 1  1   1   1 ]
[ 1  2   3   4 ]
[ 1  3   6  10 ]
[ 1  4  10  20 ]

**(a)** Compute the LU factorization of *A*.

**(b)** Use the LU factorization to determine the value of det(*A*).

- If *A* is a nonsingular *n* × *n* matrix, show that *A*ᵀ*A* is nonsingular and det(*A*ᵀ*A*) > 0.
- Let *A* be an *n* × *n* matrix. Show that if *B* = *S*⁻¹*AS* for some nonsingular matrix *S*, then det(*B*) = det(*A*).

- Let *A* and *B* be *n* × *n* matrices and let *C* = *AB*. Use determinants to show that if either *A* or *B* is singular, then *C* must be singular.
- Let *A* be an *n* × *n* matrix and let *λ* be a scalar. Show that

det(*A* − *λ**I*) = 0

if and only if

*A***x** = *λ***x** for some **x** ≠ **0**

- Let **x** and **y** be vectors in Rⁿ, *n* > 1. Show that if *A* = **xy**ᵀ, then det(*A*) = 0.
- Let **x** and **y** be distinct vectors in Rⁿ (i.e., **x** ≠ **y**), and let *A* be an *n* × *n* matrix with the property that *A***x** = *A***y**. Show that det(*A*) = 0.
- Let *A* be a matrix with integer entries. If |det(*A*)| = 1, then what can you conclude about the nature of the entries of *A*⁻¹? Explain.


**Chapter 3 ****Vector Spaces**

SECTION 3.1 EXERCISES

- Consider the vectors **x**₁ = (8, 6)ᵀ and **x**₂ = (4, −1)ᵀ in R².

**(a)** Determine the length of each vector.

**(b)** Let **x**₃ = **x**₁ + **x**₂. Determine the length of **x**₃. How does its length compare with the sum of the lengths of **x**₁ and **x**₂?

**(c)** Draw a graph illustrating how **x**₃ can be constructed geometrically using **x**₁ and **x**₂. Use this graph to give a geometrical interpretation of your answer to the question in part (b).

- Repeat Exercise 1 for the vectors **x**₁ = (2, 1)ᵀ and **x**₂ = (6, 3)ᵀ.

- Let *C* be the set of complex numbers. Define addition on *C* by

(*a* + *bi*) + (*c* + *di*) = (*a* + *c*) + (*b* + *d*)*i*

and define scalar multiplication by

*α*(*a* + *bi*) = *αa* + *αbi*

for all real numbers *α*. Show that *C* is a vector space with these operations.

- Show that Rᵐ×ⁿ, together with the usual addition and scalar multiplication of matrices, satisfies the eight axioms of a vector space.
- Show that *C*[*a*, *b*], together with the usual scalar multiplication and addition of functions, satisfies the eight axioms of a vector space.
- Let *P* be the set of all polynomials. Show that *P*, together with the usual addition and scalar multiplication of functions, forms a vector space.

- Show that the element **0** in a vector space is unique.
- Let **x**, **y**, and **z** be vectors in a vector space *V*. Prove that if

**x** + **y** = **x** + **z**

then **y** = **z**.

- Let *V* be a vector space and let **x** ∈ *V*. Show that

**(a)** *β***0** = **0** for each scalar *β*.

**(b)** if *α***x** = **0**, then either *α* = 0 or **x** = **0**.

- Let *S* be the set of all ordered pairs of real numbers. Define scalar multiplication and addition on *S* by

*α*(*x*₁, *x*₂) = (*αx*₁, *αx*₂)

(*x*₁, *x*₂) ⊕ (*y*₁, *y*₂) = (*x*₁ + *y*₁, 0)

We use the symbol ⊕ to denote the addition operation for this system in order to avoid confusion with the usual addition **x** + **y** of row vectors. Show that *S*, together with the ordinary scalar multiplication and the addition operation ⊕, is not a vector space. Which of the eight axioms fail to hold?

- Let *V* be the set of all ordered pairs of real numbers with addition defined by

(*x*₁, *x*₂) + (*y*₁, *y*₂) = (*x*₁ + *y*₁, *x*₂ + *y*₂)

and scalar multiplication defined by

*α* ◦ (*x*₁, *x*₂) = (*αx*₁, *x*₂)

Scalar multiplication for this system is defined in an unusual way, and consequently we use the symbol ◦ to avoid confusion with the ordinary scalar multiplication of row vectors. Is *V* a vector space with these operations? Justify your answer.

- Let
*R*+ denote the set of positive real numbers.

Define the operation of scalar multiplication, denoted

◦, by

*α *◦ *x *= *x**α*

for each *x *∈ *R*+ and for any real number *α*. Define

the operation of addition, denoted ⊕, by

*x *⊕ *y *= *x *· *y *for all *x*, *y *∈ *R*+

Thus, for this system, the scalar product of −3

times 1

2 is given by

−3 ◦ 1

2

=

_

1

2

−3

= 8

and the sum of 2 and 5 is given by

2 ⊕ 5 = 2 · 5 = 10

Is *R*+ a vector space with these operations? Prove

your answer.
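The two sample computations above are easy to check numerically. A minimal sketch in Python (the helper names `smul` and `vadd` are ours, chosen for illustration):

```python
# Operations on R+ from the exercise above:
# scalar multiplication  alpha ∘ x = x**alpha,  "addition"  x ⊕ y = x*y.
def smul(alpha, x):
    return x ** alpha

def vadd(x, y):
    return x * y

print(smul(-3, 0.5))  # (1/2)^(-3) = 8.0
print(vadd(2, 5))     # 2 · 5 = 10
```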

- Let *R* denote the set of real numbers. Define scalar multiplication by

  *αx* = *α* · *x* (the usual multiplication of real numbers)

  and define addition, denoted ⊕, by

  *x* ⊕ *y* = max(*x*, *y*) (the maximum of the two numbers)

  Is *R* a vector space with these operations? Prove your answer.

- Let *Z* denote the set of all integers with addition defined in the usual way, and define scalar multiplication, denoted ◦, by

  *α* ◦ *k* = [[*α*]] · *k* for all *k* ∈ *Z*

  where [[*α*]] denotes the greatest integer less than or equal to *α*. For example,

  2.25 ◦ 4 = [[2.25]] · 4 = 2 · 4 = 8

  Show that *Z*, together with these operations, is not a vector space. Which axioms fail to hold?

- Let *S* denote the set of all infinite sequences of real numbers with scalar multiplication and addition defined by

  *α*{*a_n*} = {*αa_n*}

  {*a_n*} + {*b_n*} = {*a_n* + *b_n*}

  Show that *S* is a vector space.

- We can define a one-to-one correspondence between the elements of *P_n* and R*n* by

  *p*(*x*) = *a*1 + *a*2*x* + · · · + *a_n* *x*^(*n*−1) ↔ (*a*1, . . . , *a_n*)*T* = **a**

  Show that if *p* ↔ **a** and *q* ↔ **b**, then

  **(a)** *αp* ↔ *α***a** for any scalar *α*.

  **(b)** *p* + *q* ↔ **a** + **b**.

  [In general, two vector spaces are said to be *isomorphic* if their elements can be put into a one-to-one correspondence that is preserved under scalar multiplication and addition, as in (a) and (b).]

SECTION 3.2 EXERCISES

- Determine whether the following sets form subspaces of R2:

  **(a)** {(*x*1, *x*2)*T* | *x*1 + *x*2 = 0}

  **(b)** {(*x*1, *x*2)*T* | *x*1*x*2 = 0}

  **(c)** {(*x*1, *x*2)*T* | *x*1 = 3*x*2}

  **(d)** {(*x*1, *x*2)*T* | |*x*1| = |*x*2|}

  **(e)** {(*x*1, *x*2)*T* | *x*1^2 = *x*2^2}

- Determine whether the following sets form subspaces of R3:

  **(a)** {(*x*1, *x*2, *x*3)*T* | *x*1 + *x*3 = 1}

  **(b)** {(*x*1, *x*2, *x*3)*T* | *x*1 = *x*2 = *x*3}

  **(c)** {(*x*1, *x*2, *x*3)*T* | *x*3 = *x*1 + *x*2}

  **(d)** {(*x*1, *x*2, *x*3)*T* | *x*3 = *x*1 or *x*3 = *x*2}

- Determine whether the following are subspaces of R2×2:

  **(a)** The set of all 2 × 2 diagonal matrices

  **(b)** The set of all 2 × 2 triangular matrices

  **(c)** The set of all 2 × 2 lower triangular matrices

  **(d)** The set of all 2 × 2 matrices *A* such that *a*12 = 1

  **(e)** The set of all 2 × 2 matrices *B* such that *b*11 = 0

  **(f)** The set of all symmetric 2 × 2 matrices

  **(g)** The set of all singular 2 × 2 matrices

- Determine the null space of each of the following matrices (rows are separated by semicolons):

  **(a)** [2 1; 3 2]

  **(b)** [1 2 −3 −1; −2 −4 6 3]

  **(c)** [1 3 −4; 2 −1 −1; −1 −3 4]

  **(d)** [1 1 −1 2; 2 2 −3 1; −1 −1 0 −5]
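As a cross-check while working these by hand, a computer algebra system can produce a null space basis directly. A sketch in Python, assuming SymPy is available, using matrix (b) above as the example:

```python
from sympy import Matrix

# Matrix (b) from the exercise above.
A = Matrix([[1, 2, -3, -1],
            [-2, -4, 6, 3]])

basis = A.nullspace()  # list of basis vectors for N(A)
for v in basis:
    # Each basis vector must satisfy Av = 0.
    assert A * v == Matrix(2, 1, [0, 0])
print(len(basis))      # dimension of the null space
```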

- Determine whether the following are subspaces of *P*4 (be careful!):

  **(a)** The set of polynomials in *P*4 of even degree

  **(b)** The set of all polynomials of degree 3

  **(c)** The set of all polynomials *p*(*x*) in *P*4 such that *p*(0) = 0

  **(d)** The set of all polynomials in *P*4 having at least one real root

- Determine whether the following are subspaces of *C*[−1, 1]:

  **(a)** The set of functions *f* in *C*[−1, 1] such that *f*(−1) = *f*(1)

  **(b)** The set of odd functions in *C*[−1, 1]

  **(c)** The set of continuous nondecreasing functions on [−1, 1]

  **(d)** The set of functions *f* in *C*[−1, 1] such that *f*(−1) = 0 and *f*(1) = 0

  **(e)** The set of functions *f* in *C*[−1, 1] such that *f*(−1) = 0 or *f*(1) = 0

- Show that *C^n*[*a*, *b*] is a subspace of *C*[*a*, *b*].

- Let *A* be a fixed vector in R*n*×*n* and let *S* be the set of all matrices that commute with *A*, that is,

  *S* = {*B* | *AB* = *BA*}

  Show that *S* is a subspace of R*n*×*n*.

- In each of the following, determine the subspace of R2×2 consisting of all matrices that commute with the given matrix:

  **(a)** [1 0; 0 −1]

  **(b)** [0 0; 1 0]

  **(c)** [1 1; 0 1]

  **(d)** [1 1; 1 1]

- Let *A* be a particular vector in R2×2. Determine whether the following are subspaces of R2×2:

  **(a)** *S*1 = {*B* ∈ R2×2 | *BA* = *O*}

  **(b)** *S*2 = {*B* ∈ R2×2 | *AB* ≠ *BA*}

  **(c)** *S*3 = {*B* ∈ R2×2 | *AB* + *B* = *O*}

- Determine whether the following are spanning sets for R2:

  **(a)** (2, 1)*T*, (3, 2)*T*

  **(b)** (2, 3)*T*, (4, 6)*T*

  **(c)** (−2, 1)*T*, (1, 3)*T*, (2, 4)*T*

  **(d)** (−1, 2)*T*, (1, −2)*T*, (2, −4)*T*

  **(e)** (1, 2)*T*, (−1, 1)*T*

- Which of the sets that follow are spanning sets for R3? Justify your answers.

  **(a)** {(1, 0, 0)*T*, (0, 1, 1)*T*, (1, 0, 1)*T*}

  **(b)** {(1, 0, 0)*T*, (0, 1, 1)*T*, (1, 0, 1)*T*, (1, 2, 3)*T*}

  **(c)** {(2, 1, −2)*T*, (3, 2, −2)*T*, (2, 2, 0)*T*}

  **(d)** {(2, 1, −2)*T*, (−2, −1, 2)*T*, (4, 2, −4)*T*}

  **(e)** {(1, 1, 3)*T*, (0, 2, 1)*T*}

- Given

  **x**1 = (−1, 2, 3)*T*, **x**2 = (3, 4, 2)*T*, **x** = (2, 6, 6)*T*, **y** = (−9, −2, 5)*T*

  **(a)** Is **x** ∈ Span(**x**1, **x**2)?

  **(b)** Is **y** ∈ Span(**x**1, **x**2)?

  Prove your answers.
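A quick numeric check (not a proof) for questions like (a) and (b): a vector **v** lies in Span(**x**1, **x**2) exactly when appending it as a column to (**x**1 **x**2) leaves the rank unchanged. A sketch in Python with NumPy (the helper name `in_span` is ours):

```python
import numpy as np

x1 = np.array([-1.0, 2.0, 3.0])
x2 = np.array([3.0, 4.0, 2.0])
base = np.column_stack([x1, x2])

def in_span(v):
    # v ∈ Span(x1, x2) iff adding v as a column does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([base, v])) == \
           np.linalg.matrix_rank(base)

print(in_span(np.array([2.0, 6.0, 6.0])))    # part (a)
print(in_span(np.array([-9.0, -2.0, 5.0])))  # part (b)
```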

- Let *A* be a 4 × 3 matrix and let **b** ∈ R4. How many possible solutions could the system *A***x** = **b** have if *N*(*A*) = {**0**}? Answer the same question in the case *N*(*A*) ≠ {**0**}. Explain your answers.

- Let *A* be a 4 × 3 matrix and let

  **c** = 2**a**1 + **a**2 + **a**3

  **(a)** If *N*(*A*) = {**0**}, what can you conclude about the solutions to the linear system *A***x** = **c**?

  **(b)** If *N*(*A*) ≠ {**0**}, how many solutions will the system *A***x** = **c** have? Explain.

- Let **x**1 be a particular solution to a system *A***x** = **b** and let {**z**1, **z**2, **z**3} be a spanning set for *N*(*A*). If *Z* = (**z**1 **z**2 **z**3), show that **y** will be a solution to *A***x** = **b** if and only if **y** = **x**1 + *Z***c** for some **c** ∈ R3.

- Let {**x**1, **x**2, . . . , **x***k*} be a spanning set for a vector space *V*.

  **(a)** If we add another vector, **x***k*+1, to the set, will we still have a spanning set? Explain.

  **(b)** If we delete one of the vectors, say, **x***k*, from the set, will we still have a spanning set? Explain.

- In R2×2, let

  *E*11 = [1 0; 0 0], *E*12 = [0 1; 0 0], *E*21 = [0 0; 1 0], *E*22 = [0 0; 0 1]

  Show that *E*11, *E*12, *E*21, *E*22 span R2×2.

- Which of the sets that follow are spanning sets for *P*3? Justify your answers.

  **(a)** {1, *x*^2, *x*^2 − 2}  **(b)** {2, *x*^2, *x*, 2*x* + 3}

  **(c)** {*x* + 2, *x* + 1, *x*^2 − 1}  **(d)** {*x* + 2, *x*^2 − 1}

- Let *S* be the vector space of infinite sequences defined in Exercise 15 of Section 3.1. Let *S*0 be the set of {*a_n*} with the property that *a_n* → 0 as *n* → ∞. Show that *S*0 is a subspace of *S*.

- Prove that if *S* is a subspace of R1, then either *S* = {**0**} or *S* = R1.

- Let *A* be an *n* × *n* matrix. Prove that the following statements are equivalent:

  **(a)** *N*(*A*) = {**0**}.  **(b)** *A* is nonsingular.  **(c)** For each **b** ∈ R*n*, the system *A***x** = **b** has a unique solution.

- Let *U* and *V* be subspaces of a vector space *W*. Prove that their intersection *U* ∩ *V* is also a subspace of *W*.

- Let *S* be the subspace of R2 spanned by **e**1 and let *T* be the subspace of R2 spanned by **e**2. Is *S* ∪ *T* a subspace of R2? Explain.

- Let *U* and *V* be subspaces of a vector space *W*. Define

  *U* + *V* = {**z** | **z** = **u** + **v**, where **u** ∈ *U* and **v** ∈ *V*}

  Show that *U* + *V* is a subspace of *W*.

- Let *S*, *T*, and *U* be subspaces of a vector space *V*. We can form new subspaces using the operations of ∩ and + defined in Exercises 23 and 25. When we do arithmetic with numbers, we know that the operation of multiplication distributes over the operation of addition in the sense that

  *a*(*b* + *c*) = *ab* + *ac*

  It is natural to ask whether similar distributive laws hold for the two operations with subspaces.

  **(a)** Does the intersection operation for subspaces distribute over the addition operation? That is, does

  *S* ∩ (*T* + *U*) = (*S* ∩ *T*) + (*S* ∩ *U*)

  **(b)** Does the addition operation for subspaces distribute over the intersection operation? That is, does

  *S* + (*T* ∩ *U*) = (*S* + *T*) ∩ (*S* + *U*)

SECTION 3.3 EXERCISES

- Determine whether the following vectors are linearly independent in R2:

  **(a)** (2, 1)*T*, (3, 2)*T*

  **(b)** (2, 3)*T*, (4, 6)*T*

  **(c)** (−2, 1)*T*, (1, 3)*T*, (2, 4)*T*

  **(d)** (−1, 2)*T*, (1, −2)*T*, (2, −4)*T*

  **(e)** (1, 2)*T*, (−1, 1)*T*
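These can be checked mechanically: vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A sketch in Python with NumPy (the helper name `independent` is ours):

```python
import numpy as np

def independent(vectors):
    # Stack the vectors as columns; full column rank <=> independence.
    M = np.array(vectors, dtype=float).T
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([[2, 1], [3, 2]]))  # set (a)
print(independent([[2, 3], [4, 6]]))  # set (b): (4, 6) = 2 * (2, 3)
```

Note that any set of three vectors in R2, as in (c) and (d), is automatically dependent.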

- Determine whether the following vectors are linearly independent in R3:

  **(a)** (1, 0, 0)*T*, (0, 1, 1)*T*, (1, 0, 1)*T*

  **(b)** (1, 0, 0)*T*, (0, 1, 1)*T*, (1, 0, 1)*T*, (1, 2, 3)*T*

  **(c)** (2, 1, −2)*T*, (3, 2, −2)*T*, (2, 2, 0)*T*

  **(d)** (2, 1, −2)*T*, (−2, −1, 2)*T*, (4, 2, −4)*T*

  **(e)** (1, 1, 3)*T*, (0, 2, 1)*T*

- For each of the sets of vectors in Exercise 2, describe geometrically the span of the given vectors.

- Determine whether the following vectors are linearly independent in R2×2:

  **(a)** [1 0; 1 1], [0 1; 0 0]

  **(b)** [1 0; 0 1], [0 1; 0 0], [0 0; 1 0]

  **(c)** [1 0; 0 1], [0 1; 0 0], [2 3; 0 2]

- Let **x**1, **x**2, . . . , **x***k* be linearly independent vectors in a vector space *V*.

  **(a)** If we add a vector **x***k*+1 to the collection, will we still have a linearly independent collection of vectors? Explain.

  **(b)** If we delete a vector, say, **x***k*, from the collection, will we still have a linearly independent collection of vectors? Explain.

- Let **x**1, **x**2, and **x**3 be linearly independent vectors in R*n* and let

  **y**1 = **x**1 + **x**2, **y**2 = **x**2 + **x**3, **y**3 = **x**3 + **x**1

  Are **y**1, **y**2, and **y**3 linearly independent? Prove your answer.

- Let **x**1, **x**2, and **x**3 be linearly independent vectors in R*n* and let

  **y**1 = **x**2 − **x**1, **y**2 = **x**3 − **x**2, **y**3 = **x**3 − **x**1

  Are **y**1, **y**2, and **y**3 linearly independent? Prove your answer.
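A numeric experiment, which suggests (but does not replace) the proof for the second of the two exercises above: for any choice of the **x**'s, (**x**2 − **x**1) + (**x**3 − **x**2) = **x**3 − **x**1, so **y**1 + **y**2 − **y**3 vanishes identically. A sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Any vectors x1, x2, x3 work; the identity below holds for all of them.
x1, x2, x3 = rng.standard_normal((3, 5))

y1, y2, y3 = x2 - x1, x3 - x2, x3 - x1
# y1 + y2 - y3 is identically zero, so y1, y2, y3 are linearly dependent.
print(np.allclose(y1 + y2 - y3, 0))
```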

- Determine whether the following vectors are linearly independent in *P*3:

  **(a)** 1, *x*^2, *x*^2 − 2  **(b)** 2, *x*^2, *x*, 2*x* + 3

  **(c)** *x* + 2, *x* + 1, *x*^2 − 1  **(d)** *x* + 2, *x*^2 − 1

- For each of the following, show that the given vectors are linearly independent in *C*[0, 1]:

  **(a)** cos *πx*, sin *πx*  **(b)** *x*^(3/2), *x*^(5/2)

  **(c)** 1, *e^x* + *e*^−*x*, *e^x* − *e*^−*x*  **(d)** *e^x*, *e*^−*x*, *e*^(2*x*)

- Determine whether the vectors cos *x*, 1, and sin^2(*x*/2) are linearly independent in *C*[−*π*, *π*].

- Consider the vectors cos(*x* + *α*) and sin *x* in *C*[−*π*, *π*]. For what values of *α* will the two vectors be linearly dependent? Give a graphical interpretation of your answer.

- Given the functions 2*x* and |*x*|, show that

  **(a)** these two vectors are linearly independent in *C*[−1, 1].

  **(b)** the vectors are linearly dependent in *C*[0, 1].

- Prove that any finite set of vectors that contains the zero vector must be linearly dependent.

- Let **v**1 and **v**2 be two vectors in a vector space *V*. Show that **v**1 and **v**2 are linearly dependent if and only if one of the vectors is a scalar multiple of the other.

- Prove that any nonempty subset of a linearly independent set of vectors {**v**1, . . . , **v***n*} is also linearly independent.

- Let *A* be an *m* × *n* matrix. Show that if *A* has linearly independent column vectors, then *N*(*A*) = {**0**}. [*Hint*: For any **x** ∈ R*n*, *A***x** = *x*1**a**1 + *x*2**a**2 + · · · + *x_n***a***n*.]

- Let **x**1, . . . , **x***k* be linearly independent vectors in R*n*, and let *A* be a nonsingular *n* × *n* matrix. Define **y***i* = *A***x***i* for *i* = 1, . . . , *k*. Show that **y**1, . . . , **y***k* are linearly independent.

- Let *A* be a 3 × 3 matrix and let **x**1, **x**2, **x**3 be vectors in R3. Show that if the vectors

  **y**1 = *A***x**1, **y**2 = *A***x**2, **y**3 = *A***x**3

  are linearly independent, then the matrix *A* must be nonsingular and the vectors **x**1, **x**2, and **x**3 must be linearly independent.

- Let {**v**1, . . . , **v***n*} be a spanning set for the vector space *V*, and let **v** be any other vector in *V*. Show that **v**, **v**1, . . . , **v***n* are linearly dependent.

- Let **v**1, **v**2, . . . , **v***n* be linearly independent vectors in a vector space *V*. Show that **v**2, . . . , **v***n* cannot span *V*.

SECTION 3.4 EXERCISES

- In Exercise 1 of Section 3.3, indicate whether the given vectors form a basis for R2.

- In Exercise 2 of Section 3.3, indicate whether the given vectors form a basis for R3.

- Consider the vectors

  **x**1 = (2, 1)*T*, **x**2 = (4, 3)*T*, **x**3 = (7, −3)*T*

  **(a)** Show that **x**1 and **x**2 form a basis for R2.

  **(b)** Why must **x**1, **x**2, **x**3 be linearly dependent?

  **(c)** What is the dimension of Span(**x**1, **x**2, **x**3)?

- Given the vectors

  **x**1 = (3, −2, 4)*T*, **x**2 = (−3, 2, −4)*T*, **x**3 = (−6, 4, −8)*T*

  what is the dimension of Span(**x**1, **x**2, **x**3)?

- Let

  **x**1 = (2, 1, 3)*T*, **x**2 = (3, −1, 4)*T*, **x**3 = (2, 6, 4)*T*

  **(a)** Show that **x**1, **x**2, and **x**3 are linearly dependent.

  **(b)** Show that **x**1 and **x**2 are linearly independent.

  **(c)** What is the dimension of Span(**x**1, **x**2, **x**3)?

  **(d)** Give a geometric description of Span(**x**1, **x**2, **x**3).

- In Exercise 2 of Section 3.2, some of the sets formed subspaces of R3. In each of these cases, find a basis for the subspace and determine its dimension.

- Find a basis for the subspace *S* of R4 consisting of all vectors of the form (*a* + *b*, *a* − *b* + 2*c*, *b*, *c*)*T*, where *a*, *b*, and *c* are all real numbers. What is the dimension of *S*?

- Given **x**1 = (1, 1, 1)*T* and **x**2 = (3, −1, 4)*T*:

  **(a)** Do **x**1 and **x**2 span R3? Explain.

  **(b)** Let **x**3 be a third vector in R3 and set *X* = (**x**1 **x**2 **x**3). What condition(s) would *X* have to satisfy in order for **x**1, **x**2, and **x**3 to form a basis for R3?

  **(c)** Find a third vector **x**3 that will extend the set {**x**1, **x**2} to a basis for R3.

- Let **a**1 and **a**2 be linearly independent vectors in R3, and let **x** be a vector in R2.

  **(a)** Describe geometrically Span(**a**1, **a**2).

  **(b)** If *A* = (**a**1, **a**2) and **b** = *A***x**, then what is the dimension of Span(**a**1, **a**2, **b**)? Explain.

- The vectors

  **x**1 = (1, 2, 2)*T*, **x**2 = (2, 5, 4)*T*, **x**3 = (1, 3, 2)*T*, **x**4 = (2, 7, 4)*T*, **x**5 = (1, 1, 0)*T*

  span R3. Pare down the set {**x**1, **x**2, **x**3, **x**4, **x**5} to form a basis for R3.

- Let *S* be the subspace of *P*3 consisting of all polynomials of the form *ax*^2 + *bx* + 2*a* + 3*b*. Find a basis for *S*.

- In Exercise 3 of Section 3.2, some of the sets formed subspaces of R2×2. In each of these cases, find a basis for the subspace and determine its dimension.

- In *C*[−*π*, *π*], find the dimension of the subspace spanned by 1, cos 2*x*, cos^2 *x*.

- In each of the following, find the dimension of the subspace of *P*3 spanned by the given vectors:

  **(a)** *x*, *x* − 1, *x*^2 + 1  **(b)** *x*, *x* − 1, *x*^2 + 1, *x*^2 − 1

  **(c)** *x*^2, *x*^2 − *x* − 1, *x* + 1  **(d)** 2*x*, *x* − 2

- Let *S* be the subspace of *P*3 consisting of all polynomials *p*(*x*) such that *p*(0) = 0, and let *T* be the subspace of all polynomials *q*(*x*) such that *q*(1) = 0. Find bases for

  **(a)** *S*  **(b)** *T*  **(c)** *S* ∩ *T*

- In R4, let *U* be the subspace of all vectors of the form (*u*1, *u*2, 0, 0)*T*, and let *V* be the subspace of all vectors of the form (0, *v*2, *v*3, 0)*T*. What are the dimensions of *U*, *V*, *U* ∩ *V*, *U* + *V*? Find a basis for each of these four subspaces. (See Exercises 23 and 25 of Section 3.2.)

- Is it possible to find a pair of two-dimensional subspaces *U* and *V* of R3 whose intersection is {**0**}? Prove your answer. Give a geometrical interpretation of your conclusion. [*Hint*: Let {**u**1, **u**2} and {**v**1, **v**2} be bases for *U* and *V*, respectively. Show that **u**1, **u**2, **v**1, **v**2 are linearly dependent.]

- Show that if *U* and *V* are subspaces of R*n* and *U* ∩ *V* = {**0**}, then

  dim (*U* + *V*) = dim *U* + dim *V*

SECTION 3.5 EXERCISES

- For each of the following, find the transition matrix corresponding to the change of basis from {**u**1, **u**2} to {**e**1, **e**2}:

  **(a)** **u**1 = (1, 1)*T*, **u**2 = (−1, 1)*T*

  **(b)** **u**1 = (1, 2)*T*, **u**2 = (2, 5)*T*

  **(c)** **u**1 = (0, 1)*T*, **u**2 = (1, 0)*T*

- For each of the ordered bases {**u**1, **u**2} in Exercise 1, find the transition matrix corresponding to the change of basis from {**e**1, **e**2} to {**u**1, **u**2}.

- Let **v**1 = (3, 2)*T* and **v**2 = (4, 3)*T*. For each ordered basis {**u**1, **u**2} given in Exercise 1, find the transition matrix from {**v**1, **v**2} to {**u**1, **u**2}.

- Let *E* = [(5, 3)*T*, (3, 2)*T*] and let **x** = (1, 1)*T*, **y** = (1, −1)*T*, and **z** = (10, 7)*T*. Determine the values of [**x**]*E*, [**y**]*E*, and [**z**]*E*.

- Let **u**1 = (1, 1, 1)*T*, **u**2 = (1, 2, 2)*T*, and **u**3 = (2, 3, 4)*T*.

  **(a)** Find the transition matrix corresponding to the change of basis from {**e**1, **e**2, **e**3} to {**u**1, **u**2, **u**3}.

  **(b)** Find the coordinates of each of the following vectors with respect to the ordered basis {**u**1, **u**2, **u**3}: **(i)** (3, 2, 5)*T*  **(ii)** (1, 1, 2)*T*  **(iii)** (2, 3, 2)*T*

- Let **v**1 = (4, 6, 7)*T*, **v**2 = (0, 1, 1)*T*, and **v**3 = (0, 1, 2)*T*, and let **u**1, **u**2, and **u**3 be the vectors given in Exercise 5.

  **(a)** Find the transition matrix from {**v**1, **v**2, **v**3} to {**u**1, **u**2, **u**3}.

  **(b)** If **x** = 2**v**1 + 3**v**2 − 4**v**3, determine the coordinates of **x** with respect to {**u**1, **u**2, **u**3}.

** **

find vectors **u**1 and **u**2 so that *S *will be the transition

matrix from {**v**1, **v**2} to {**u**1, **u**2}.

- Let [*x*, 1] and [2*x* − 1, 2*x* + 1] be ordered bases for *P*2.

  **(a)** Find the transition matrix representing the change in coordinates from [2*x* − 1, 2*x* + 1] to [*x*, 1].

  **(b)** Find the transition matrix representing the change in coordinates from [*x*, 1] to [2*x* − 1, 2*x* + 1].

- Find the transition matrix representing the change of coordinates on *P*3 from the ordered basis [1, *x*, *x*^2] to the ordered basis [1, 1 + *x*, 1 + *x* + *x*^2].

- Let *E* = {**u**1, . . . , **u***n*} and *F* = {**v**1, . . . , **v***n*} be two ordered bases for R*n*, and set

  *U* = (**u**1, . . . , **u***n*), *V* = (**v**1, . . . , **v***n*)

  Show that the transition matrix from *E* to *F* can be determined by calculating the reduced row echelon form of (*V* | *U*).

SECTION 3.6 EXERCISES

- For each of the following matrices, find a basis for the row space, a basis for the column space, and a basis for the null space:

  **(a)** [1 3 2; 2 1 4; 4 7 8]

  **(b)** [−3 1 3 4; 1 2 −1 −2; −3 8 4 2]

  **(c)** [1 3 −2 1; 2 1 3 2; 3 4 5 6]
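For hand-checking exercises like 1(a), the reduced row echelon form gives all three bases at once: the nonzero rows of the rref span the row space, the pivot columns of the original matrix give a column space basis, and the free columns parameterize the null space. A sketch in Python, assuming SymPy is available:

```python
from sympy import Matrix

# Matrix (a) from the exercise above.
A = Matrix([[1, 3, 2],
            [2, 1, 4],
            [4, 7, 8]])

R, pivots = A.rref()
row_basis  = [R.row(i) for i in range(len(pivots))]  # nonzero rows of rref
col_basis  = [A.col(j) for j in pivots]              # pivot columns of A itself
null_basis = A.nullspace()

print(pivots, len(null_basis))
```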

- In each of the following, determine the dimension of the subspace of R3 spanned by the given vectors:

  **(a)** (1, −2, 2)*T*, (2, −2, 4)*T*, (−3, 3, 6)*T*

  **(b)** (1, 1, 1)*T*, (1, 2, 3)*T*, (2, 3, 1)*T*

  **(c)** (1, −1, 2)*T*, (−2, 2, −4)*T*, (3, −2, 5)*T*, (2, −1, 3)*T*

- Let

  *A* = [1 2 2 3 1 4; 2 4 5 5 4 9; 3 6 7 8 5 9]

  **(a)** Compute the reduced row echelon form *U* of *A*. Which column vectors of *U* correspond to the free variables? Write each of these vectors as a linear combination of the column vectors corresponding to the lead variables.

  **(b)** Which column vectors of *A* correspond to the lead variables of *U*? These column vectors form a basis for the column space of *A*. Write each of the remaining column vectors of *A* as a linear combination of these basis vectors.

- For each of the following choices of *A* and **b**, determine whether **b** is in the column space of *A* and state whether the system *A***x** = **b** is consistent:

  **(a)** *A* = [1 2; 2 4], **b** = (4, 8)*T*

  **(b)** *A* = [3 6; 1 2], **b** = (1, 1)*T*

  **(c)** *A* = [2 1; 3 4], **b** = (4, 6)*T*

  **(d)** *A* = [1 1 2; 1 1 2; 1 1 2], **b** = (1, 2, 3)*T*

  **(e)** *A* = [0 1; 1 0; 0 1], **b** = (2, 5, 2)*T*

  **(f)** *A* = [1 2; 2 4; 1 2], **b** = (5, 10, 5)*T*

- For each consistent system in Exercise 4, determine whether there will be one or infinitely many solutions by examining the column vectors of the coefficient matrix *A*.

- How many solutions will the linear system *A***x** = **b** have if **b** is in the column space of *A* and the column vectors of *A* are linearly dependent? Explain.

- Let *A* be a 6 × *n* matrix of rank *r* and let **b** be a vector in R6. For each choice of *r* and *n* that follows, indicate the possibilities as to the number of solutions one could have for the linear system *A***x** = **b**. Explain your answers.

  **(a)** *n* = 7, *r* = 5  **(b)** *n* = 7, *r* = 6  **(c)** *n* = 5, *r* = 5  **(d)** *n* = 5, *r* = 4

- Let *A* be an *m* × *n* matrix with *m* > *n*. Let **b** ∈ R*m* and suppose that *N*(*A*) = {**0**}.

  **(a)** What can you conclude about the column vectors of *A*? Are they linearly independent? Do they span R*m*? Explain.

  **(b)** How many solutions will the system *A***x** = **b** have if **b** is not in the column space of *A*? How many solutions will there be if **b** is in the column space of *A*? Explain.

- Let *A* and *B* be 6 × 5 matrices. If dim *N*(*A*) = 2, what is the rank of *A*? If the rank of *B* is 4, what is the dimension of *N*(*B*)?

- Let *A* be an *m* × *n* matrix whose rank is equal to *n*. If *A***c** = *A***d**, does this imply that **c** must be equal to **d**? What if the rank of *A* is less than *n*? Explain your answers.

- Let *A* be an *m* × *n* matrix. Prove that

  rank(*A*) ≤ min(*m*, *n*)

- Let *A* and *B* be row equivalent matrices.

  **(a)** Show that the dimension of the column space of *A* equals the dimension of the column space of *B*.

  **(b)** Are the column spaces of the two matrices necessarily the same? Justify your answer.

- Let *A* be a 4 × 3 matrix and suppose that the vectors

  **z**1 = (1, 1, 2)*T*, **z**2 = (1, 0, −1)*T*

  form a basis for *N*(*A*). If **b** = **a**1 + 2**a**2 + **a**3, find all solutions of the system *A***x** = **b**.

- Let *A* be a 4 × 4 matrix with reduced row echelon form given by

  *U* = [1 0 2 1; 0 1 1 4; 0 0 0 0; 0 0 0 0]

  If **a**1 = (−3, 5, 2, 1)*T* and **a**2 = (4, −3, 7, −1)*T*, find **a**3 and **a**4.

- Let *A* be a 4 × 5 matrix and let *U* be the reduced row echelon form of *A*. If

  **a**1 = (2, 1, −3, −2)*T*, **a**2 = (−1, 2, 3, 1)*T*,

  *U* = [1 0 2 0 −1; 0 1 3 0 −2; 0 0 0 1 5; 0 0 0 0 0]

  **(a)** find a basis for *N*(*A*).

  **(b)** given that **x**0 is a solution to *A***x** = **b**, where **b** = (0, 5, 3, 4)*T* and **x**0 = (3, 2, 0, 2, 0)*T*,

  **(i)** find all solutions to the system.

  **(ii)** determine the remaining column vectors of *A*.

- Let *A* be a 5 × 8 matrix with rank equal to 5 and let **b** be any vector in R5. Explain why the system *A***x** = **b** must have infinitely many solutions.

- Let *A* be a 4 × 5 matrix. If **a**1, **a**2, and **a**4 are linearly independent and

  **a**3 = **a**1 + 2**a**2, **a**5 = 2**a**1 − **a**2 + 3**a**4

  determine the reduced row echelon form of *A*.

- Let *A* be a 5 × 3 matrix of rank 3 and let {**x**1, **x**2, **x**3} be a basis for R3.

  **(a)** Show that *N*(*A*) = {**0**}.

  **(b)** Show that if **y**1 = *A***x**1, **y**2 = *A***x**2, and **y**3 = *A***x**3, then **y**1, **y**2, and **y**3 are linearly independent.

  **(c)** Do the vectors **y**1, **y**2, **y**3 from part (b) form a basis for R5? Explain.

- Let *A* be an *m* × *n* matrix with rank equal to *n*. Show that if **x** ≠ **0** and **y** = *A***x**, then **y** ≠ **0**.

- Prove that a linear system *A***x** = **b** is consistent if and only if the rank of (*A* | **b**) equals the rank of *A*.
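This rank criterion is easy to apply numerically. A sketch in Python with NumPy, using system (b) of Exercise 4 as an example (the helper name `consistent` is ours):

```python
import numpy as np

def consistent(A, b):
    # Ax = b is solvable exactly when rank(A | b) == rank(A).
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

A = np.array([[3.0, 6.0],
              [1.0, 2.0]])
print(consistent(A, np.array([3.0, 1.0])))  # b is A's first column: consistent
print(consistent(A, np.array([1.0, 1.0])))  # b from Exercise 4(b)
```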

- Let *A* and *B* be *m* × *n* matrices. Show that

  rank(*A* + *B*) ≤ rank(*A*) + rank(*B*)

- Let *A* be an *m* × *n* matrix.

  **(a)** Show that if *B* is a nonsingular *m* × *m* matrix, then *BA* and *A* have the same null space and hence the same rank.

  **(b)** Show that if *C* is a nonsingular *n* × *n* matrix, then *AC* and *A* have the same rank.

- Prove Corollary 3.6.4.

- Show that if *A* and *B* are *n* × *n* matrices and *N*(*A* − *B*) = R*n*, then *A* = *B*.

- Let *A* and *B* be *n* × *n* matrices.

  **(a)** Show that *AB* = *O* if and only if the column space of *B* is a subspace of the null space of *A*.

  **(b)** Show that if *AB* = *O*, then the sum of the ranks of *A* and *B* cannot exceed *n*.

- Let *A* ∈ R*m*×*n* and **b** ∈ R*m*, and let **x**0 be a particular solution of the system *A***x** = **b**. Prove that if *N*(*A*) = {**0**}, then the solution **x**0 must be unique.

- Let **x** and **y** be nonzero vectors in R*m* and R*n*, respectively, and let *A* = **xy***T*.

  **(a)** Show that {**x**} is a basis for the column space of *A* and that {**y***T*} is a basis for the row space of *A*.

  **(b)** What is the dimension of *N*(*A*)?

- Let *A* ∈ R*m*×*n*, *B* ∈ R*n*×*r*, and *C* = *AB*. Show that

  **(a)** the column space of *C* is a subspace of the column space of *A*.

  **(b)** the row space of *C* is a subspace of the row space of *B*.

  **(c)** rank(*C*) ≤ min{rank(*A*), rank(*B*)}.

- Let *A* ∈ R*m*×*n*, *B* ∈ R*n*×*r*, and *C* = *AB*. Show that

  **(a)** if *A* and *B* both have linearly independent column vectors, then the column vectors of *C* will also be linearly independent.

  **(b)** if *A* and *B* both have linearly independent row vectors, then the row vectors of *C* will also be linearly independent. [*Hint*: Apply part (a) to *C*^*T*.]

- Let *A* ∈ R*m*×*n*, *B* ∈ R*n*×*r*, and *C* = *AB*. Show that

  **(a)** if the column vectors of *B* are linearly dependent, then the column vectors of *C* must be linearly dependent.

  **(b)** if the row vectors of *A* are linearly dependent, then the row vectors of *C* are linearly dependent. [*Hint*: Apply part (a) to *C*^*T*.]

- An *m* × *n* matrix *A* is said to have a *right inverse* if there exists an *n* × *m* matrix *C* such that *AC* = *I_m*. The matrix *A* is said to have a *left inverse* if there exists an *n* × *m* matrix *D* such that *DA* = *I_n*.

  **(a)** Show that if *A* has a right inverse, then the column vectors of *A* span R*m*.

  **(b)** Is it possible for an *m* × *n* matrix to have a right inverse if *n* < *m*? If *n* ≥ *m*? Explain.

- Prove: If *A* is an *m* × *n* matrix and the column vectors of *A* span R*m*, then *A* has a right inverse. [*Hint*: Let **e***j* denote the *j*th column of *I_m* and solve *A***x** = **e***j* for *j* = 1, . . . , *m*.]

- Show that a matrix *B* has a left inverse if and only if *B*^*T* has a right inverse.

- Let *B* be an *n* × *m* matrix whose columns are linearly independent. Show that *B* has a left inverse.

- Prove that if a matrix *B* has a left inverse, then the columns of *B* are linearly independent.

- Show that if a matrix *U* is in row echelon form, then the nonzero row vectors of *U* form a basis for the row space of *U*.

**MATLAB EXERCISES**

- (Change of Basis) Set

  *U* = **round**(20 ∗ **rand**(4)) − 10, *V* = **round**(10 ∗ **rand**(4))

  and set **b** = **ones**(4, 1).

  **(a)** We can use the MATLAB function **rank** to determine whether the column vectors of a matrix are linearly independent. What should the rank be if the column vectors of *U* are linearly independent? Compute the rank of *U*, and verify that its column vectors are linearly independent and hence form a basis for R4. Compute the rank of *V*, and verify that its column vectors also form a basis for R4.

  **(b)** Use MATLAB to compute the transition matrix from the standard basis for R4 to the ordered basis *E* = {**u**1, **u**2, **u**3, **u**4}. [Note that in MATLAB the notation for the *j*th column vector **u***j* is *U*(:, *j*).] Use this transition matrix to compute the coordinate vector **c** of **b** with respect to *E*. Verify that

  **b** = *c*1**u**1 + *c*2**u**2 + *c*3**u**3 + *c*4**u**4 = *U***c**

  **(c)** Use MATLAB to compute the transition matrix from the standard basis to the ordered basis *F* = {**v**1, **v**2, **v**3, **v**4}, and use this transition matrix to find the coordinate vector **d** of **b** with respect to *F*. Verify that

  **b** = *d*1**v**1 + *d*2**v**2 + *d*3**v**3 + *d*4**v**4 = *V***d**

  **(d)** Use MATLAB to compute the transition matrix *S* from *E* to *F* and the transition matrix *T* from *F* to *E*. How are *S* and *T* related? Verify that *S***c** = **d** and *T***d** = **c**.
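For readers without MATLAB, the same experiment can be run in Python with NumPy. This sketch mirrors the round/rand construction above (redrawing in the unlikely event that a random sample is rank deficient):

```python
import numpy as np

rng = np.random.default_rng()

def random_basis(scale, shift=0):
    # Mirror of round(scale*rand(4)) + shift; redraw until rank 4.
    while True:
        M = np.round(scale * rng.random((4, 4))) + shift
        if np.linalg.matrix_rank(M) == 4:
            return M

U = random_basis(20, -10)
V = random_basis(10)
b = np.ones(4)

c = np.linalg.solve(U, b)  # coordinates of b relative to E (columns of U)
d = np.linalg.solve(V, b)  # coordinates of b relative to F (columns of V)
S = np.linalg.solve(V, U)  # transition matrix from E to F (solves V S = U)
print(np.allclose(U @ c, b), np.allclose(S @ c, d))
```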

- (Rank-Deficient Matrices) In this exercise we consider

how to use MATLAB to generate matrices with

specified ranks.

**(a) **In general, if *A *is an *m *× *n *matrix with rank

*r*, then *r *≤ min(*m*, *n*). Why? Explain. If the

entries of *A *are random numbers, we would expect

that *r *= min(*m*, *n*). Why? Explain. Check

this out by generating random 6 × 6, 8 × 6, and

5 × 8 matrices and using the MATLAB command

**rank **to compute their ranks. Whenever

the rank of an *m*×*n *matrix equals min(*m*, *n*), we

say that the matrix has *full rank*. Otherwise, we

say that the matrix is *rank deficient*.

**(b) **MATLAB’s **rand **and **round **commands can

be used to generate random *m *× *n *matrices with

integer entries in a given range [*a*, *b*]. This can

be done with a command of the form

*A *= **round**((*b *− *a*) ∗ **rand**(*m*, *n*)) + *a*

For example, the command

*A *= **round**(4 ∗ **rand**(6, 8)) + 3

will generate a 6 × 8 matrix whose entries are

random integers in the range from 3 to 7. Using

the range [1, 10], create random integer 10 × 7,

8 × 12, and 10 × 15 matrices and in each case

check the rank of the matrix. Do these integer

matrices all have full rank?

**(c) **Suppose that we want to use MATLAB to generate

matrices with less than full rank. It is easy

to generate matrices of rank 1. If **x **and **y **are

nonzero vectors in R*m *and R*n*, respectively, then

*A *= **xy***T *will be an *m *× *n *matrix with rank

- Why? Explain. Verify this in MATLAB by

setting

**x **= **round**(9 ∗ **rand**(8, 1)) + 1,

**y **= **round**(9 ∗ **rand**(6, 1)) + 1

and using these vectors to construct an 8×6matrix

*A*. Check the rank of*A*with the MATLAB

command **rank**.

**(d) **In general,

rank(*AB*) ≤ min(rank(*A*), rank(*B*)) (1)

(See Exercise 28 in Section 3.6.) If *A *and *B*

are noninteger random matrices, the relation (1)

should be an equality. Generate an 8 ×6 matrix

*A *by setting

*X *= **rand**(8, 2), *Y *= **rand**(6, 2),

*A *= *X *∗ *Y*′

What would you expect the rank of *A *to be?

Explain. Test the rank of *A *with MATLAB.

**(e) **Use MATLAB to generate matrices *A*, *B*, and *C*

such that

**(i) ***A *is 8 × 8 with rank 3.

**(ii) ***B *is 6 × 9 with rank 4.

**(iii) ***C *is 10 × 7 with rank 5.
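The checks in parts (a)–(e) translate directly to NumPy; the sketch below (a Python stand-in for the MATLAB commands, assuming `numpy` is available) verifies the full-rank claim for a random matrix, the rank-1 outer product of part (c), and the rank-2 product of part (d):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 8x6 matrix almost surely has full rank min(8, 6) = 6.
A = rng.random((8, 6))
print(np.linalg.matrix_rank(A))

# An outer product x y^T of nonzero vectors always has rank 1:
# every column of x y^T is a scalar multiple of x.
x = rng.integers(1, 10, size=(8, 1))
y = rng.integers(1, 10, size=(6, 1))
B = x @ y.T
print(np.linalg.matrix_rank(B))

# rank(X Y^T) <= min(rank X, rank Y): X (8x2) and Y (6x2) each have
# rank 2, so X Y^T has rank 2 with probability 1.
X = rng.random((8, 2))
Y = rng.random((6, 2))
print(np.linalg.matrix_rank(X @ Y.T))
```

A product *XY*′ with *X* and *Y* each of rank *r* is the usual way to manufacture the rank-3, rank-4, and rank-5 matrices asked for in part (e).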

- (Column Space and Reduced Row Echelon Form)

Set

*B *= **round**(10 ∗ **rand**(8, 4))

*X *= **round**(10 ∗ **rand**(4, 3))

*C *= *B *∗ *X*

*A *= [*B C*]

**(a) **How are the column spaces of *B *and *C *related?

(See Exercise 28 in Section 3.6.) What would

you expect the rank of *A *to be? Explain. Use

MATLAB to check your answer.

**(b) **Which column vectors of *A *should form a basis

for its column space? Explain. If *U *is the reduced

row echelon form of *A*, what would you

expect its first four columns to be? Explain.

What would you expect its last four rows to be?

Explain. Use MATLAB to verify your answers

by computing *U*.

**(c) **Use MATLAB to construct another matrix

*D *= (*E EY*), where *E *is a random 6×4 matrix

and *Y *is a random 4×2 matrix. What would

you expect the reduced row echelon form of *D*

to be? Compute it with MATLAB. Show that, in

general, if *B *is an *m *× *n *matrix of rank *n *and *X*

is an *n*×*k *matrix, the reduced row echelon form

of (*B BX*) will have block structure

(*I X*) if *m* = *n*, or [*I X*; *O O*] if *m* > *n*
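The key fact behind part (a) is that each column of *C* = *BX* is a linear combination of the columns of *B*, so adjoining *C* adds nothing to the column space. A NumPy sketch of the construction (a stand-in for the MATLAB commands):

```python
import numpy as np

rng = np.random.default_rng(1)

# Mirror of B = round(10*rand(8,4)), X = round(10*rand(4,3)), C = B*X.
B = np.round(10 * rng.random((8, 4)))
X = np.round(10 * rng.random((4, 3)))
C = B @ X
A = np.hstack([B, C])   # A = [B C]

# Every column of C lies in the column space of B, so the ranks agree.
print(np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A))  # True
```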

- (Rank-1 Updates of Linear Systems)

**(a) **Set

*A *= **round**(10 ∗ **rand**(8))

**b **= **round**(10 ∗ **rand**(8, 1))

*M *= **inv**(*A*)

Use the matrix *M *to solve the system *A***y **= **b**

for **y**.

**(b) **Consider now a new system *C***x **= **b**, where *C *is

constructed as follows:

**u **= **round**(10 ∗ **rand**(8, 1))

**v **= **round**(10 ∗ **rand**(8, 1))

*E *= **u** ∗ **v**′

*C *= *A *+ *E*

The matrices *C *and *A *differ by the rank-1 matrix

*E*. Use MATLAB to verify that the rank of *E*

is 1. Use MATLAB's “\” operation to solve the

system *C***x **= **b **and then compute the residual

vector **r **= **b **− *A***x**.

**(c) **Let us now solve *C***x **= **b **by a new method that

takes advantage of the fact that *A *and *C *differ by

a rank-1 matrix. This new procedure is called a

*rank-1 update *method. Set

**z** = *M* ∗ **u**, *c* = **v**′ ∗ **y**,

*d* = **v**′ ∗ **z**, *e* = *c*/(1 + *d*)

and then compute the solution **x **by

**x **= **y **− *e *∗ **z**

Compute the residual vector **b **− *C***x **and compare

it with the residual vector in part (b). This

new method may seem more complicated, but it

actually is much more computationally efficient.

**(d) **To see why the rank-1 update method works, use

MATLAB to compute and compare

*C***y **and **b **+ *c***u**

Prove that if all computations had been carried

out in exact arithmetic, these two vectors would

be equal. Also, compute

*C***z **and (1 + *d*)**u**

Prove that if all computations had been carried

out in exact arithmetic, these two vectors

would be equal. Use these identities to prove

that *C***x **= **b**. Assuming that *A *is nonsingular,

will the rank-1 update method always work?

Under what conditions could it fail? Explain.
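The update in part (c) is the Sherman–Morrison formula: since *C* = *A* + **uv**′, the solution of *C***x** = **b** is **y** − (**v**′**y**)/(1 + **v**′*A*⁻¹**u**) · *A*⁻¹**u**, which costs only matrix–vector products once *M* = *A*⁻¹ is known. A NumPy sketch of the session (assuming, as happens with probability 1, that the random *A* is nonsingular):

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.round(10 * rng.random((8, 8)))
b = np.round(10 * rng.random(8))
M = np.linalg.inv(A)
y = M @ b                      # solution of A y = b

u = np.round(10 * rng.random(8))
v = np.round(10 * rng.random(8))
C = A + np.outer(u, v)         # rank-1 update of A

# Sherman-Morrison style update: reuse M = inv(A) instead of
# factoring C from scratch.
z = M @ u
c = v @ y
d = v @ z
e = c / (1 + d)
x = y - e * z

# x solves C x = b whenever 1 + d != 0.
print(np.linalg.norm(b - C @ x))
```

The method fails exactly when 1 + *d* = 1 + **v**′*A*⁻¹**u** = 0, in which case *C* is singular.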

**CHAPTER TEST A ****True or False**

*Answer each of the statements that follows as *true *or*

false*. In each case, explain or prove your answer.*

- If *S* is a subspace of a vector space *V*, then *S* is a vector space.

- R2 is a subspace of R3.

- It is possible to find a pair of two-dimensional subspaces *S* and *T* of R3 such that *S* ∩ *T* = {**0**}.

- If *S* and *T* are subspaces of a vector space *V*, then *S* ∪ *T* is a subspace of *V*.

- If *S* and *T* are subspaces of a vector space *V*, then *S* ∩ *T* is a subspace of *V*.

- If **x**1, **x**2, *. . .*, **x***n* span R*n*, then they are linearly independent.

- If **x**1, **x**2, *. . .*, **x***n* span a vector space *V*, then they are linearly independent.

- If **x**1, **x**2, *. . .*, **x***k* are vectors in a vector space *V* and Span(**x**1, **x**2, *. . .*, **x***k*) = Span(**x**1, **x**2, *. . .*, **x***k*−1), then **x**1, **x**2, *. . .*, **x***k* are linearly dependent.

- If *A* is an *m* × *n* matrix, then *A* and *A**T* have the same rank.

- If *A* is an *m* × *n* matrix, then *A* and *A**T* have the same nullity.

- If *U* is the reduced row echelon form of *A*, then *A* and *U* have the same row space.

- If *U* is the reduced row echelon form of *A*, then *A* and *U* have the same column space.

- Let **x**1, **x**2, *. . .*, **x***k* be linearly independent vectors in R*n*. If *k* < *n* and **x***k*+1 is a vector that is not in Span(**x**1, **x**2, *. . .*, **x***k*), then the vectors **x**1, **x**2, *. . .*, **x***k*, **x***k*+1 are linearly independent.

- Let {**u**1, **u**2}, {**v**1, **v**2}, and {**w**1, **w**2} be bases for R2. If *X* is the transition matrix corresponding to a change of basis from {**u**1, **u**2} to {**v**1, **v**2} and *Y* is the transition matrix corresponding to a change of basis from {**v**1, **v**2} to {**w**1, **w**2}, then *Z* = *XY* is the transition matrix corresponding to the change of basis from {**u**1, **u**2} to {**w**1, **w**2}.

- If *A* and *B* are *n* × *n* matrices that have the same rank, then the rank of *A*2 must equal the rank of *B*2.

**CHAPTER TEST B**

- In R3, let **x**1 and **x**2 be linearly independent vectors

and let **x**3 = **0** (the zero vector). Are **x**1, **x**2, and **x**3

linearly independent? Prove your answer.

- For each set that follows, determine whether it is a

subspace of R2. Prove your answers.

**(a) ***S*1 = {**x** = (*x*1, *x*2)*T* | *x*1 + *x*2 = 0}

**(b) ***S*2 = {**x** = (*x*1, *x*2)*T* | *x*1*x*2 = 0}

- Let

*A* = [1 3 1 3 4; 0 0 1 1 1; 0 0 2 2 2; 0 0 3 3 3]

**(a) **Find a basis for *N*(*A*) (the null space of *A*).

What is the dimension of *N*(*A*)?

**(b) **Find a basis for the column space of *A*. What

is the rank of *A*?

- How do the dimensions of the null space and

column space of a matrix relate to the number of

lead and free variables in the reduced row echelon

form of the matrix? Explain.

- Answer the following questions and, in each case,

give geometric explanations of your answers:

**(a) **Is it possible to have a pair of one-dimensional

subspaces *U*1 and *U*2 of R3 such that *U*1∩*U*2 =

{**0**}?

**(b) **Is it possible to have a pair of two-dimensional

subspaces *V*1 and *V*2 of R3 such that *V*1 ∩*V*2 =

{**0**}?

- Let S be the set of all symmetric 2 × 2 matrices

with real entries.

**(a) **Show that *S *is a subspace of R2×2.

**(b) **Find a basis for *S*.

- Let *A* be a 6 × 4 matrix of rank 4.

**(a) **What is the dimension of *N*(*A*)? What is the

dimension of the column space of *A*?

**(b) **Do the column vectors of *A *span R6? Are

the column vectors of *A *linearly independent?

Explain your answers.

**(c) **How many solutions will the linear system

*A***x **= **b **have if **b **is in the column space of

*A*? Explain.

- Given the vectors

**x**1 = (1, 2, 2)*T*, **x**2 = (1, 3, 3)*T*, **x**3 = (1, 5, 5)*T*, **x**4 = (1, 2, 3)*T*

**(a) **Are **x**1, **x**2, **x**3, and **x**4 linearly independent in

R3? Explain.

**(b) **Do **x**1, **x**2 span R3? Explain.

**(c) **Do **x**1, **x**2, **x**3 span R3? Are they linearly independent?

Do they form a basis for R3? Explain.

**(d) **Do **x**1, **x**2, **x**4 span R3? Are they linearly independent?

Do they form a basis for R3? Explain

or prove your answers.

- Let **x**1, **x**2, and **x**3 be linearly independent vectors in

R4 and let *A* be a nonsingular 4 × 4 matrix. Prove

that if

**y**1 = *A***x**1, **y**2 = *A***x**2, **y**3 = *A***x**3

then **y**1, **y**2, and **y**3 are linearly independent.

- Let *A* be a 6 × 5 matrix with linearly independent

column vectors **a**1, **a**2, **a**3 and whose remaining

column vectors satisfy

**a**4 = **a**1 + 3**a**2 + **a**3, **a**5 = 2**a**1 − **a**3

**(a) **What is the dimension of *N*(*A*)? Explain.

**(b) **Determine the reduced row echelon form of *A*.

- Let {**u**1, **u**2} and {**v**1, **v**2} be ordered bases for R2,

where

**u**1 = (1, 3)*T*, **u**2 = (2, 7)*T* and **v**1 = (5, 2)*T*, **v**2 = (4, 9)*T*

**(a) **Determine the transition matrix corresponding

to a change of basis from the standard

basis {**e**1, **e**2} to the ordered basis {**u**1, **u**2}. Use

this transition matrix to find the coordinates of

**x **= (1, 1)*T *with respect to {**u**1, **u**2}.

**(b) **Determine the transition matrix corresponding

to a change of basis from the ordered basis

{**v**1, **v**2} to the ordered basis {**u**1, **u**2}. Use this

transition matrix to find the coordinates of

**z **= 2**v**1 + 3**v**2 with respect to {**u**1, **u**2}.
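A way to check answers to this kind of question: if *U* = (**u**1, **u**2) has the basis vectors as columns, the transition matrix from the standard basis to {**u**1, **u**2} is *U*⁻¹, and the transition matrix from {**v**1, **v**2} to {**u**1, **u**2} is *U*⁻¹*V*. A NumPy sketch with the vectors of this exercise:

```python
import numpy as np

# Basis vectors from the exercise.
U = np.array([[1.0, 2.0],
              [3.0, 7.0]])        # columns u1, u2
V = np.array([[5.0, 4.0],
              [2.0, 9.0]])        # columns v1, v2

# (a) Coordinates of x with respect to {u1, u2} are U^{-1} x.
Uinv = np.linalg.inv(U)
x = np.array([1.0, 1.0])
coords = Uinv @ x                 # (5, -2): check 5*u1 - 2*u2 = (1, 1)^T
print(coords)

# (b) Transition from {v1, v2} to {u1, u2} is U^{-1} V.
S = Uinv @ V
z_coords = S @ np.array([2.0, 3.0])   # z = 2 v1 + 3 v2 in {u1, u2} coords
# Rebuilding z from its {u1, u2} coordinates recovers 2 v1 + 3 v2.
print(np.allclose(U @ z_coords, V @ np.array([2.0, 3.0])))  # True
```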


**Chapter 4 ****Linear Transformations**

SECTION 4.1 EXERCISES

- Show that each of the following is a linear operator

on R2. Describe geometrically what each linear

transformation accomplishes.

**(a) ***L *(**x**) = (−*x*1, *x*2)*T ***(b) ***L *(**x**) = −**x**

**(c) ***L *(**x**) = (*x*2, *x*1)*T* **(d) ***L *(**x**) = (1/2)**x**

**(e) ***L *(**x**) = *x*2**e**2

- Let *L* be the linear operator on R2 defined by

*L *(**x**) = (*x*1 cos *α *− *x*2 sin *α*, *x*1 sin *α *+ *x*2 cos *α*)*T*

Express *x*1, *x*2, and *L *(**x**) in terms of polar coordinates.

Describe geometrically the effect of the linear

transformation.

- Let **a** be a fixed nonzero vector in R2. A mapping

of the form

*L *(**x**) = **x **+ **a**

is called a *translation*. Show that a translation is not

a linear operator. Illustrate geometrically the effect

of a translation.

- Let *L*: R2 → R2 be a linear operator. If

*L *((1, 2)*T *) = (−2, 3)*T*

and

*L *((1,−1)*T *) = (5, 2)*T*

find the value of *L *((7, 5)*T *).
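One route: write (7, 5)*T* as a combination of (1, 2)*T* and (1, −1)*T* and use linearity. Solving *a* + *b* = 7, 2*a* − *b* = 5 gives *a* = 4, *b* = 3, so *L*((7, 5)*T*) = 4(−2, 3)*T* + 3(5, 2)*T* = (7, 18)*T*. A quick numeric check:

```python
import numpy as np

# Write (7, 5)^T as a combination of (1, 2)^T and (1, -1)^T ...
B = np.array([[1.0, 1.0],
              [2.0, -1.0]])            # columns (1, 2)^T and (1, -1)^T
a, b = np.linalg.solve(B, np.array([7.0, 5.0]))
print(a, b)                            # 4.0 3.0

# ... then use linearity: L(7, 5) = a * L(1, 2) + b * L(1, -1).
L_images = np.array([[-2.0, 5.0],
                     [3.0, 2.0]])      # columns L(1,2)^T and L(1,-1)^T
result = a * L_images[:, 0] + b * L_images[:, 1]
print(result)                          # [ 7. 18.]
```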

- Determine whether the following are linear transformations

from R3 into R2.

**(a) ***L *(**x**) = (*x*2, *x*3)*T ***(b) ***L *(**x**) = (0, 0)*T*

**(c) ***L *(**x**) = (1 + *x*1, *x*2)*T*

**(d) ***L *(**x**) = (*x*3, *x*1 + *x*2)*T*

- Determine whether the following are linear transformations

from R2 into R3.

**(a) ***L *(**x**) = (*x*1, *x*2, 1)*T*

**(b) ***L *(**x**) = (*x*1, *x*2, *x*1 + 2*x*2)*T*

**(c) ***L *(**x**) = (*x*1, 0, 0)*T*

**(d) ***L *(**x**) = (*x*1, *x*2, *x*1² + *x*2²)*T*

- Determine whether the following are linear operators

on R*n*×*n*.

**(a) ***L *(*A*) = 2*A ***(b) ***L *(*A*) = *A**T*

**(c) ***L *(*A*) = *A *+ *I ***(d) ***L *(*A*) = *A *− *A**T*

- Let *C* be a fixed *n* × *n* matrix. Determine whether

the following are linear operators on R*n*×*n*:

**(a) ***L *(*A*) = *CA *+ *AC ***(b) ***L *(*A*) = *C*2*A*

**(c) ***L *(*A*) = *A*2*C*

- Determine whether the following are linear transformations

from *P*2 to *P*3.

**(a) ***L *(*p*(*x*)) = *xp*(*x*)

**(b) ***L *(*p*(*x*)) = *x*2 + *p*(*x*)

**(c) ***L *(*p*(*x*)) = *p*(*x*) + *xp*(*x*) + *x*2*p*′(*x*)

- For each *f* ∈ *C*[0, 1], define *L*(*f*) = *F*, where

*F*(*x*) = ∫₀ˣ *f*(*t*) *dt*, 0 ≤ *x* ≤ 1

Show that *L *is a linear operator on *C*[0, 1] and then

find *L *(*e**x*) and *L *(*x*2).

- Determine whether the following are linear transformations

from *C *[0, 1] into R1:

**(a) ***L *(*f *) = *f *(0) **(b) ***L *(*f *) = |*f *(0)|

**(c) ***L *(*f *) = [ *f *(0) + *f *(1)]*/*2

**(d) ***L *(*f*) = (∫₀¹ [*f*(*x*)]² *dx*)^(1/2)

- Use mathematical induction to prove that if *L* is a

linear transformation from *V *to *W*, then

*L *(*α*1**v**1 + *α*2**v**2 +· · ·+*α**n***v***n*)

= *α*1*L *(**v**1) + *α*2*L *(**v**2)+· · ·+*α**n**L *(**v***n*)

- Let {**v**1, *. . .*, **v***n*} be a basis for a vector space *V*, and

let *L*1 and *L*2 be two linear transformations mapping

*V *into a vector space *W*. Show that if

*L*1(**v***i*) = *L*2(**v***i*)

for each *i *= 1, *. . . *, *n*, then *L*1 = *L*2 [i.e., show that

*L*1(**v**) = *L*2(**v**) for all **v **∈ *V*].

- Let *L* be a linear operator on R1 and let *a* = *L*(1).

Show that *L *(*x*) = *ax *for all *x *∈ R1.

- Let *L* be a linear operator on a vector space *V*.

Define *L**n*, *n *≥ 1, recursively by

*L*1 = *L*

*L**k*+1(**v**) = *L *(*L**k*(**v**)) for all **v **∈ *V*

Show that *L**n *is a linear operator on *V *for each

*n *≥ 1.

- Let *L*1: *U* → *V* and *L*2: *V* → *W* be linear transformations,

and let *L *= *L*2 ◦ *L*1 be the mapping

defined by

*L *(**u**) = *L*2(*L*1(**u**))

for each **u **∈ *U*. Show that *L *is a linear transformation

mapping *U *into *W*.

- Determine the kernel and range of each of the

following linear operators on R3:

**(a) ***L *(**x**) = (*x*3, *x*2, *x*1)*T ***(b) ***L *(**x**) = (*x*1, *x*2, 0)*T*

**(c) ***L *(**x**) = (*x*1, *x*1, *x*1)*T*

- Let *S* be the subspace of R3 spanned by **e**1 and **e**2.

For each linear operator *L *in Exercise 17, find *L *(*S*).

- Find the kernel and range of each of the following

linear operators on *P*3:

**(a) ***L *(*p*(*x*)) = *xp*′(*x*) **(b) ***L *(*p*(*x*)) = *p*(*x*) − *p*′(*x*)

**(c) ***L *(*p*(*x*)) = *p*(0)*x *+ *p*(1)

- Let *L*: *V* → *W* be a linear transformation, and

let *T *be a subspace of *W*. The *inverse image *of *T*,

denoted *L*−1(*T*), is defined by

*L*−1(*T*) = {**v **∈ *V*|*L *(**v**) ∈ *T*}

Show that *L*−1(*T*) is a subspace of *V*.

- A linear transformation *L*: *V* → *W* is said to be

*one-to-one *if *L *(**v**1) = *L *(**v**2) implies that **v**1 = **v**2

(i.e., no two distinct vectors **v**1, **v**2 in *V *get mapped

into the same vector **w **∈ *W*). Show that *L *is

one-to-one if and only if ker(*L*) = {**0***V *}.

- A linear transformation *L*: *V* → *W* is said to

map *V onto W *if *L *(*V*) = *W*. Show that the linear

transformation *L *defined by

*L *(**x**) = (*x*1, *x*1 + *x*2, *x*1 + *x*2 + *x*3)*T*

maps R3 onto R3.
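One way to see this: the standard matrix of *L* is lower triangular with 1's on the diagonal, hence nonsingular, so every **b** ∈ R3 has the preimage *A*⁻¹**b**. A small numeric check (any **b** would do):

```python
import numpy as np

# Standard matrix of L(x) = (x1, x1 + x2, x1 + x2 + x3)^T.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

# Triangular with nonzero diagonal, so det(A) = 1 and A is nonsingular.
print(np.linalg.det(A))

# Hence an arbitrary target b has a preimage, i.e. L is onto.
b = np.array([2.0, 3.0, 5.0])
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))  # True
```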

- Which of the operators defined in Exercise 17 are

one-to-one? Which map R3 onto R3?

- Let *A* be a 2 × 2 matrix, and let *L**A* be the linear

operator defined by

*L**A*(**x**) = *A***x**

Show that

**(a) ***L**A *maps R2 onto the column space of *A*.

**(b) **if *A *is nonsingular, then *L**A *maps R2 onto R2.

- Let *D* be the differentiation operator on *P*3, and let

*S *= {*p *∈ *P*3 | *p*(0) = 0}

Show that

**(a) ***D *maps *P*3 onto the subspace *P*2, but

*D*: *P*3 → *P*2 is not one-to-one.

**(b) ***D*: *S* → *P*3 is one-to-one but not onto.


SECTION 4.2 EXERCISES

- Refer to Exercise 1 of Section 4.1. For each linear

transformation *L*, find the standard matrix representation

of *L*.

- For each of the following linear transformations

*L *mapping R3 into R2, find a matrix *A *such that

*L *(**x**) = *A***x **for every **x **in R3:

**(a) ***L *((*x*1, *x*2, *x*3)*T *) = (*x*1 + *x*2, 0)*T*

**(b) ***L *((*x*1, *x*2, *x*3)*T *) = (*x*1, *x*2)*T*

**(c) ***L *((*x*1, *x*2, *x*3)*T *) = (*x*2 − *x*1, *x*3 − *x*2)*T*

- For each of the following linear operators *L* on R3,

find a matrix *A *such that *L *(**x**) = *A***x **for every **x **in

R3:

**(a) ***L *((*x*1, *x*2, *x*3)*T *) = (*x*3, *x*2, *x*1)*T*

**(b) ***L *((*x*1, *x*2, *x*3)*T *) = (*x*1, *x*1 + *x*2, *x*1 + *x*2 + *x*3)*T*

**(c) ***L *((*x*1, *x*2, *x*3)*T *) = (2*x*3, *x*2 + 3*x*1, 2*x*1 − *x*3)*T*

- Let *L* be the linear operator on R3 defined by

*L *(**x**) = (2*x*1 − *x*2 − *x*3, 2*x*2 − *x*1 − *x*3, 2*x*3 − *x*1 − *x*2)*T*

Determine the standard matrix representation *A *of

*L*, and use *A *to find *L *(**x**) for each of the following

vectors **x**:

**(a) x **= (1, 1, 1)*T ***(b) x **= (2, 1, 1)*T*

**(c) x **= (−5, 3, 2)*T*

- Find the standard matrix representation for each of

the following linear operators:

**(a) ***L *is the linear operator that rotates each **x **in

R2 by 45◦ in the clockwise direction.

**(b) ***L *is the linear operator that reflects each vector

**x **in R2 about the *x*1 axis and then rotates it 90◦

in the counterclockwise direction.

**(c) ***L *doubles the length of **x **and then rotates it 30◦

in the counterclockwise direction.

**(d) ***L *reflects each vector **x **about the line *x*2 = *x*1

and then projects it onto the *x*1-axis.

- Let

**b**1 = (1, 1, 0)*T*, **b**2 = (1, 0, 1)*T*, **b**3 = (0, 1, 1)*T*

and let *L *be the linear transformation from R2 into

R3 defined by

*L *(**x**) = *x*1**b**1 + *x*2**b**2 + (*x*1 + *x*2)**b**3

Find the matrix *A *representing *L *with respect to the

ordered bases {**e**1, **e**2} and {**b**1, **b**2, **b**3}.

- Let

**y**1 = (1, 1, 1)*T*, **y**2 = (1, 1, 0)*T*, **y**3 = (1, 0, 0)*T*

and let *I *be the identity operator on R3.

**(a) **Find the coordinates of *I*(**e**1), *I*(**e**2), and *I*(**e**3)

with respect to {**y**1, **y**2, **y**3}.

**(b) **Find a matrix *A *such that *A***x **is the coordinate

vector of **x** with respect to {**y**1, **y**2, **y**3}.

- Let **y**1, **y**2, and **y**3 be defined as in Exercise 7, and

let *L *be the linear operator on R3 defined by

*L *(*c*1**y**1 + *c*2**y**2 + *c*3**y**3) = (*c*1 + *c*2 + *c*3)**y**1 + (2*c*1 + *c*3)**y**2 − (2*c*2 + *c*3)**y**3

**(a) **Find a matrix representing *L *with respect to the

ordered basis {**y**1, **y**2, **y**3}.

**(b) **For each of the following, write the vector **x **as

a linear combination of **y**1, **y**2, and **y**3 and use

the matrix from part (a) to determine *L *(**x**):

**(i) x **= (7, 5, 2)*T ***(ii) x **= (3, 2, 1)*T*

**(iii) x **= (1, 2, 3)*T*

- Let

*R* = [0 0 1 1 0; 0 1 1 0 0; 1 1 1 1 1]

The column vectors of *R *represent the homogeneous

coordinates of points in the plane.

**(a) **Draw the figure whose vertices correspond to

the column vectors of *R*. What type of figure

is it?

**(b) **For each of the following choices of *A*, sketch

the graph of the figure represented by *AR *and

describe geometrically the effect of the linear

transformation:

**(i) ***A* = [1/2 0 0; 0 1/2 0; 0 0 1]

**(ii) ***A* = [1/√2 1/√2 0; −1/√2 1/√2 0; 0 0 1]

**(iii) ***A* = [1 0 2; 0 1 −3; 0 0 1]

- For each of the following linear operators on R2,

find the matrix representation of the transformation

with respect to the homogeneous coordinate

system:

**(a) **The transformation *L *that rotates each vector

by 120◦ in the counterclockwise direction

**(b) **The transformation *L *that translates each point

3 units to the left and 5 units up

**(c) **The transformation *L *that contracts each vector

by a factor of one-third

**(d) **The transformation that reflects a vector about

the *y*-axis and then translates it up 2 units
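All four parts follow the same pattern under the standard homogeneous-coordinate convention, where a point (*x*1, *x*2) is stored as (*x*1, *x*2, 1): a rotation embeds the usual 2 × 2 rotation block in the upper left, and a translation by (*a*, *b*) puts *a* and *b* in the last column. A NumPy sketch of the matrices for parts (a) and (b):

```python
import numpy as np

# (b) Translation 3 units left and 5 units up: the offsets go in the
# last column of a 3x3 homogeneous-coordinate matrix.
T = np.array([[1.0, 0.0, -3.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
p = np.array([2.0, 1.0, 1.0])     # the point (2, 1)
print(T @ p)                      # (-1, 6, 1): the point (-1, 6)

# (a) Rotation by 120 degrees counterclockwise: the 2x2 rotation
# matrix sits in the upper-left block.
t = np.deg2rad(120.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t), np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
print(np.round(R, 3))
```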

- Determine the matrix representation of each of the

following composite transformations.

**(a) **A yaw of 90◦, followed by a pitch of 90◦

**(b) **A pitch of 90◦, followed by a yaw of 90◦

**(c) **A pitch of 45◦, followed by a roll of −90◦

**(d) **A roll of −90◦, followed by a pitch of 45◦

**(e) **A yaw of 45◦, followed by a pitch of −90◦ and

then a roll of −45◦

**(f) **A roll of −45◦, followed by a pitch of −90◦

and then a yaw of 45◦

- Let *Y*, *P*, and *R* be the yaw, pitch, and roll matrices

given in equations (1), (2), and (3), respectively,

and let *Q *= *YPR*.

**(a) **Show that *Y*, *P*, and *R *all have determinants

equal to 1.

**(b) **The matrix *Y *represents a yaw with angle *u*.

The inverse transformation should be a yaw

with angle −*u*. Show that the matrix representation

of the inverse transformation is *Y**T *and

that *Y**T *= *Y*−1.

**(c) **Show that *Q *is nonsingular and express *Q*−1 in

terms of the transposes of *Y*, *P*, and *R*.

- Let *L* be the linear transformation mapping *P*2 into

R2 defined by

*L *(*p*(*x*)) = (∫₀¹ *p*(*x*) *dx*, *p*(0))*T*

Find a matrix *A* such that

*L *(*α* + *βx*) = *A*(*α*, *β*)*T*

- The linear transformation *L* defined by

*L *(*p*(*x*)) = *p*′(*x*) + *p*(0)

maps *P*3 into *P*2. Find the matrix representation of

*L *with respect to the ordered bases [*x*2, *x*, 1] and

[2, 1 − *x*]. For each of the following vectors *p*(*x*)

in *P*3, find the coordinates of *L *(*p*(*x*)) with respect

to the ordered basis [2, 1 − *x*]:

**(a) ***x*2 + 2*x *− 3 **(b) ***x*2 + 1

**(c) **3*x ***(d) **4*x*2 + 2*x*

- Let *S* be the subspace of *C*[*a*, *b*] spanned by *e**x*, *xe**x*,

and *x*2*e**x*. Let *D* be the differentiation operator on

*S*. Find the matrix representing *D* with respect to

[*e**x*, *xe**x*, *x*2*e**x*].

- Let *L* be a linear operator on R*n*. Suppose that

*L *(**x**) = **0** for some **x** ≠ **0**. Let *A* be the matrix

representing *L *with respect to the standard basis

{**e**1, **e**2, *. . . *, **e***n*}. Show that *A *is singular.

- Let *L* be a linear operator on a vector space *V*. Let *A* be the matrix representing *L* with respect

to an ordered basis {**v**1, *. . .*, **v***n*} of *V* [i.e.,

*L *(**v***j*) = ∑ᵢ₌₁ⁿ *a**ij***v***i*, *j* = 1, *. . .*, *n*]. Show that *A**m* is the

matrix representing *L**m *with respect to {**v**1, *. . . *, **v***n*}.

- Let *E* = {**u**1, **u**2, **u**3} and *F* = {**b**1, **b**2}, where

**u**1 = (1, 0, −1)*T*, **u**2 = (1, 2, 1)*T*, **u**3 = (−1, 1, 1)*T*

and

**b**1 = (1, −1)*T*, **b**2 = (2, −1)*T*

For each of the following linear transformations *L*

from R3 into R2, find the matrix representing *L *with

respect to the ordered bases *E *and *F*:

**(a) ***L *(**x**) = (*x*3, *x*1)*T*

**(b) ***L *(**x**) = (*x*1 + *x*2, *x*1 − *x*3)*T*

**(c) ***L *(**x**) = (2*x*2,−*x*1)*T*

- Suppose that *L*1: *V* → *W* and *L*2: *W* → *Z* are

linear transformations and *E*, *F*, and *G *are ordered

bases for *V*, *W*, and *Z*, respectively. Show that, if *A*

represents *L*1 relative to *E *and *F *and *B *represents

*L*2 relative to *F *and *G*, then the matrix *C *= *BA *represents

*L*2 ◦ *L*1: *V *→ *Z *relative to *E *and *G*. *Hint*:

Show that *BA*[**v**]*E *= [(*L*2 ◦ *L*1)(**v**)]*G *for all **v **∈ *V*.

- Let *V* and *W* be vector spaces with ordered bases

*E *and *F*, respectively. If *L *: *V *→ *W *is a linear

transformation and *A *is the matrix representing *L*

relative to *E *and *F*, show that

**(a) v **∈ ker(*L*) if and only if [**v**]*E *∈ *N*(*A*).

**(b) w **∈ *L *(*V*) if and only if [**w**]*F *is in the column

space of *A*.

SECTION 4.3 EXERCISES

- For each of the following linear operators *L* on

R2, determine the matrix *A *representing *L *with

respect to {**e**1, **e**2} (see Exercise 1 of Section 4.1)

and the matrix *B *representing *L *with respect to

{**u**1 = (1, 1)*T *, **u**2 = (−1, 1)*T *}:

**(a) ***L *(**x**) = (−*x*1, *x*2)*T ***(b) ***L *(**x**) = −**x**

**(c) ***L *(**x**) = (*x*2, *x*1)*T* **(d) ***L *(**x**) = (1/2)**x**

**(e) ***L *(**x**) = *x*2**e**2

- Let {**u**1, **u**2} and {**v**1, **v**2} be ordered bases for R2,

where

**u**1 = (1, 1)*T*, **u**2 = (−1, 1)*T* and **v**1 = (2, 1)*T*, **v**2 = (1, 0)*T*

Let *L *be the linear transformation defined by

*L *(**x**) = (−*x*1, *x*2)*T*

and let *B *be the matrix representing *L *with respect

to {**u**1, **u**2} [from Exercise 1(a)].

**(a) **Find the transition matrix *S *corresponding to

the change of basis from {**u**1, **u**2} to {**v**1, **v**2}.

**(b) **Find the matrix *A *representing *L *with respect

to {**v**1, **v**2} by computing *SBS*−1.

**(c) **Verify that

*L *(**v**1) = *a*11**v**1 + *a*21**v**2

*L *(**v**2) = *a*12**v**1 + *a*22**v**2

- Let *L* be the linear transformation on R3 defined by

*L *(**x**) = (2*x*1 − *x*2 − *x*3, 2*x*2 − *x*1 − *x*3, 2*x*3 − *x*1 − *x*2)*T*

and let *A *be the standard matrix representation of *L*

(see Exercise 4 of Section 4.2). If **u**1 = (1, 1, 0)*T *,

**u**2 = (1, 0, 1)*T*, and **u**3 = (0, 1, 1)*T *, then {**u**1, **u**2, **u**3}

is an ordered basis for R3 and *U *= (**u**1, **u**2, **u**3)

is the transition matrix corresponding to a change

of basis from {**u**1, **u**2, **u**3} to the standard basis

{**e**1, **e**2, **e**3}. Determine the matrix *B *representing *L*

with respect to the basis {**u**1, **u**2, **u**3} by calculating

*U*−1*AU*.

- Let *L* be the linear operator mapping R3 into R3

defined by *L *(**x**) = *A***x**, where

*A* = [3 −1 −2; 2 0 −2; 2 −1 −1]

and let

**v**1 = (1, 1, 1)*T*, **v**2 = (1, 2, 0)*T*, **v**3 = (0, −2, 1)*T*

Find the transition matrix *V *corresponding to a

change of basis from {**v**1, **v**2, **v**3} to {**e**1, **e**2, **e**3}, and

use it to determine the matrix *B *representing *L *with

respect to {**v**1, **v**2, **v**3}.

- Let *L* be the operator on *P*3 defined by

*L *(*p*(*x*)) = *xp*′(*x*) + *p*″(*x*)

**(a) **Find the matrix *A *representing *L *with respect

to [1, *x*, *x*2].

**(b) **Find the matrix *B *representing *L *with respect

to [1, *x*, 1 + *x*2].

**(c) **Find the matrix *S *such that *B *= *S*−1*AS*.

**(d) **If *p*(*x*) = *a*0 + *a*1*x *+ *a*2(1 + *x*2), calculate

*L**n*(*p*(*x*)).

- Let *V* be the subspace of *C*[*a*, *b*] spanned by

1, *e**x*, *e*−*x*, and let *D *be the differentiation operator

on *V*.

**(a) **Find the transition matrix *S *representing

the change of coordinates from the

ordered basis [1, *e**x*, *e*−*x*] to the ordered basis

[1, cosh *x*, sinh *x*]. [cosh *x* = (1/2)(*e**x* + *e*−*x*), sinh *x* = (1/2)(*e**x* − *e*−*x*).]

**(b) **Find the matrix *A *representing *D *with respect

to the ordered basis [1, cosh *x*, sinh *x*].

**(c) **Find the matrix *B *representing *D *with respect

to [1, *e**x*, *e*−*x*].

**(d) **Verify that *B *= *S*−1*AS*.

- Prove that if *A* is similar to *B* and *B* is similar to *C*,

then *A *is similar to *C*.

- Suppose that *A* = *S*Λ*S*−1, where Λ is a diagonal

matrix with diagonal elements *λ*1, *λ*2, *. . .*, *λ**n*.

**(a) **Show that *A***s***i* = *λ**i***s***i*, *i* = 1, *. . .*, *n*.

**(b) **Show that if **x** = *α*1**s**1 + *α*2**s**2 + · · · + *α**n***s***n*, then

*A**k***x** = *α*1*λ*1^*k***s**1 + *α*2*λ*2^*k***s**2 + · · · + *α**n**λ**n*^*k***s***n*

**(c) **Suppose that |*λ**i*| *< *1 for *i *= 1, *. . . *, *n*. What

happens to *A**k***x **as *k*→∞? Explain.

- Suppose that *A* = *ST*, where *S* is nonsingular. Let

*B *= *TS*. Show that *B *is similar to *A*.

- Let *A* and *B* be *n* × *n* matrices. Show that if *A* is

similar to *B *then there exist *n*×*n *matrices *S *and *T*,

with *S *nonsingular, such that

*A *= *ST *and *B *= *TS*

- Show that if *A* and *B* are similar matrices, then

det(*A*) = det(*B*).

- Let *A* and *B* be similar matrices. Show that

**(a) ***A**T *and *B**T *are similar.

**(b) ***A**k *and *B**k *are similar for each positive integer

*k*.

- Show that if *A* is similar to *B* and *A* is nonsingular,

then *B *must also be nonsingular and *A*−1 and *B*−1

are similar.

- Let *A* and *B* be similar matrices and let *λ* be any

scalar. Show that

**(a) ***A *− *λ**I *and *B *− *λ**I *are similar.

**(b) **det(*A *− *λ**I*) = det(*B *− *λ**I*).

- The *trace* of an *n* × *n* matrix *A*, denoted tr(*A*), is the

sum of its diagonal entries; that is,

tr(*A*) = *a*11 + *a*22 +· · ·+*a**nn*

Show that

**(a) **tr(*AB*) = tr(*BA*)

**(b) **if *A *is similar to *B*, then tr(*A*) = tr(*B*).
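Part (a) follows by writing out tr(*AB*) = ∑ᵢⱼ *a**ij**b**ji* = tr(*BA*), and part (b) then follows from tr(*S*−1*AS*) = tr(*A*(*SS*−1)) = tr(*A*). A quick numeric check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((4, 4))
B = rng.random((4, 4))

# (a) tr(AB) = tr(BA): both equal the double sum of a_ij * b_ji.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True

# (b) Similar matrices have the same trace:
# tr(S^{-1} A S) = tr(A S S^{-1}) = tr(A), using part (a).
S = rng.random((4, 4))                 # nonsingular with probability 1
C = np.linalg.inv(S) @ A @ S
print(np.isclose(np.trace(C), np.trace(A)))           # True
```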

**MATLAB EXERCISES**

- Use MATLAB to generate a matrix *W* and a vector

**x **by setting

*W *= **triu**(**ones**(5)) and **x **= [1 : 5]′

The columns of *W *can be used to form an ordered

basis

*F *= {**w**1,**w**2,**w**3,**w**4,**w**5}

Let *L*: R5 → R5 be a linear operator such that

*L *(**w**1) = **w**2, *L *(**w**2) = **w**3, *L *(**w**3) = **w**4

and

*L *(**w**4) = 4**w**1 + 3**w**2 + 2**w**3 + **w**4

*L *(**w**5) = **w**1 + **w**2 + **w**3 + 3**w**4 + **w**5

**(a) **Determine the matrix *A *representing *L *with

respect to *F*, and enter it in MATLAB.

**(b) **Use MATLAB to compute the coordinate vector

**y **= *W*−1**x **of **x **with respect to *F*.

**(c) **Use *A *to compute the coordinate vector **z **of

*L *(**x**) with respect to *F*.

**(d) ***W *is the transition matrix from *F *to the standard

basis for R5. Use *W *to compute the coordinate

vector of *L *(**x**) with respect to the standard basis.
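A Python/NumPy rendering of this exercise (a stand-in for the MATLAB session; the matrix `A` below is one answer to part (a), built column by column from the *F*-coordinates of *L*(**w**1), …, *L*(**w**5)):

```python
import numpy as np

# Python analogue of W = triu(ones(5)) and x = [1:5]'.
W = np.triu(np.ones((5, 5)))
x = np.arange(1.0, 6.0)

# Column j of A holds the F-coordinates of L(w_j): e.g. L(w1) = w2
# gives column (0,1,0,0,0)^T, and L(w4) = 4w1 + 3w2 + 2w3 + w4 gives
# column (4,3,2,1,0)^T.
A = np.array([[0, 0, 0, 4, 1],
              [1, 0, 0, 3, 1],
              [0, 1, 0, 2, 1],
              [0, 0, 1, 1, 3],
              [0, 0, 0, 0, 1]], dtype=float)

y = np.linalg.solve(W, x)   # (b) coordinates of x with respect to F
z = A @ y                   # (c) coordinates of L(x) with respect to F
print(W @ z)                # (d) L(x) in the standard basis
```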

- Set *A* = **triu**(**ones**(5)) ∗ **tril**(**ones**(5)). If *L* denotes

the linear operator defined by *L *(**x**) = *A***x **for

all **x **in R*n*, then *A *is the matrix representing *L *with

respect to the standard basis for R5. Construct a 5×5

matrix *U *by setting

*U *= **hankel**(**ones**(5, 1), 1 : 5)

Use the MATLAB function **rank **to verify that the

column vectors of *U *are linearly independent. Thus,

*E *= {**u**1, **u**2, **u**3, **u**4, **u**5} is an ordered basis for R5.

The matrix *U *is the transition matrix from *E *to the

standard basis.

**(a) **Use MATLAB to compute the matrix *B *representing

*L *with respect to *E*. (The matrix *B *should

be computed in terms of *A*, *U*, and *U*−1.)

**(b) **Generate another matrix by setting

*V *= **toeplitz**([1, 0, 1, 1, 1])

Use MATLAB to check that *V *is nonsingular. It

follows that the column vectors of *V *are linearly

independent and hence form an ordered basis *F*

for R5. Use MATLAB to compute the matrix

*C*, which represents *L *with respect to *F*. (The

matrix *C *should be computed in terms of *A*, *V*,

and *V*−1.)

**(c) **The matrices *B *and *C *from parts (a) and (b)

should be similar. Why? Explain. Use MATLAB

to compute the transition matrix *S *from

*F *to *E*. Compute the matrix *C *in terms of *B*,

*S*, and *S*−1. Compare your result with the result

from part (b).

- Let

*A *= **toeplitz**(1 : 7),

*S *= **compan**(**ones**(8, 1))

and set *B *= *S*−1 ∗ *A *∗ *S*. The matrices *A *and *B *are

similar. Use MATLAB to verify that the following

properties hold for these two matrices:

**(a) **det(*B*) = det(*A*)

**(b) ***B**T *= *S**T**A**T *(*S**T *)−1

**(c) ***B*−1 = *S*−1*A*−1*S*

**(d) ***B*9 = *S*−1*A*9*S*

**(e) ***B *− 3*I *= *S*−1(*A *− 3*I*)*S*

**(f) **det(*B *− 3*I*) = det(*A *− 3*I*)

**(g) **tr(*B*) = tr(*A*) (Note that the trace of a matrix *A*

can be computed with the MATLAB command

**trace**.)

These properties will hold in general for any pair

of similar matrices. (See Exercises 11–15 of Section

4.3.)
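The same checks can be sketched in NumPy with generic matrices standing in for **toeplitz** and **compan** (which have no direct NumPy builtins; any nonsingular *S* works):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((5, 5))
S = rng.random((5, 5))          # nonsingular with probability 1
B = np.linalg.inv(S) @ A @ S    # B is similar to A

# A few of the listed invariants of similarity:
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))      # (a)
print(np.allclose(np.linalg.matrix_power(B, 9),
                  np.linalg.inv(S)
                  @ np.linalg.matrix_power(A, 9) @ S))      # (d)
print(np.isclose(np.trace(B), np.trace(A)))                # (g)
```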

**CHAPTER TEST A ****True or False**

*For each statement that follows, answer *true *if the statement*

*is always true and *false *otherwise. In the case of*

*a true statement, explain or prove your answer. In the*

*case of a false statement, give an example to show that*

*the statement is not always true.*

- Let *L*: R*n* → R*n* be a linear transformation. If

*L *(**x**1) = *L *(**x**2), then the vectors **x**1 and **x**2 must

be equal.

- If *L*1 and *L*2 are both linear operators on a vector

space *V*, then *L*1+*L*2 is also a linear operator on *V*,

where *L*1 + *L*2 is the mapping defined by

(*L*1 + *L*2)(**v**) = *L*1(**v**) + *L*2(**v**) for all **v **∈ *V*

- If *L*: *V* → *V* is a linear transformation and **x** ∈

ker(*L*), then *L *(**v **+ **x**) = *L *(**v**) for all **v **∈ *V*.

- If *L*1 rotates each vector **x** in R2 by 60◦ and then

reflects the resulting vector about the *x*-axis, and if

*L*2 is a transformation that does the same two operations,

but in the reverse order, then *L*1 = *L*2.

- The set of all vectors **x** used in the homogeneous

coordinate system (see the application on computer

graphics and animation in Section 4.2 ) forms a

subspace of R3.

- Let *L*: R2 → R2 be a linear transformation, and let

*A *be the standard matrix representation of *L*. If *L*2

is defined by

*L*2(**x**) = *L *(*L *(**x**)) for all **x **∈ R2

then *L*2 is a linear transformation and its standard

matrix representation is *A*2.

- Let *E* = {**x**1, **x**2, *. . .*, **x***n*} be an ordered basis for R*n*.

If *L*1 : R*n *→ R*n *and *L*2 : R*n *→ R*n *have the

same matrix representation with respect to *E*, then

*L*1 = *L*2.

- Let *L*: R*n* → R*n* be a linear transformation. If *A*

is the standard matrix representation of *L*, then an

*n *× *n *matrix *B *will also be a matrix representation

of *L *if and only if *B *is similar to *A*.

- Let *A*, *B*, and *C* be *n* × *n* matrices. If *A* is similar to

*B *and *B *is similar to *C*, then *A *is similar to *C*.

- Any two matrices with the same trace are similar.

[This statement is the converse of part (b) of

Exercise 15 in Section 4.3.]

**CHAPTER TEST B**

- Determine whether the following are linear operators

on R2:

**(a) ***L *is the operator defined by

*L *(**x**) = (*x*1 + *x*2, *x*1)*T *.

**(b) ***L *is the operator defined by *L *(**x**) = (*x*1*x*2, *x*1)*T *.

- Let *L* be a linear operator on R2 and let

**v**1 = (1, 1)*T*, **v**2 = (−1, 2)*T*, **v**3 = (1, 7)*T*

If *L*(**v**1) = (2, 5)*T* and *L*(**v**2) = (−3, 1)*T*, find the value of *L*(**v**3).
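A computation like this can be checked numerically: write **v**3 as a combination of **v**1 and **v**2, then apply linearity. A minimal NumPy sketch (a check, not part of the text):

```python
import numpy as np

# Express v3 in the basis {v1, v2}, then use linearity:
# L(v3) = a*L(v1) + b*L(v2) where v3 = a*v1 + b*v2.
v1, v2, v3 = np.array([1., 1.]), np.array([-1., 2.]), np.array([1., 7.])
Lv1, Lv2 = np.array([2., 5.]), np.array([-3., 1.])

a, b = np.linalg.solve(np.column_stack([v1, v2]), v3)  # v3 = 3*v1 + 2*v2
Lv3 = a * Lv1 + b * Lv2
print(Lv3)   # L(v3) = (0, 17)^T
```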

- Let *L* be the linear operator on R3 defined by

*L*(**x**) = (*x*2 − *x*1, *x*3 − *x*2, *x*3 − *x*1)*T*

and let *S* = Span((1, 0, 1)*T*).

**(a) **Find the kernel of *L*.

**(b) **Determine *L *(*S*).

- Let *L* be the linear operator on R3 defined by

*L*(**x**) = (*x*2, *x*1, *x*1 + *x*2)*T*

Determine the range of *L*.

- Let *L*: R2 → R3 be defined by

*L*(**x**) = (*x*1 + *x*2, *x*1 − *x*2, 3*x*1 + 2*x*2)*T*

Find a matrix *A *such that *L *(**x**) = *A***x **for each **x **in

R2.
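One way to produce such an *A*: the *j*th column of the standard matrix representation is *L*(**e***j*). A NumPy sketch with this exercise's operator hard-coded:

```python
import numpy as np

# The j-th column of the standard matrix representation is L(e_j).
def L(x):
    x1, x2 = x
    return np.array([x1 + x2, x1 - x2, 3 * x1 + 2 * x2])

A = np.column_stack([L(np.array([1., 0.])), L(np.array([0., 1.]))])
print(A)                                 # rows: [1 1], [1 -1], [3 2]
assert np.allclose(A @ np.array([2., -1.]), L(np.array([2., -1.])))
```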

- Let *L* be the linear operator on R2 that rotates a vector

by 30◦ in the counterclockwise direction and

then reflects the resulting vector about the *y*-axis.

Find the standard matrix representation of *L*.
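The standard matrix of the composition is the product of the two standard matrices, with the map applied first on the right. A sketch for the rotation and reflection stated above:

```python
import numpy as np

# Rotation by 30 degrees counterclockwise, then reflection about the y-axis.
# The composed operator has standard matrix F @ R (the first map acts first).
t = np.pi / 6
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation by 30 degrees
F = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])               # reflection about the y-axis
M = F @ R
print(M)   # [[-sqrt(3)/2, 1/2], [1/2, sqrt(3)/2]]
```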

- Let *L* be the translation operator on R2 defined by

*L*(**x**) = **x** + **a**, where **a** = (2, 5)*T*

Find the matrix representation of *L* with respect to the homogeneous coordinate system.

- Let

**u**1 = (3, 1)*T*, **u**2 = (5, 2)*T*

and let *L* be the linear operator that rotates vectors in R2 by 45◦ in the counterclockwise direction. Find the matrix representation of *L* with respect to the ordered basis [**u**1, **u**2].

- Let

**u**1 = (3, 1)*T*, **u**2 = (5, 2)*T*

and

**v**1 = (1, −2)*T*, **v**2 = (1, −1)*T*

and let *L* be a linear operator on R2 whose matrix representation with respect to the ordered basis {**u**1, **u**2} is

*A* =
⎡2 1⎤
⎣3 2⎦

**(a) **Determine the transition matrix from the basis

{**v**1, **v**2} to the basis {**u**1, **u**2}.

**(b) **Find the matrix representation of *L *with respect

to {**v**1, **v**2}.
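Both parts can be checked numerically: with *U* = [**u**1 **u**2] and *V* = [**v**1 **v**2], the transition matrix from {**v**1, **v**2} to {**u**1, **u**2} is *S* = *U*−1*V*, and the new representation is *B* = *S*−1*AS*. A sketch:

```python
import numpy as np

# Transition matrix S from {v1, v2} to {u1, u2} and the similar matrix
# B = S^{-1} A S representing L with respect to {v1, v2}.
U = np.array([[3., 5.], [1., 2.]])      # columns u1, u2
V = np.array([[1., 1.], [-2., -1.]])    # columns v1, v2
A = np.array([[2., 1.], [3., 2.]])

S = np.linalg.solve(U, V)               # S = U^{-1} V
B = np.linalg.solve(S, A @ S)           # B = S^{-1} A S

# Similar matrices share trace and determinant.
assert np.isclose(np.trace(B), np.trace(A))
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```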

- Let *A* and *B* be similar matrices.

**(a) **Show that det(*A*) = det(*B*).

**(b) **Show that if *λ* is any scalar, then det(*A* − *λI*) = det(*B* − *λI*).

** **

**Chapter 5 ****Orthogonality**

SECTION 5.1 EXERCISES

- Find the angle between the vectors **v** and **w** in each

of the following:

**(a) v **= (2, 1, 3)*T *, **w **= (6, 3, 9)*T*

**(b) v **= (2,−3)*T *, **w **= (3, 2)*T*

**(c) v **= (4, 1)*T *, **w **= (3, 2)*T*

**(d) v **= (−2, 3, 1)*T *, **w **= (1, 2, 4)*T*
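Each angle comes from cos *θ* = **v***T***w**/(‖**v**‖‖**w**‖). A small NumPy helper (the clip guards against roundoff pushing the cosine outside [−1, 1]):

```python
import numpy as np

def angle(v, w):
    """Angle (radians) between v and w via cos(theta) = v.w / (|v||w|)."""
    c = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))

print(angle(np.array([2., 1., 3.]), np.array([6., 3., 9.])))  # 0: parallel
print(angle(np.array([2., -3.]), np.array([3., 2.])))         # pi/2: orthogonal
```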

- For each pair of vectors in Exercise 1, find the

scalar projection of **v **onto **w**. Also find the vector

projection of **v **onto **w**.

- For each of the following pairs of vectors **x** and **y**,

find the vector projection **p **of **x **onto **y **and verify

that **p **and **x **− **p **are orthogonal:

**(a) x **= (3, 4)*T *, **y **= (1, 0)*T*

**(b) x **= (3, 5)*T *, **y **= (1, 1)*T*

**(c) x **= (2, 4, 3)*T *, **y **= (1, 1, 1)*T*

**(d) x **= (2,−5, 4)*T *, **y **= (1, 2,−1)*T*
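The vector projection is **p** = (**x***T***y**/**y***T***y**)**y**, and **x** − **p** is then orthogonal to **p**. A sketch using part (b):

```python
import numpy as np

# Vector projection p of x onto y, with the orthogonality check p . (x - p) = 0.
def proj(x, y):
    return (x @ y) / (y @ y) * y

x, y = np.array([3., 5.]), np.array([1., 1.])   # part (b)
p = proj(x, y)
print(p)                              # [4. 4.]
assert np.isclose(p @ (x - p), 0.0)   # p and x - p are orthogonal
```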

- Let **x** and **y** be linearly independent vectors in R2. If ‖**x**‖ = 2 and ‖**y**‖ = 3, what, if anything, can we conclude about the possible values of |**x***T***y**|?

- Find the point on the line *y* = 2*x* that is closest to

the point (5, 2).

- Find the point on the line *y* = 2*x* + 1 that is closest

to the point (5, 2).

- Find the distance from the point (1, 2) to the line

4*x *− 3*y *= 0.

- In each of the following, find the equation of the

plane normal to the given vector **N **and passing

through the point *P*0:

**(a) N **= (2, 4, 3)*T *, *P*0 = (0, 0, 0)

**(b) N **= (−3, 6, 2)*T *, *P*0 = (4, 2,−5)

**(c) N **= (0, 0, 1)*T *, *P*0 = (3, 2, 4)

- Find the equation of the plane that passes through

the points

*P*1 = (2, 3, 1), *P*2 = (5, 4, 3), *P*3 = (3, 4, 4)

- Find the distance from the point (1, 1, 1) to the

plane 2*x *+ 2*y *+ *z *= 0.

- Find the distance from the point (2, 1,−2) to the

plane

6(*x *− 1) + 2(*y *− 3) + 3(*z *+ 4) = 0

- If **x** = (*x*1, *x*2)*T*, **y** = (*y*1, *y*2)*T*, and **z** = (*z*1, *z*2)*T* are

arbitrary vectors in R2, prove that

**(a) x***T***x **≥ 0 **(b) x***T***y **= **y***T***x**

**(c) x***T *(**y **+ **z**) = **x***T***y **+ **x***T***z**

- Show that if **u** and **v** are any vectors in R2, then ‖**u** + **v**‖² ≤ (‖**u**‖ + ‖**v**‖)² and hence ‖**u** + **v**‖ ≤ ‖**u**‖ + ‖**v**‖. When does equality hold? Give a geometric interpretation of the inequality.

- Let **x**1, **x**2, and **x**3 be vectors in R3. If **x**1 ⊥ **x**2 and

**x**2 ⊥ **x**3, is it necessarily true that **x**1 ⊥ **x**3? Prove

your answer.

- Let *A* be a 2 × 2 matrix with linearly independent

column vectors **a**1 and **a**2. If **a**1 and **a**2 are used

to form a parallelogram *P *with altitude *h *(see the

figure), show that

**(a) ***h*²‖**a**2‖² = ‖**a**1‖²‖**a**2‖² − (**a**1*T***a**2)²

**(b) **Area of *P* = |det(*A*)|

[Figure: parallelogram with sides **a**1 and **a**2 and altitude *h*]

- If **x** and **y** are linearly independent vectors in R3,

then they can be used to form a parallelogram *P*

in the plane through the origin corresponding to

Span(**x**, **y**). Show that

Area of *P* = ‖**x** × **y**‖

- Let

**x** = (4, 4, −4, 4)*T* and **y** = (4, 2, 2, 1)*T*

**(a) **Determine the angle between **x **and **y**.

**(b) **Determine the distance between **x **and **y**.

- Let **x** and **y** be vectors in R*n* and define

**p** = (**x***T***y** / **y***T***y**) **y** and **z** = **x** − **p**

**(a) **Show that **p** ⊥ **z**. Thus, **p** is the *vector projection* of **x** onto **y**; that is, **x** = **p** + **z**, where **p** and **z** are orthogonal components of **x**, and **p** is a scalar multiple of **y**.

**(b) **If ‖**p**‖ = 6 and ‖**z**‖ = 8, determine the value of ‖**x**‖.

- Use the database matrix *U* from Application 1 and

search for the key words *orthogonality, spaces, vector*, only this time give the key word *orthogonality* twice the weight of the other two key words. Which of the eight modules best matches

the search criteria? [*Hint*: Form the search vector

using the weights 2, 1, 1 in the rows corresponding

to the search words and then scale the vector to

make it a unit vector.]

- Five students in an elementary school take aptitude

tests in English, mathematics, and science. Their

scores are given in the following table. Determine

the correlation matrix and describe how the three

sets of scores are correlated.

Scores

**Student English Mathematics Science**

S1 61 53 53

S2 63 73 78

S3 78 61 82

S4 65 84 96

S5 63 59 71

Average 66 66 76
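The correlation matrix can be computed by standardizing each column and forming pairwise correlation coefficients; `numpy.corrcoef` does exactly this. A sketch with the table above:

```python
import numpy as np

# Correlation matrix for the English / Mathematics / Science scores.
scores = np.array([[61., 53., 53.],
                   [63., 73., 78.],
                   [78., 61., 82.],
                   [65., 84., 96.],
                   [63., 59., 71.]])

C = np.corrcoef(scores, rowvar=False)   # columns are the variables
print(np.round(C, 2))
```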

- Let *t* be a fixed real number and let

*c* = cos *t*, *s* = sin *t*,

**x** = (*c*, *cs*, *cs*2, *. . .*, *cs**n*−1, *s**n*)*T*

Show that **x** is a unit vector in R*n*+1.

*Hint:* 1 + *s*2 + *s*4 + · · · + *s*2*n*−2 = (1 − *s*2*n*)/(1 − *s*2)

SECTION 5.2 EXERCISES

- For each of the following matrices, determine a

basis for each of the subspaces *R*(*A**T *), *N*(*A*), *R*(*A*),

and *N*(*A**T *):

**(a) ***A* =
⎡3 4⎤
⎣6 8⎦

**(b) ***A* =
⎡1 3 1⎤
⎣2 4 0⎦

**(c) ***A* =
⎡ 4 −2⎤
⎢ 1  3⎥
⎢ 2  1⎥
⎣ 3  4⎦

**(d) ***A* =
⎡1 0 0 0⎤
⎢0 1 1 1⎥
⎢0 0 1 1⎥
⎣1 1 2 2⎦

- Let *S* be the subspace of R3 spanned by **x** =

(1,−1, 1)*T *.

**(a) **Find a basis for *S*⊥.

**(b) **Give a geometrical description of *S *and *S*⊥.

- **(a) **Let *S* be the subspace of R3 spanned by the vectors **x** = (*x*1, *x*2, *x*3)*T* and **y** = (*y*1, *y*2, *y*3)*T*. Let

*A* =
⎡*x*1 *x*2 *x*3⎤
⎣*y*1 *y*2 *y*3⎦

Show that *S*⊥ = *N*(*A*).

**(b) **Find the orthogonal complement of the

subspace of R3 spanned by (1, 2, 1)*T *and

(1,−1, 2)*T *.

- Let *S* be the subspace of R4 spanned by **x**1 =

(1, 0,−2, 1)*T *and **x**2 = (0, 1, 3,−2)*T *. Find a basis

for *S*⊥.

- Let *A* be a 3 × 2 matrix with rank 2. Give geometric

descriptions of *R*(*A*) and *N*(*A**T *), and describe

geometrically how the subspaces are related.

- Is it possible for a matrix to have the vector (3, 1, 2)

in its row space and (2, 1, 1)*T *in its null space?

Explain.

- Let **a***j* be a nonzero column vector of an *m* × *n* matrix *A*. Is it possible for **a***j* to be in *N*(*A**T*)? Explain.

- Let *S* be the subspace of R*n* spanned by the vectors

**x**1, **x**2, *. . . *, **x***k*. Show that **y **∈ *S*⊥ if and only if

**y **⊥ **x***i *for *i *= 1, *. . . *, *k*.

- If *A* is an *m* × *n* matrix of rank *r*, what are the

dimensions of *N*(*A*) and *N*(*A**T *)? Explain.

- Prove Corollary 5.2.5.

- Prove: If *A* is an *m* × *n* matrix and **x** ∈ R*n*, then

either *A***x** = **0** or there exists **y** ∈ *R*(*A**T*) such that **x***T***y** ≠ 0. Draw a picture similar to Figure 5.2.2

to illustrate this result geometrically for the case

where *N*(*A*) is a two-dimensional subspace of R3.

- Let *A* be an *m* × *n* matrix. Explain why the following

are true.

**(a) **Any vector **x** in R*n* can be uniquely written as a sum **y** + **z**, where **y** ∈ *N*(*A*) and **z** ∈ *R*(*A**T*).

**(b) **Any vector **b** ∈ R*m* can be uniquely written as a sum **u** + **v**, where **u** ∈ *N*(*A**T*) and **v** ∈ *R*(*A*).

- Let *A* be an *m* × *n* matrix. Show that

**(a) **if **x **∈ *N*(*A**T**A*), then *A***x **is in both *R*(*A*) and

*N*(*A**T *).

**(b) ***N*(*A**T**A*) = *N*(*A*).

**(c) ***A *and *A**T**A *have the same rank.

**(d) **if *A *has linearly independent columns, then

*A**T**A *is nonsingular.

- Let *A* be an *m* × *n* matrix, *B* an *n* × *r* matrix, and

*C *= *AB*. Show that

**(a) ***N*(*B*) is a subspace of *N*(*C*).

**(b) ***N*(*C*)⊥ is a subspace of *N*(*B*)⊥ and, consequently,

*R*(*C**T *) is a subspace of *R*(*B**T *).

- Let *U* and *V* be subspaces of a vector space *W*.

Show that if *W *= *U *⊕ *V*, then *U *∩ *V *= {**0**}.

- Let *A* be an *m* × *n* matrix of rank *r* and let

{**x**1, *. . . *, **x***r*} be a basis for *R*(*A**T *). Show that

{*A***x**1, *. . . *, *A***x***r*} is a basis for *R*(*A*).

- Let **x** and **y** be linearly independent vectors in R*n* and let *S* = Span(**x**, **y**). We can use **x** and **y** to define

a matrix *A *by setting

*A *= **xy***T *+ **yx***T*

**(a) **Show that *A *is symmetric.

**(b) **Show that *N*(*A*) = *S*⊥.

**(c) **Show that the rank of *A *must be 2.

SECTION 5.3 EXERCISES

- Find the least squares solution of each of the following

systems:

**(a) ***x*1 + *x*2 = 3

2*x*1 − 3*x*2 = 1

0*x*1 + 0*x*2 = 2

**(b) **−*x*1 + *x*2 = 10

2*x*1 + *x*2 = 5

*x*1 − 2*x*2 = 20

**(c) ***x*1 + *x*2 + *x*3 = 4

−*x*1 + *x*2 + *x*3 = 0

− *x*2 + *x*3 = 1

*x*1 + *x*3 = 2
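These systems are overdetermined, so one minimizes ‖**b** − *A***x**‖2 instead of solving exactly; `numpy.linalg.lstsq` does this via an orthogonal factorization. System (a) as a sketch:

```python
import numpy as np

# Least squares solution of the inconsistent system (a): minimize ||b - Ax||_2.
A = np.array([[1., 1.], [2., -3.], [0., 0.]])
b = np.array([3., 1., 2.])

xhat, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ xhat
print(xhat)                        # [2. 1.]
assert np.allclose(A.T @ r, 0.0)   # residual lies in N(A^T)
```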

- For each of your solutions **ˆx** in Exercise 1:

**(a) **determine the projection **p **= *A*ˆ**x**.

**(b) **calculate the residual *r*(ˆ**x**).

**(c) **verify that *r*(ˆ**x**) ∈ *N*(*A**T *).

- For each of the following systems *A***x** = **b**, find all

least squares solutions:

**(a) ***A* =
⎡ 1  2⎤
⎢ 2  4⎥
⎣−1 −2⎦
, **b** = (3, 2, 1)*T*

**(b) ***A* =
⎡ 1 1 3⎤
⎢−1 3 1⎥
⎣ 1 2 4⎦
, **b** = (−2, 0, 8)*T*

- For each of the systems in Exercise 3, determine the

projection **p **of **b **onto *R*(*A*) and verify that **b**−**p **is

orthogonal to each of the column vectors of *A*.

- **(a) **Find the best least squares fit by a linear function to the data

*x *−1 0 1 2

*y *0 1 3 9

**(b) **Plot your linear function from part (a) along

with the data on a coordinate system.
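For part (a), `numpy.polyfit` with degree 1 returns the least squares coefficients of the fitted line (plotting omitted here):

```python
import numpy as np

# Best least squares line c0 + c1*x through the data of Exercise 5.
x = np.array([-1., 0., 1., 2.])
y = np.array([0., 1., 3., 9.])

c1, c0 = np.polyfit(x, y, 1)   # polyfit returns the highest degree first
print(c0, c1)                  # 1.8 2.9
```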

- Find the best least squares fit to the data in Exercise

5 by a quadratic polynomial. Plot the points

*x *= −1, 0, 1, 2 for your function and sketch the

graph.

- Given a collection of points (*x*1, *y*1), (*x*2, *y*2), *. . .*,

(*x**n*, *y**n*), let

**x** = (*x*1, *x*2, *. . .*, *x**n*)*T*, **y** = (*y*1, *y*2, *. . .*, *y**n*)*T*

*x̄* = (1/*n*) Σ*i*=1 to *n* *x**i*, *ȳ* = (1/*n*) Σ*i*=1 to *n* *y**i*

and let *y* = *c*0 + *c*1*x* be the linear function that gives the best least squares fit to the points. Show that if *x̄* = 0, then

*c*0 = *ȳ* and *c*1 = **x***T***y** / **x***T***x**

- The point (*x̄*, *ȳ*) is the *center of mass* for the collection

of points in Exercise 7. Show that the least

squares line must pass through the center of mass.

[*Hint*: Use a change of variables *z* = *x* − *x̄* to

translate the problem so that the new independent

variable has mean 0.]

- Let *A* be an *m* × *n* matrix of rank *n* and let *P* = *A*(*A**T**A*)−1*A**T*.

**(a) **Show that *P***b **= **b **for every **b **∈ *R*(*A*). Explain

this property in terms of projections.

**(b) **If **b **∈ *R*(*A*)⊥, show that *P***b **= **0**.

**(c) **Give a geometric illustration of parts (a)

and (b) if *R*(*A*) is a plane through the origin

in R3.

- Let *A* be an 8 × 5 matrix of rank 3, and let **b** be a

nonzero vector in *N*(*A**T *).

**(a) **Show that the system *A***x **= **b **must be inconsistent.

**(b) **How many least squares solutions will the

system *A***x **= **b **have? Explain.

- Let *P* = *A*(*A**T**A*)−1*A**T*, where *A* is an *m* × *n* matrix

of rank *n*.

**(a) **Show that *P*2 = *P*.

**(b) **Prove that *P**k *= *P *for *k *= 1, 2, *. . . *.

**(c) **Show that *P *is symmetric. [*Hint*: If *B *is nonsingular,

then (*B*−1)*T *= (*B**T *)−1.]

- Show that if

⎡ *A*  *I*  ⎤ ⎡ ˆ**x** ⎤   ⎡ **b** ⎤
⎣ *O*  *A**T* ⎦ ⎣ **r** ⎦ = ⎣ **0** ⎦

then **ˆx **is a least squares solution of the system

*A***x **= **b **and **r **is the residual vector.

- Let *A* ∈ R*m*×*n* and let **ˆx** be a solution of the least

squares problem *A***x **= **b**. Show that a vector **y **∈

R*n *will also be a solution if and only if **y **= ˆ**x **+ **z**,

for some vector **z **∈ *N*(*A*). [*Hint*: *N*(*A**T**A*) = *N*(*A*).]

- Find the equation of the circle that gives the best

least squares circle fit to the points (−1,−2),

(0, 2.4), (1.1,−4), and (2.4,−1.6).

- Suppose that in the search procedure described

in Example 4, the search committee made the

following judgments in evaluating the teaching

credentials of the candidates:

**(i) **Gauss and Taussky have equal teaching

credentials.

**(ii) **O’Leary’s teaching credentials should be

given 1.25 times the weight of Ipsen’s credentials

and 1.75 times the weight given to

the credentials of both Gauss and Taussky.

**(iii) **Ipsen’s teaching credentials should be

given 1.25 times the weight given to the

credentials of both Gauss and Taussky.

**(a) **Use the method given in Application 4 to determine

a weight vector for rating the teaching

credentials of the candidates.

**(b) **Use the weight vector from part (a) to obtain

overall ratings of the candidates.

SECTION 5.4 EXERCISES

- Let **x** = (−1, −1, 1, 1)*T* and **y** = (1, 1, 5, −3)*T*. Show that **x** ⊥ **y**. Calculate ‖**x**‖², ‖**y**‖², ‖**x** + **y**‖² and verify that the Pythagorean law holds.

- Let **x** = (1, 1, 1, 1)*T* and **y** = (8, 2, 2, 0)*T*.

**(a) **Determine the angle *θ *between **x **and **y**.

**(b) **Find the vector projection **p **of **x **onto **y**.

**(c) **Verify that **x **− **p **is orthogonal to **p**.

**(d) **Compute ‖**x** − **p**‖², ‖**p**‖², ‖**x**‖² and verify that the Pythagorean law is satisfied.

- Use equation (1) with weight vector **w** = (1/4, 1/2, 1/4)*T* to define an inner product for R3, and let **x** = (1, 1, 1)*T* and **y** = (−5, 1, 3)*T*.

**(a) **Show that **x **and **y **are orthogonal with respect

to this inner product.

**(b) **Compute the values of ‖**x**‖ and ‖**y**‖ with respect to this inner product.

- Given

*A* =
⎡1 2 2⎤
⎢1 0 2⎥
⎣3 1 1⎦
and *B* =
⎡−4  1  1⎤
⎢−3  3  2⎥
⎣ 1 −2 −2⎦

determine the value of each of the following.

**(a) **⟨*A*, *B*⟩ **(b) **‖*A*‖*F*

**(c) **‖*B*‖*F* **(d) **‖*A* + *B*‖*F*

- Show that equation (2) defines an inner product on

R*m*×*n*.

- Show that the inner product defined by equation (3)

satisfies the last two conditions of the definition of

an inner product.

- In *C*[0, 1], with inner product defined by (3), compute

**(a) **⟨*e**x*, *e*−*x*⟩ **(b) **⟨*x*, sin *π**x*⟩ **(c) **⟨*x*2, *x*3⟩

- In *C*[0, 1], with inner product defined by (3), consider

the vectors 1 and *x*.

**(a) **Find the angle *θ *between 1 and *x*.

**(b) **Determine the vector projection **p **of 1 onto *x*

and verify that 1 − **p **is orthogonal to **p**.

**(c) **Compute ‖1 − **p**‖, ‖**p**‖, ‖1‖ and verify that the

Pythagorean law holds.

- In *C*[−*π*, *π*] with inner product defined by (6),

show that cos *mx *and sin *nx *are orthogonal and

that both are unit vectors. Determine the distance

between the two vectors.

- Show that the functions *x* and *x*2 are orthogonal

in *P*5 with inner product defined by (5), where

*x**i *= (*i *− 3)*/*2 for *i *= 1, *. . . *, 5.

- In *P*5 with inner product as in Exercise 10 and norm defined by

‖*p*‖ = ⟨*p*, *p*⟩1*/*2 = ( Σ*i*=1 to 5 [*p*(*x**i*)]² )1*/*2
compute

**(a) **‖*x*‖ **(b) **‖*x*2‖

**(c) **the distance between *x* and *x*2

- If *V* is an inner product space, show that

‖**v**‖ = ⟨**v**, **v**⟩1*/*2

satisfies the first two conditions in the definition of a norm.

- Show that

‖**x**‖1 = Σ*i*=1 to *n* |*x**i*|

defines a norm on R*n*.

- Show that

‖**x**‖∞ = max1≤*i*≤*n* |*x**i*|

defines a norm on R*n*.

- Compute ‖**x**‖1, ‖**x**‖2, and ‖**x**‖∞ for each of the following

vectors in R3.

**(a) x **= (−3, 4, 0)*T ***(b) x **= (−1,−1, 2)*T*

**(c) x **= (1, 1, 1)*T*

- Let **x** = (5, 2, 4)*T* and **y** = (3, 3, 2)*T*. Compute ‖**x** − **y**‖1, ‖**x** − **y**‖2, and ‖**x** − **y**‖∞. Under which

norm are the two vectors closest together? Under

which norm are they farthest apart?
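All three norms are available from `numpy.linalg.norm` via the `ord` argument (1, 2, or `np.inf`). A sketch for **x** − **y**:

```python
import numpy as np

# 1-, 2-, and infinity-norms of x - y via numpy.linalg.norm.
x = np.array([5., 2., 4.])
y = np.array([3., 3., 2.])

d = [np.linalg.norm(x - y, ord=p) for p in (1, 2, np.inf)]
print(d)   # [5.0, 3.0, 2.0]
```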

- Let **x** and **y** be vectors in an inner product space. Show that if **x** ⊥ **y** then the distance between **x** and **y** is

(‖**x**‖² + ‖**y**‖²)1*/*2

- Show that if **u** and **v** are vectors in an inner product space that satisfy the Pythagorean law

‖**u** + **v**‖² = ‖**u**‖² + ‖**v**‖²

then **u **and **v **must be orthogonal.

- In R*n* with inner product

⟨**x**, **y**⟩ = **x***T***y**

derive a formula for the distance between two vectors

**x **= (*x*1, *. . . *, *x**n*)*T *and **y **= (*y*1, *. . . *, *y**n*)*T *.

- Let *A* be a nonsingular *n* × *n* matrix and for each vector **x** in R*n* define

‖**x**‖*A* = ‖*A***x**‖2 (11)

Show that (11) defines a norm on R*n*.

- Let **x** ∈ R*n*. Show that ‖**x**‖∞ ≤ ‖**x**‖2.

- Let **x** ∈ R2. Show that ‖**x**‖2 ≤ ‖**x**‖1. [*Hint*: Write

**x **in the form *x*1**e**1 + *x*2**e**2 and use the triangle

inequality.]

- Give an example of a nonzero vector **x** ∈ R2 for which

‖**x**‖∞ = ‖**x**‖2 = ‖**x**‖1

- Show that in any vector space with a norm

‖−**v**‖ = ‖**v**‖

- Show that for any **u** and **v** in a normed vector space

‖**u** + **v**‖ ≥ | ‖**u**‖ − ‖**v**‖ |

- Prove that, for any **u** and **v** in an inner product space *V*,

‖**u** + **v**‖² + ‖**u** − **v**‖² = 2‖**u**‖² + 2‖**v**‖²

Give a geometric interpretation of this result for the

vector space R2.

- The result of Exercise 26 is not valid for norms other than the norm derived from the inner product. Give an example of this in R2 using ‖ · ‖1.

- Determine whether the following define norms on *C*[*a*, *b*]:

**(a) **‖*f*‖ = |*f*(*a*)| + |*f*(*b*)|

**(b) **‖*f*‖ = ∫[*a*, *b*] |*f*(*x*)| *dx*

**(c) **‖*f*‖ = max*a*≤*x*≤*b* |*f*(*x*)|

- Let **x** ∈ R*n* and show that

**(a) **‖**x**‖1 ≤ *n*‖**x**‖∞ **(b) **‖**x**‖2 ≤ √*n* ‖**x**‖∞

Give examples of vectors in R*n* for which equality holds in parts (a) and (b).

- Sketch the set of points (*x*1, *x*2) = **x***T* in R2 such that

**(a) **‖**x**‖2 = 1 **(b) **‖**x**‖1 = 1 **(c) **‖**x**‖∞ = 1

- Let *K* be an *n* × *n* matrix of the form

⎡1  −*c*   −*c*   · · ·  −*c*      −*c*     ⎤
⎢0   *s*   −*sc*  · · ·  −*sc*     −*sc*    ⎥
⎢0   0    *s*2   · · ·  −*s*2*c*   −*s*2*c*  ⎥
⎢                 …                        ⎥
⎢0   0    0    · · ·   *s**n*−2   −*s**n*−2*c*⎥
⎣0   0    0    · · ·   0        *s**n*−1  ⎦

where *c*2 + *s*2 = 1. Show that ‖*K*‖*F* = √*n*.

- The *trace* of an *n* × *n* matrix *C*, denoted tr(*C*), is the sum of its diagonal entries; that is,

tr(*C*) = *c*11 + *c*22 + · · · + *c**nn*

If *A* and *B* are *m* × *n* matrices, show that

**(a) **‖*A*‖*F*² = tr(*A**T**A*)

**(b) **‖*A* + *B*‖*F*² = ‖*A*‖*F*² + 2 tr(*A**T**B*) + ‖*B*‖*F*².
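Part (a) is easy to confirm numerically; the matrix *A* of Exercise 4 in Section 5.4 is a convenient test case. A sketch:

```python
import numpy as np

# Check ||A||_F^2 = tr(A^T A) on the matrix A from Exercise 4 of Section 5.4.
A = np.array([[1., 2., 2.], [1., 0., 2.], [3., 1., 1.]])

fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro ** 2, np.trace(A.T @ A))
print(fro)   # 5.0, since the squared entries sum to 25
```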

- Consider the vector space R*n* with inner product ⟨**x**, **y**⟩ = **x***T***y**. Show that for any *n* × *n* matrix *A*,

**(a) **⟨*A***x**, **y**⟩ = ⟨**x**, *A**T***y**⟩

**(b) **⟨*A**T**A***x**, **x**⟩ = ‖*A***x**‖²
SECTION 5.5 EXERCISES

- Which of the following sets of vectors form an

orthonormal basis for R2?

**(a) **{(1, 0)*T *, (0, 1)*T *}

**(b) **{(3/5, 4/5)*T*, (5/13, 12/13)*T*}

**(c) **{(1, −1)*T*, (1, 1)*T*}

**(d) **{(√3/2, 1/2)*T*, (−1/2, √3/2)*T*}

- Let

**u**1 = (1/(3√2), 1/(3√2), −4/(3√2))*T*, **u**2 = (2/3, 2/3, 1/3)*T*, **u**3 = (1/√2, −1/√2, 0)*T*

**(a) **Show that {**u**1, **u**2, **u**3} is an orthonormal basis

for R3.

**(b) **Let **x** = (1, 1, 1)*T*. Write **x** as a linear combination of **u**1, **u**2, and **u**3 using Theorem 5.5.2 and use Parseval's formula to compute ‖**x**‖.

- Let *S* be the subspace of R3 spanned by the vectors

**u**2 and **u**3 of Exercise 2. Let **x **= (1, 2, 2)*T*. Find the

projection **p **of **x **onto *S*. Show that (**p **− **x**) ⊥ **u**2

and (**p **− **x**) ⊥ **u**3.

- Let *θ* be a fixed real number and let

**x**1 = (cos *θ*, sin *θ*)*T* and **x**2 = (−sin *θ*, cos *θ*)*T*

**(a) **Show that {**x**1, **x**2} is an orthonormal basis for

R2.

**(b) **Given a vector **y **in R2, write it as a linear

combination *c*1**x**1 + *c*2**x**2.

**(c) **Verify that

*c*1² + *c*2² = ‖**y**‖² = *y*1² + *y*2²

- Let **u**1 and **u**2 form an orthonormal basis for R2 and let **u** be a unit vector in R2. If **u***T***u**1 = 1/2, determine the value of |**u***T***u**2|.

- Let {**u**1, **u**2, **u**3} be an orthonormal basis for an inner product space *V* and let

**u **= **u**1 + 2**u**2 + 2**u**3 and **v **= **u**1 + 7**u**3

Determine the value of each of the following:

**(a) **⟨**u**, **v**⟩ **(b) **‖**u**‖ and ‖**v**‖

**(c) **The angle *θ* between **u** and **v**

- Let {**u**1, **u**2, **u**3} be an orthonormal basis for an inner product space *V*. If **x** = *c*1**u**1 + *c*2**u**2 + *c*3**u**3 is a vector with the properties ‖**x**‖ = 5, ⟨**u**1, **x**⟩ = 4, and **x** ⊥ **u**2, then what are the possible values of *c*1, *c*2, *c*3?

- The functions cos *x* and sin *x* form an orthonormal set in *C*[−*π*, *π*]. If

*f*(*x*) = 3 cos *x* + 2 sin *x* and *g*(*x*) = cos *x* − sin *x*

use Corollary 5.5.3 to determine the value of

⟨*f*, *g*⟩ = (1/*π*) ∫[−*π*, *π*] *f*(*x*)*g*(*x*) *dx*

- The set

*S* = {1/√2, cos *x*, cos 2*x*, cos 3*x*, cos 4*x*}

is an orthonormal set of vectors in *C*[−*π*, *π*] with inner product defined by (2).

**(a) **Use trigonometric identities to write the function

sin4 *x *as a linear combination of elements

of *S*.

**(b) **Use part (a) and Theorem 5.5.2 to find the values of the following integrals:

**(a) **∫[−*π*, *π*] sin4 *x* cos *x* *dx* **(b) **∫[−*π*, *π*] sin4 *x* cos 2*x* *dx*

**(c) **∫[−*π*, *π*] sin4 *x* cos 3*x* *dx* **(d) **∫[−*π*, *π*] sin4 *x* cos 4*x* *dx*

- Write out the Fourier matrix *F*8. Show that *F*8*P*8 can be partitioned into block form:

⎡*F*4  *D*4*F*4⎤
⎣*F*4 −*D*4*F*4⎦

- Prove that the transpose of an orthogonal matrix is

an orthogonal matrix.

- If *Q* is an *n* × *n* orthogonal matrix and **x** and

**y **are nonzero vectors in R*n*, then how does the

angle between *Q***x **and *Q***y **compare with the angle

between **x **and **y**? Prove your answer.

- Let *Q* be an *n* × *n* orthogonal matrix. Use mathematical

induction to prove each of the following.

**(a) **(*Q**m*)−1 = (*Q**T *)*m *= (*Q**m*)*T *for any positive

integer *m*.

**(b) **‖*Q**m***x**‖ = ‖**x**‖ for any **x** ∈ R*n*.

- Let **u** be a unit vector in R*n* and let *H* = *I* − 2**uu***T*.

Show that *H *is both orthogonal and symmetric and

hence is its own inverse.

- Let *Q* be an orthogonal matrix and let *d* = det(*Q*).

Show that |*d*| = 1.

- Show that the product of two orthogonal matrices

is also an orthogonal matrix. Is the product of

two permutation matrices a permutation matrix?

Explain.

- How many *n* × *n* permutation matrices are there?

- Show that if *P* is a symmetric permutation matrix

then *P*2*k *= *I *and *P*2*k*+1 = *P*.

- Show that if *U* is an *n* × *n* orthogonal matrix then

**u**1**u**1*T* + **u**2**u**2*T* + · · · + **u***n***u***n**T* = *I*

- Use mathematical induction to show that if *Q* ∈ R*n*×*n* is both upper triangular and orthogonal, then **q***j* = ±**e***j*, *j* = 1, *. . .*, *n*.

- Let

*A* =
⎡1/2 −1/2⎤
⎢1/2 −1/2⎥
⎢1/2  1/2⎥
⎣1/2  1/2⎦

**(a) **Show that the column vectors of *A *form an

orthonormal set in R4.

**(b) **Solve the least squares problem *A***x **= **b **for

each of the following choices of **b**.

**(a) b **= (4, 0, 0, 0)*T ***(b) b **= (1, 2, 3, 4)*T*

**(c) b **= (1, 1, 2, 2)*T*

- Let *A* be the matrix given in Exercise 21.

**(a) **Find the projection matrix *P *that projects vectors

in R4 onto *R*(*A*).

**(b) **For each of your solutions **x **to Exercise 21(b),

compute *A***x **and compare it with *P***b**.

- Let *A* be the matrix given in Exercise 21.

**(a) **Find an orthonormal basis for *N*(*A**T *).

**(b) **Determine the projection matrix *Q *that projects

vectors in R4 onto *N*(*A**T *).

- Let *A* be an *m* × *n* matrix, let *P* be the projection

matrix that projects vectors in R*m *onto *R*(*A*), and

let *Q *be the projection matrix that projects vectors

in R*n *onto *R*(*A**T *). Show that

**(a) ***I *− *P *is the projection matrix from R*m *onto

*N*(*A**T *).

**(b) ***I *− *Q *is the projection matrix from R*n *onto

*N*(*A*).

- Let *P* be the projection matrix corresponding to a

subspace *S *of R*m*. Show that

**(a) ***P*2 = *P ***(b) ***P**T *= *P*

- Let *A* be an *m* × *n* matrix whose column vectors are

mutually orthogonal and let **b **∈ R*m*. Show that if **y**

is the least squares solution of the system *A***x **= **b**,

then

*y**i* = **b***T***a***i* / **a***i**T***a***i*, *i* = 1, *. . .*, *n*

- Let **v** be a vector in an inner product space *V* and let **p** be the projection of **v** onto an *n*-dimensional subspace *S* of *V*. Show that ‖**p**‖ ≤ ‖**v**‖. Under what conditions does equality occur?

- Let **v** be a vector in an inner product space *V* and let **p** be the projection of **v** onto an *n*-dimensional subspace *S* of *V*. Show that ‖**p**‖² = ⟨**p**, **v**⟩.

- Given the vector space *C*[−1, 1] with inner product

⟨*f*, *g*⟩ = ∫[−1, 1] *f*(*x*)*g*(*x*) *dx*

and norm

‖*f*‖ = ⟨*f*, *f*⟩1*/*2

**(a) **Show that the vectors 1 and *x *are orthogonal.

**(b) **Compute ‖1‖ and ‖*x*‖.

**(c) **Find the best least squares approximation to

*x*1*/*3 on [−1, 1] by a linear function *l*(*x*) =

*c*11 + *c*2*x*.

**(d) **Sketch the graphs of *x*1*/*3 and *l*(*x*) on [−1, 1].

- Consider the inner product space *C*[0, 1] with inner product defined by

⟨*f*, *g*⟩ = ∫[0, 1] *f*(*x*)*g*(*x*) *dx*

Let *S *be the subspace spanned by the vectors 1 and

2*x *− 1.

**(a) **Show that 1 and 2*x *− 1 are orthogonal.

**(b) **Determine ‖1‖ and ‖2*x* − 1‖.

**(c) **Find the best least squares approximation to √*x* by a function from the subspace *S*.

- Let

*S* = {1/√2, cos *x*, cos 2*x*, *. . .*, cos *nx*, sin *x*, sin 2*x*, *. . .*, sin *nx*}

Show that *S* is an orthonormal set in *C*[−*π*, *π*] with inner product defined by (2).

- Find the best least squares approximation to *f*(*x*) =

|*x*| on [−*π*, *π*] by a trigonometric polynomial of

degree less than or equal to 2.

- Let {**x**1, **x**2, *. . .*, **x***k*, **x***k*+1, *. . .*, **x***n*} be an orthonormal

basis for an inner product space *V*. Let *S*1 be the

subspace of *V *spanned by **x**1, *. . . *, **x***k*, and let *S*2 be

the subspace spanned by **x***k*+1, **x***k*+2, *. . . *, **x***n*. Show

that *S*1 ⊥ *S*2.

- Let **x** be an element of the inner product space *V* in

Exercise 33, and let **p**1 and **p**2 be the projections of

**x **onto *S*1 and *S*2, respectively. Show that

**(a) x** = **p**1 + **p**2.

**(b) **if **x** ∈ *S*1⊥, then **p**1 = **0** and hence *S*1⊥ = *S*2.

- Let *S* be a subspace of an inner product space *V*.

Let {**x**1, *. . . *, **x***n*} be an orthogonal basis for *S *and

let **x **∈ *V*. Show that the best least squares approximation

to **x **by elements of *S *is given by

**p** = Σ*i*=1 to *n* ( ⟨**x**, **x***i*⟩ / ⟨**x***i*, **x***i*⟩ ) **x***i*

- A (real or complex) scalar *u* is said to be an *n*th root of unity if *u**n* = 1.

**(a) **Show that if *u* is an *n*th root of unity and *u* ≠ 1, then

1 + *u* + *u*2 + · · · + *u**n*−1 = 0

[*Hint*: 1−*u**n *= (1−*u*)(1+*u*+*u*2+· · ·+*u**n*−1)]

**(b) **Let *ω**n* = *e*2*π**i*/*n*. Use Euler's formula (*e**i**θ* = cos *θ* + *i* sin *θ*) to show that *ω**n* is an *n*th root of unity.

**(c) **Show that if *j* and *k* are positive integers and if *u**j* = *ω**n**j*−1 and *z**k* = *ω**n*−(*k*−1), then *u**j*, *z**k*, and *u**j**z**k* are all *n*th roots of unity.

- Let *ω**n*, *u**j*, and *z**k* be defined as in Exercise 36. If *F**n* is the *n* × *n* Fourier matrix, then its (*j*, *s*) entry is

*f**js* = *ω**n*(*j*−1)(*s*−1) = *u**j**s*−1

Let *G**n* be the matrix defined by

*g**sk* = 1/*f**sk* = *ω**n*−(*s*−1)(*k*−1) = *z**k**s*−1, 1 ≤ *s* ≤ *n*, 1 ≤ *k* ≤ *n*

Show that the (*j*, *k*) entry of *F**n**G**n* is

1 + *u**j**z**k* + (*u**j**z**k*)2 + · · · + (*u**j**z**k*)*n*−1

- Use the results from Exercises 36 and 37 to show that *F**n* is nonsingular and

*F**n*−1 = (1/*n*) *G**n* = (1/*n*) *F̄**n*

where *F̄**n* is the matrix whose (*i*, *j*) entry is the complex conjugate of *f**ij*.

SECTION 5.6 EXERCISES

- For each of the following, use the Gram–Schmidt

process to find an orthonormal basis for *R*(*A*).

**(a) ***A* =
⎡−1 3⎤
⎣ 1 5⎦

**(b) ***A* =
⎡2  5⎤
⎣1 10⎦

- Factor each of the matrices in Exercise 1 into a

product *QR*, where *Q *is an orthogonal matrix and

*R *is upper triangular.

- Given the basis {(1, 2, −2)*T*, (4, 3, 2)*T*, (1, 2, 1)*T*} for

R3, use the Gram–Schmidt process to obtain an

orthonormal basis.
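A minimal classical Gram–Schmidt sketch applied to this basis (columns of `V`): each vector loses its projections onto the already-accepted directions and is then normalized.

```python
import numpy as np

# Classical Gram-Schmidt: orthonormalize the columns of V.
def gram_schmidt(V):
    Q = []
    for v in V.T:
        w = v - sum((q @ v) * q for q in Q)   # remove components along earlier q's
        Q.append(w / np.linalg.norm(w))
    return np.column_stack(Q)

V = np.array([[1., 4., 1.], [2., 3., 2.], [-2., 2., 1.]])
Q = gram_schmidt(V)
print(np.round(Q.T @ Q, 10))   # identity: the columns are orthonormal
```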

- Consider the vector space *C*[−1, 1] with inner product defined by

⟨*f*, *g*⟩ = ∫[−1, 1] *f*(*x*)*g*(*x*) *dx*

Find an orthonormal basis for the subspace spanned

by 1, *x*, and *x*2.

- Let

*A* =
⎡2 1⎤
⎢1 1⎥
⎣2 1⎦
and **b** = (12, 6, 18)*T*

**(a) **Use the Gram–Schmidt process to find an orthonormal

basis for the column space of *A*.

**(b) **Factor *A *into a product *QR*, where *Q *has an orthonormal

set of column vectors and *R *is upper

triangular.

**(c) **Solve the least squares problem *A***x** = **b**.
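Parts (b) and (c) are linked: once *A* = *QR* with *Q**T**Q* = *I*, the least squares problem collapses to the triangular system *R***x** = *Q**T***b**. A sketch using `numpy.linalg.qr` in place of hand Gram–Schmidt:

```python
import numpy as np

# Least squares via QR: if A = QR, the normal equations reduce to R x = Q^T b.
A = np.array([[2., 1.], [1., 1.], [2., 1.]])
b = np.array([12., 6., 18.])

Q, R = np.linalg.qr(A)                 # reduced QR: Q is 3x2, R is 2x2
xhat = np.linalg.solve(R, Q.T @ b)
print(xhat)    # [ 9. -3.]
```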

- Repeat Exercise 5 using

*A* =
⎡3 −1⎤
⎢4  2⎥
⎣0  2⎦
and **b** = (0, 20, 10)*T*

- Given **x**1 = (1/2)(1, 1, 1, −1)*T* and **x**2 = (1/6)(1, 1, 3, 5)*T*, verify that these vectors form an orthonormal set in R4. Extend this set to an orthonormal basis for R4 by finding an orthonormal basis for the null space of

⎡1 1 1 −1⎤
⎣1 1 3  5⎦

[*Hint*: First find a basis for the null space and then

use the Gram–Schmidt process.]

- Use the Gram–Schmidt process to find an orthonormal

basis for the subspace of R4 spanned

by **x**1 = (4, 2, 2, 1)*T *, **x**2 = (2, 0, 0, 2)*T*, and **x**3 =

(1, 1,−1, 1)*T *.

- Repeat Exercise 8 using the modified Gram–

Schmidt process and compare answers.

- Let *A* be an *m* × 2 matrix. Show that if both the

classical Gram–Schmidt process and the modified

Gram–Schmidt process are applied to the column

vectors of *A*, then both algorithms will produce the

exact same QR factorization, even when the computations

are carried out in finite-precision arithmetic

(i.e., show that both algorithms will perform

the exact same arithmetic computations).

- Let *A* be an *m* × 3 matrix. Let *QR* be the QR factorization

obtained when the classical Gram–Schmidt

process is applied to the column vectors of *A*, and

let Q̃R̃ be the factorization obtained when the modified

Gram–Schmidt process is used. Show that

if all computations were carried out using exact

arithmetic then we would have

Q̃ = *Q* and R̃ = *R*

and show that when the computations are done in

finite-precision arithmetic, r̃23 will not necessarily
be equal to *r*23 and consequently r̃33 and q̃3 will not
necessarily be the same as *r*33 and **q**3.

- What will happen if the Gram–Schmidt process is

applied to a set of vectors {**v**1, **v**2, **v**3}, where **v**1 and

**v**2 are linearly independent, but **v**3 ∈ Span(**v**1, **v**2)?

Will the process fail? If so, how? Explain.

- Let *A* be an *m* × *n* matrix of rank *n* and let **b** ∈ R*m*.

Show that if *Q *and *R *are the matrices derived from

applying the Gram–Schmidt process to the column

vectors of *A *and

**p** = *c*1**q**1 + *c*2**q**2 + · · · + *c*n**q**n

is the projection of **b **onto *R*(*A*), then

**(a) c **= *Q**T***b (b) p **= *QQ**T***b**

**(c) ***QQ**T *= *A*(*A**T**A*)−1*A**T*

- Let *U* be an *m*-dimensional subspace of R*n* and

let *V *be a *k*-dimensional subspace of *U*, where

0 < *k* < *m*.

**(a) **Show that any orthonormal basis

{**v**1, **v**2, *. . . *, **v***k*}

for *V *can be expanded to form an orthonormal

basis {**v**1, **v**2, *. . . *, **v***k*, **v***k*+1, *. . . *, **v***m*} for *U*.

**(b) **Show that if *W *= Span(**v***k*+1, **v***k*+2 *. . . *, **v***m*),

then *U *= *V *⊕ *W*.

- (Dimension Theorem) Let *U* and *V* be subspaces

of R*n*. In the case that *U *∩ *V *= {**0**}, we have the

following dimension relation

dim (*U *+ *V*) = dim *U *+ dim *V*

(See Exercise 18 in Section 3.4 of Chapter 3.) Make

use of the result from Exercise 14 to prove the more

general theorem

dim (*U* + *V*) = dim *U* + dim *V* − dim(*U* ∩ *V*)

SECTION 5.7 EXERCISES

- Use the recursion formulas to calculate (a) *T*4, *T*5 and (b) *H*4, *H*5.

- Let *p*0(*x*), *p*1(*x*), and *p*2(*x*) be orthogonal with respect to the inner product

⟨p(x), q(x)⟩ = ∫₋₁¹ p(x)q(x)/(1 + x²) dx

Use Theorem 5.7.2 to calculate *p*1(*x*) and *p*2(*x*) if all polynomials have lead coefficient 1.

- Show that the Chebyshev polynomials have the

following properties:

**(a)** 2*T*m(*x*)*T*n(*x*) = *T*m+n(*x*) + *T*m−n(*x*), for *m* > *n*

**(b) ***T**m*(*T**n*(*x*)) = *T**mn*(*x*)

- Find the best quadratic least squares approximation to e^x on [−1, 1] with respect to the inner product

⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx
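The best quadratic approximation is the projection of e^x onto span{P0, P1, P2}, with Fourier coefficients c_k = ((2k+1)/2)⟨e^x, P_k⟩. A Python/NumPy sketch of that projection (the quadrature-based evaluation is our shortcut; the exercise intends the integrals to be done by hand):

```python
import numpy as np

# 20-point Gauss-Legendre rule on [-1, 1]; far more than enough accuracy
nodes, weights = np.polynomial.legendre.leggauss(20)
f = np.exp(nodes)

# c_k = (2k+1)/2 * <e^x, P_k>, the Legendre-Fourier coefficients
coeffs = []
for k in range(3):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    coeffs.append((2 * k + 1) / 2 * np.sum(weights * f * Pk))

# Best least squares quadratic: c0*P0 + c1*P1 + c2*P2
approx = np.polynomial.legendre.Legendre(coeffs)
```

The first two coefficients have closed forms that a hand computation should reproduce: c0 = sinh(1) and c1 = 3/e.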

- Let *p*0, *p*1, *. . .* be a sequence of orthogonal polynomials and let *a*n denote the lead coefficient of *p*n. Prove that

‖p_n‖² = a_n ⟨x^n, p_n⟩

- Let *T*n(*x*) denote the Chebyshev polynomial of degree *n* and define

U_{n−1}(x) = (1/n) T′_n(x)

for *n* = 1, 2, *. . .* .

**(a)** Compute *U*0(*x*), *U*1(*x*), and *U*2(*x*).

**(b)** Show that if *x* = cos *θ*, then

U_{n−1}(x) = (sin *nθ*)/(sin *θ*)

- Let *U*n−1(*x*) be defined as in Exercise 6 for *n* ≥ 1

and define *U*−1(*x*) = 0. Show that

**(a) ***T**n*(*x*) = *U**n*(*x*) − *xU**n*−1(*x*), for *n *≥ 0

**(b) ***U**n*(*x*) = 2*xU**n*−1(*x*) − *U**n*−2(*x*), for *n *≥ 1

- Show that the *U*i’s defined in Exercise 6 are orthogonal with respect to the inner product

⟨p, q⟩ = ∫₋₁¹ p(x)q(x)(1 − x²)^(1/2) dx

The *U**i*’s are called *Chebyshev polynomials of the*

*second kind.*

- Verify that the Legendre polynomial *P*n(*x*) satisfies the second-order equation

(1 − x²)y″ − 2xy′ + n(n + 1)y = 0

for *n *= 0, 1, 2.

- Prove each of the following.

**(a)** H′_n(x) = 2nH_{n−1}(x), *n* = 0, 1, *. . .*

**(b)** H″_n(x) − 2xH′_n(x) + 2nH_n(x) = 0, *n* = 0, 1, *. . .*

- Given a function *f*(*x*) that passes through the points

(1, 2), (2,−1), and (3, 4), use the Lagrange interpolating

formula to construct a second-degree

polynomial that interpolates *f *at the given points.
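The Lagrange formula for this data can be checked with a few lines of Python (a plain evaluation routine of our own, mirroring the formula rather than any library call):

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, -1.0, 4.0]

def lagrange(x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        Li = 1.0
        for j in range(len(xs)):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])   # Lagrange basis function
        total += ys[i] * Li
    return total
```

Expanding by hand gives P(x) = 4x² − 15x + 13, which agrees with this routine at the three nodes.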

- Show that if *f*(*x*) is a polynomial of degree less than *n*, then *f*(*x*) must equal the interpolating polynomial *P*(*x*) in (7), and hence the sum in (7) gives the exact value for ∫ₐᵇ f(x)w(x) dx.

- Use the zeros of the Legendre polynomial *P*2(*x*) to obtain a two-point quadrature formula

∫₋₁¹ f(x) dx ≈ A₁f(x₁) + A₂f(x₂)

**(a)** For what degree polynomials will the quadrature formula in Exercise 13 be exact?

**(b)** Use the formula from Exercise 13 to approximate

∫₋₁¹ (x³ + 3x² + 1) dx and ∫₋₁¹ 1/(1 + x²) dx

How do the approximations compare with the

actual values?
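The zeros of P₂(x) = (3x² − 1)/2 are ±1/√3, and both weights turn out to equal 1 (consistent with Exercise 15(b), since they must sum to 2). A Python sketch of the resulting rule, applied to the two integrands of part (b):

```python
import numpy as np

# Two-point Gauss-Legendre rule: nodes are the zeros of P_2, weights are 1
nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)
weights = np.array([1.0, 1.0])

def quad2(f):
    """Approximate the integral of f over [-1, 1] by the two-point rule."""
    return float(weights @ f(nodes))

approx_poly = quad2(lambda x: x**3 + 3 * x**2 + 1)   # exact: rule handles degree < 4
approx_rat  = quad2(lambda x: 1.0 / (1.0 + x**2))    # compare with pi/2
```

The cubic integrand is reproduced exactly (the integral is 4), while 1/(1 + x²) gives 1.5 against the true value π/2 ≈ 1.5708.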

- Let *x*1, *x*2, . . . , *x*n be distinct points in the interval [−1, 1] and let

A_i = ∫₋₁¹ L_i(x) dx, *i* = 1, *. . .*, *n*

where the *L*i’s are the Lagrange functions for the points *x*1, *x*2, . . . , *x*n.

**(a)** Explain why the quadrature formula

∫₋₁¹ f(x) dx = A₁f(x₁) + A₂f(x₂) + · · · + A_n f(x_n)

will yield the exact value of the integral

whenever *f *(*x*) is a polynomial of degree less

than *n*.

**(b) **Apply the quadrature formula to a polynomial

of degree 0 and show that

*A*1 + *A*2 +· · ·+*A**n *= 2

- Let *x*1, *x*2, . . . , *x*n be the roots of the Legendre polynomial *P*n. If the *A*i’s are defined as in Exercise 15, then the quadrature formula

∫₋₁¹ f(x) dx = A₁f(x₁) + A₂f(x₂) + · · · + A_n f(x_n)

will be exact for all polynomials of degree less than 2*n*.

**(a)** Show that if 1 ≤ *j* < 2*n*, then

P_j(x₁)A₁ + P_j(x₂)A₂ + · · · + P_j(x_n)A_n = ⟨1, P_j⟩ = 0

**(b) **Use the results from part (a) and from Exercise

15 to set up a nonhomogeneous *n *× *n*

linear system for determining the coefficients

*A*1, *A*2, . . . , *A**n*.

- Let *Q*0(*x*), *Q*1(*x*), *. . .* be an orthonormal sequence of polynomials; that is, it is an orthogonal sequence of polynomials and ‖*Q*k‖ = 1 for each *k*.

**(a) **How can the recursion relation in Theorem

5.7.2 be simplified in the case of an

orthonormal sequence of polynomials?

**(b) **Let *λ *be a root of *Q**n*. Show that *λ *must satisfy the matrix equation

*J***q** = *λ***q**, where *J* is the symmetric tridiagonal matrix

[ β1 α1; α1 β2 α2; ⋱ ⋱ ⋱; α_{n−2} β_{n−1} α_{n−1}; α_{n−1} β_n ]

and **q** = (*Q*0(*λ*), *Q*1(*λ*), *. . .*, *Q*n−1(*λ*))^T,

where the *α**i*’s and *β**j*’s are the coefficients from the recursion equations.

**MATLAB EXERCISES**

- Set

**x** = [0 : 4, 4, −4, 1, 1]′ and **y** = **ones**(9, 1)

**(a)** Use the MATLAB function **norm** to compute the values of ‖**x**‖, ‖**y**‖, and ‖**x** + **y**‖, and to verify that the triangle inequality holds. Use MATLAB also to verify that the parallelogram law

‖**x** + **y**‖² + ‖**x** − **y**‖² = 2(‖**x**‖² + ‖**y**‖²)

is satisfied.

**(b)** If

*t* = **x**^T**y** / (‖**x**‖‖**y**‖)

then why do we know that |*t*| must be less

than or equal to 1? Use MATLAB to compute

the value of *t *and use the MATLAB function

**acos **to compute the angle between **x **and **y**.

Convert the angle to degrees by multiplying by

180*/π*. (Note that the number *π *is given by **pi**

in MATLAB.)

**(c) **Use MATLAB to compute the vector projection

**p **of **x **onto **y**. Set **z **= **x **− **p **and verify that

**z **is orthogonal to **p **by computing the scalar

product of the two vectors. Compute ‖**x**‖² and ‖**z**‖² + ‖**p**‖² and verify that the Pythagorean law

is satisfied.

- (Least Squares Fit to a Data Set by a Linear Function)

The following table of *x *and *y *values was given

in Section 5.3 of this chapter (see Figure 5.3.3).

*x *−1.0 0.0 2.1 2.3 2.4 5.3 6.0 6.5 8.0

*y *−1.02 −0.52 0.55 0.70 0.70 2.13 2.52 2.82 3.54

The nine data points are nearly linear and hence

the data can be approximated by a linear function

*z *= *c*1*x *+ *c*2. Enter the *x *and *y *coordinates of the

data points as column vectors **x **and **y**, respectively.

Set *V* = [**x**, **ones**(**size**(**x**))] and use the MATLAB “\” operation to compute the coefficients *c*1 and *c*2 as the least squares solution to the 9 × 2 linear

system *V***c **= **y**. To see the results graphically,

set

**w **= −1 : 0.1 : 8

and

**z **= *c*(1) ∗ **w **+ *c*(2) ∗ **ones**(**size**(**w**))

and plot the original data points and the least squares

linear fit, using the MATLAB command

**plot**(**x**, **y**, ‘*x*’,**w**, **z**)
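For readers without MATLAB, the same fit can be sketched in Python/NumPy, where `np.linalg.lstsq` plays the role of the “\” operation (variable names follow the exercise; the plotting step is omitted):

```python
import numpy as np

# Data from Section 5.3 (Figure 5.3.3)
x = np.array([-1.0, 0.0, 2.1, 2.3, 2.4, 5.3, 6.0, 6.5, 8.0])
y = np.array([-1.02, -0.52, 0.55, 0.70, 0.70, 2.13, 2.52, 2.82, 3.54])

# Overdetermined 9x2 system V c = y, solved in the least squares sense
V = np.column_stack([x, np.ones_like(x)])
c, *_ = np.linalg.lstsq(V, y, rcond=None)
c1, c2 = c      # slope and intercept of the least squares line
```

The nine points lie close to the line z ≈ 0.5x − 0.5, so the computed slope and intercept should land near those values.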

- (Construction of Temperature Profiles by Least

Squares Polynomials) Among the important inputs

in weather forecasting models are data sets consisting

of temperature values at various parts of the

atmosphere. These values are either measured directly

using weather balloons or inferred from remote

soundings taken by weather satellites. A typical set

of RAOB (weather balloon) data is given next. The

temperature *T *in kelvins may be considered as a

function of *p*, the atmospheric pressure measured in

decibars. Pressures in the range from 1 to 3 decibars

correspond to the top of the atmosphere, and those

in the range from 9 to 10 decibars correspond to the

lower part of the atmosphere.

*p *1 2 3 4 5 6 7 8 9 10

*T *222 227 223 233 244 253 260 266 270 266

**(a)** Enter the pressure values as a column vector **p** by setting **p** = [1 : 10]′, and enter the temperature

values as a column vector **T**. To find

the best least squares fit to the data by a linear

function *c*1*x*+*c*2, set up an overdetermined system

*V***c **= **T**. The coefficient matrix *V *can be

generated in MATLAB by setting

*V *= [ **p**, **ones**(10, 1)]

or, alternatively, by setting

*A *= **vander**(**p**); *V *= *A*(:, 9 : 10)

**Note** For any vector **x** = (*x*1, *x*2, *. . .*, *x*n+1)^T, the MATLAB command **vander**(**x**) generates a full Vandermonde matrix of the form

[ x1^n x1^(n−1) · · · x1 1; x2^n x2^(n−1) · · · x2 1; ⋮ ; x(n+1)^n x(n+1)^(n−1) · · · x(n+1) 1 ]

For a linear fit, only the last two columns of

the full Vandermonde matrix are used. More

information on the **vander **function can be

obtained by typing **help vander**. Once *V*

has been constructed, the least squares solution

**c **of the system can be calculated using the

MATLAB “\” operation.

**(b) **To see how well the linear function fits the data,

define a range of pressure values by setting

**q **= 1 : 0.1 : 10;

The corresponding function values can be determined

by setting

**z **= **polyval**(**c**, **q**);

We can plot the function and the data points with

the command

**plot**(**q**, **z**, **p**, **T**, ‘*x*’)

**(c) **Let us now try to obtain a better fit by using

a cubic polynomial approximation. Again

we can calculate the coefficients of the cubic

polynomial

*c*1*x*3 + *c*2*x*2 + *c*3*x *+ *c*4

that gives the best least squares fit to the data

by finding the least squares solution of an overdetermined

system *V***c **= **T**. The coefficient

matrix *V *is determined by taking the last four

columns of the matrix *A *= **vander**(**p**). To see

the results graphically, again set

**z **= **polyval**(**c**, **q**)

and plot the cubic function and data points, using

the same plot command as before. Where do

you get the better fit, at the top or bottom of the

atmosphere?

**(d) **To obtain a good fit at both the top and bottom

of the atmosphere, try using a sixth-degree

polynomial. Determine the coefficients as before

using the last seven columns of *A*. Set **z **=

**polyval**(**c**, **q**) and plot the results.
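In Python, `np.polyfit` builds and solves the same partial Vandermonde system that the `vander`/“\” combination does, and `np.polyval` matches MATLAB's **polyval**. A hedged sketch of the linear and cubic fits from parts (a)–(c), with the plotting step omitted:

```python
import numpy as np

p = np.arange(1.0, 11.0)           # pressures, 1..10 decibars
T = np.array([222.0, 227, 233, 223, 244, 253, 260, 266, 270, 266][:10])
T = np.array([222.0, 227, 223, 233, 244, 253, 260, 266, 270, 266])

# Least squares fits: polyfit solves the overdetermined Vandermonde system
c_lin = np.polyfit(p, T, 1)        # linear fit, part (a)
c_cub = np.polyfit(p, T, 3)        # cubic fit, part (c)

q = np.arange(1.0, 10.01, 0.1)     # fine pressure grid for plotting
z_lin = np.polyval(c_lin, q)
z_cub = np.polyval(c_cub, q)
```

As the exercise suggests, the cubic fit tracks the data more closely than the line, and its residual is strictly smaller.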

- (Least Squares Circles) The parametric equations for

a circle with center (3, 1) and radius 2 are

*x* = 3 + 2 cos *t*, *y* = 1 + 2 sin *t*

Set **t **= 0 : .5 : 6 and use MATLAB to generate

vectors of *x *and *y *coordinates for the corresponding

points on the circle. Next, add some noise to your

points by setting

**x **= **x**+0.1 ∗ **rand**(1, 13)

and

**y **= **y**+0.1 ∗ **rand**(1, 13)

Use MATLAB to determine the center **c **and radius

*r *of the circle that gives the best least squares fit to

the points. Set

**t1 **= 0 : 0.1 : 6.3

**x1 **= **c(1)**+**r **∗ **cos**(**t1**)

**y1 **= **c(2)**+**r **∗ **sin**(**t1**)

and use the command

**plot**(**x1**, **y1**, **x**, **y**, **‘x’**)

to plot the circle and the data points.
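One standard way to pose the circle fit as a *linear* least squares problem is to rewrite (x − a)² + (y − b)² = r² as x² + y² = 2ax + 2by + k with k = r² − a² − b². The Python sketch below uses that linearization (this is our choice of method; the exercise leaves the formulation open), with the plotting step omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 6.0 + 1e-9, 0.5)             # 13 parameter values, as in the text
x = 3 + 2 * np.cos(t) + 0.1 * rng.random(t.size)  # noisy circle points
y = 1 + 2 * np.sin(t) + 0.1 * rng.random(t.size)

# Linearized model: x^2 + y^2 = 2*a*x + 2*b*y + k, solved by least squares
M = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
sol, *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
a, b, k = sol

c = np.array([a, b])                 # estimated center
r = np.sqrt(k + a**2 + b**2)         # estimated radius
```

With only 0.1-scale noise, the recovered center and radius should sit close to (3, 1) and 2.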

- (Fundamental Subspaces: Orthonormal Bases) The

vector spaces *N*(*A*), *R*(*A*), *N*(*A**T *), and *R*(*A**T*) are the

four fundamental subspaces associated with a matrix

*A*. We can use MATLAB to construct orthonormal

bases for each of the fundamental subspaces

associated with a given matrix. We can then construct

projection matrices corresponding to each

subspace.

**(a) **Set

*A *= **rand**(5, 2) ∗ **rand**(2, 5)

What would you expect the rank and nullity

of *A *to be? Explain. Use MATLAB to check

your answer by computing **rank**(*A*) and *Z *=

**null**(*A*). The columns of *Z *form an orthonormal

basis for *N*(*A*).

**(b)** Next, set

*Q* = **orth**(*A*), *W* = **null**(*A*′), *S* = [*Q W*]

The matrix *S* should be orthogonal. Why? Explain. Compute *S* ∗ *S*′ and compare your result to **eye**(5). In theory, *A*^T*W* and *W*^T*A* should both

consist entirely of zeros. Why? Explain. Use

MATLAB to compute *A**T**W *and *W**T**A*.

**(c) **Prove that if *Q *and *W *had been computed in

exact arithmetic, then we would have

*I *− *WW**T *= *QQ**T *and *QQ**T**A *= *A*

[*Hint*: Write *SS**T *in terms of *Q *and *W*.] Use

MATLAB to verify these identities.

**(d) **Prove that if Q had been calculated in exact

arithmetic, then we would have *QQ**T***b **= **b **for

all **b **∈ *R*(*A*). Use MATLAB to verify this property

by setting **b **= *A *∗ **rand**(5, 1) and then

computing *Q* ∗ *Q*′ ∗ **b** and comparing it with **b**.

**(e) **Since the column vectors of *Q *form an orthonormal

basis for *R*(*A*), it follows that *QQ**T *is the

projection matrix corresponding to *R*(*A*). Thus,

for any **c **∈ R5, the vector **q **= *QQ**T***c **is the

projection of **c **onto *R*(*A*). Set **c **= **rand**(5, 1)

and compute the projection vector **q**. The vector

**r **= **c **− **q **should be in *N*(*A**T *). Why? Explain.

Use MATLAB to compute *A*′ ∗ **r**.

**(f) **The matrix *WW**T *is the projection matrix corresponding

to *N*(*A**T *). Use MATLAB to compute

the projection **w **= *WW**T***c **of **c **onto *N*(*A**T*) and

compare the result to **r**.

**(g)** Set *Y* = **orth**(*A*′) and use it to compute the

projection matrix *U *corresponding to *R*(*A**T *). Let

**b **= **rand**(5, 1) and compute the projection

vector **y **= *U *∗ **b **of **b **onto *R*(*A**T *). Compute

also *U *∗ **y **and compare it with **y**. The vector

**s **= **b **− **y **should be in *N*(*A*). Why? Explain.

Use MATLAB to compute *A *∗ **s**.

**(h) **Use the matrix *Z *= **null**(*A*) to compute

the projection matrix *V *corresponding to *N*(*A*).

Compute *V *∗ **b **and compare it with **s**.
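The core of parts (a)–(f) can be reproduced in Python, where a single SVD supplies what **orth** and **null** return in MATLAB: the leading left singular vectors span *R*(*A*) and the trailing ones span *N*(*A*^T). A sketch under those substitutions (all names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 2)) @ rng.random((2, 5))   # rank 2 with probability 1

# Orthonormal bases from the SVD (stand-ins for MATLAB's orth/null)
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # numerical rank, expected to be 2
Q = U[:, :r]                      # orthonormal basis for R(A)
W = U[:, r:]                      # orthonormal basis for N(A^T)

P_range = Q @ Q.T                 # projection matrix onto R(A)
P_lnull = W @ W.T                 # projection matrix onto N(A^T)
```

Because [*Q W*] is a 5 × 5 orthogonal matrix, the two projections sum to the identity, which is exactly the identity I − WW^T = QQ^T asked about in part (c).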

**CHAPTER TEST A ****True or False**

For each statement that follows, answer *true *if the statement

is always true and *false *otherwise. In the case of

a true statement, explain or prove your answer. In the

case of a false statement, give an example to show that

the statement is not always true.

- If **x** and **y** are nonzero vectors in R*n*, then the vector projection of **x** onto **y** is equal to the vector projection of **y** onto **x**.

- If **x** and **y** are unit vectors in R*n* and |**x**^T**y**| = 1, then **x** and **y** are linearly independent.

- If *U*, *V*, and *W* are subspaces of R3 and if *U* ⊥ *V* and *V* ⊥ *W*, then *U* ⊥ *W*.

- It is possible to find a nonzero vector **y** in the column space of *A* such that *A*^T**y** = **0**.

- If *A* is an *m* × *n* matrix, then *AA*^T and *A*^T*A* have the same rank.

- If an *m* × *n* matrix *A* has linearly dependent columns and **b** is a vector in R*m*, then **b** does not have a unique projection onto the column space of *A*.

- If *N*(*A*) = {**0**}, then the system *A***x** = **b** will have a unique least squares solution.

- If *Q*1 and *Q*2 are orthogonal matrices, then *Q*1*Q*2 is also an orthogonal matrix.

- If {**u**1, **u**2, *. . .*, **u***k*} is an orthonormal set of vectors in R*n* and *U* = (**u**1, **u**2, *. . .*, **u***k*), then *U*^T*U* = *I*k (the *k* × *k* identity matrix).

- If {**u**1, **u**2, *. . .*, **u***k*} is an orthonormal set of vectors in R*n* and *U* = (**u**1, **u**2, *. . .*, **u***k*), then *UU*^T = *I*n (the *n* × *n* identity matrix).

**CHAPTER TEST B**

- Let

**x** = (1, 1, 2, 2)^T and **y** = (−2, 1, 2, 0)^T

**(a) **Find the vector projection **p **of **x **onto **y**.

**(b) **Verify that **x **− **p **is orthogonal to **p**.

**(c) **Verify that the Pythagorean Law holds for **x**, **p**,

and **x **− **p**.
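All three parts of this problem are easy to confirm numerically with the projection formula **p** = (⟨**x**, **y**⟩/⟨**y**, **y**⟩)**y**. A Python/NumPy check (our own verification, not the requested hand work):

```python
import numpy as np

x = np.array([1.0, 1.0, 2.0, 2.0])
y = np.array([-2.0, 1.0, 2.0, 0.0])

p = (x @ y) / (y @ y) * y      # vector projection of x onto y
z = x - p                       # component of x orthogonal to y
```

Here ⟨**x**, **y**⟩ = 3 and ⟨**y**, **y**⟩ = 9, so **p** = (1/3)**y** = (−2/3, 1/3, 2/3, 0)^T, and ‖**x**‖² = ‖**p**‖² + ‖**x** − **p**‖² reduces to 10 = 1 + 9.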

- Let **v**1 and **v**2 be vectors in an inner product space *V*.

**(a)** Is it possible for |⟨**v**1, **v**2⟩| to be greater than ‖**v**1‖ ‖**v**2‖? Explain.

**(b)** If

|⟨**v**1, **v**2⟩| = ‖**v**1‖ ‖**v**2‖

what can you conclude about the vectors **v**1 and **v**2? Explain.

- Let **v**1 and **v**2 be vectors in an inner product space *V*. Show that

‖**v**1 + **v**2‖² ≤ (‖**v**1‖ + ‖**v**2‖)²

- Let *A* be a 7 × 5 matrix with rank equal to 4 and let **b** be a vector in R7. The four fundamental subspaces

associated with *A *are *R*(*A*), *N*(*A**T *), *R*(*A**T *), and

*N*(*A*).

**(a) **What is the dimension of *N*(*A**T *), and which of

the other fundamental subspaces is the orthogonal

complement of *N*(*A**T *)?

**(b)** If **x** is a vector in *R*(*A*) and *A*^T**x** = **0**, then what can you conclude about the value of ‖**x**‖?

Explain.

**(c) **What is the dimension of *N*(*A**T**A*)? How many

solutions will the least squares system *A***x **= **b**

have? Explain.

- Let **x** and **y** be vectors in R*n* and let *Q* be an *n* × *n*

orthogonal matrix. Show that if

**z **= *Q***x **and **w **= *Q***y**

then the angle between **z **and **w **is equal to the angle

between **x **and **y**.

- Let *S* be the two-dimensional subspace of R3 spanned by

**x**1 = (1, 0, 2)^T and **x**2 = (0, 1, −2)^T

**(a) **Find a basis for *S*⊥.

**(b)** Give a geometric description of *S* and *S*⊥.

**(c) **Determine the projection matrix *P *that projects

vectors in R3 onto *S*⊥.

- Given the table of data points

*x *−1 1 2

*y *1 3 3

find the best least squares fit by a linear function

*f *(*x*) = *c*1 + *c*2*x*.

- Let {**u**1, **u**2, **u**3} be an orthonormal basis for a three-dimensional subspace *S* of an inner product space *V*,

and let

**x **= 2**u**1−2**u**2+**u**3 and **y **= 3**u**1+**u**2−4**u**3

**(a)** Determine the value of ⟨**x**, **y**⟩.

**(b)** Determine the value of ‖**x**‖.

- Let *A* be a 7 × 5 matrix of rank 4. Let *P* and *Q* be the

projection matrices that project vectors in R7 onto

*R*(*A*) and *N*(*A**T *), respectively.

**(a) **Show that *PQ *= *O*.

**(b) **Show that *P *+ *Q *= *I*.

- Given

*A* = [1 −3 −5; 1 1 −2; 1 −3 1; 1 1 4] and **b** = (−6, 1, 1, 6)^T

If the Gram–Schmidt process is applied to determine

an orthonormal basis for *R*(*A*), and a QR factorization

of *A *then, after the first two orthonormal vectors

**q**1 and **q**2 are computed, we have

*Q* = [1/2 −1/2 ∗; 1/2 1/2 ∗; 1/2 −1/2 ∗; 1/2 1/2 ∗] and *R* = [2 −2 ∗; 0 4 ∗; 0 0 ∗]

where the entries marked ∗ remain to be determined.

**(a) **Finish the process. Determine **q**3 and fill in the

third columns of *Q *and *R*.

**(b) **Use the QR factorization to find the least

squares solution of *A***x **= **b**

- The functions cos *x* and sin *x* are both unit vectors in *C*[−*π*, *π*] with inner product defined by

⟨f, g⟩ = (1/π) ∫₋π^π f(x)g(x) dx

**(a)** Show that cos *x* ⊥ sin *x*.

**(b)** Determine the value of ‖cos *x* + sin *x*‖².

- Consider the vector space *C*[−1, 1] with inner product defined by

⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx

**(a)** Show that

u₁(x) = 1/√2 and u₂(x) = (√6/2)x

form an orthonormal set of vectors.

**(b) **Use the result from part (a) to find the best least

squares approximation to *h*(*x*) = *x*^(1/3) + *x*^(2/3) by

a linear function.

** **

**Chapter 6 ****Eigenvalues**

SECTION 6.1 EXERCISES

- Find the eigenvalues and the corresponding eigenspaces

for each of the following matrices:

**(a)** [3 2; 4 1] **(b)** [6 −4; 3 −1] **(c)** [3 −1; 1 1]

**(d)** [3 −8; 2 3] **(e)** [1 1; −2 3]

**(f)** [0 1 0; 0 0 1; 0 0 0] **(g)** [1 1 1; 0 2 1; 0 0 1]

**(h)** [1 2 1; 0 3 1; 0 5 −1] **(i)** [4 −5 1; 1 0 −1; 0 1 −1]

**(j)** [−2 0 1; 1 0 −1; 0 1 −1]

**(k)** [2 0 0 0; 0 2 0 0; 0 0 3 0; 0 0 0 4]

**(l)** [3 0 0 0; 4 1 0 0; 0 0 2 1; 0 0 0 2]
- Show that the eigenvalues of a triangular matrix are

the diagonal elements of the matrix.

- Let
*A* be an *n* × *n* matrix. Prove that *A* is singular if

and only if *λ *= 0 is an eigenvalue of *A*.

- Let
*A* be a nonsingular matrix and let *λ* be an eigenvalue

of *A*. Show that 1*/λ *is an eigenvalue of *A*−1.

- Let
*A* and *B* be *n* × *n* matrices. Show that if none of

the eigenvalues of *A *are equal to 1, then the matrix

equation

*XA *+ *B *= *X*

will have a unique solution.

- Let
*λ* be an eigenvalue of *A* and let **x** be an eigenvector

belonging to *λ*. Use mathematical induction

to show that, for *m *≥ 1, *λ**m *is an eigenvalue of *A**m*

and **x **is an eigenvector of *A**m *belonging to *λ**m*.

- Let
*A* be an *n* × *n* matrix and let *B* = *I* − 2*A* + *A*².

**(a) **Show that if **x **is an eigenvector of *A *belonging

to an eigenvalue *λ*, then **x **is also an eigenvector

of *B *belonging to an eigenvalue *μ *of *B*. How

are *λ *and *μ *related?

**(b) **Show that if *λ *= 1 is an eigenvalue of *A*, then

the matrix *B *will be singular.

- An
*n* × *n* matrix *A* is said to be *idempotent* if

*A*2 = *A*. Show that if *λ *is an eigenvalue of an

idempotent matrix, then *λ *must be either 0 or 1.

- An
*n* × *n* matrix is said to be *nilpotent* if *A*^k = *O* for

some positive integer *k*. Show that all eigenvalues

of a nilpotent matrix are 0.

- Let
*A*be an*n*×*n*matrix and let*B*=*A*−*α**I*for

some scalar *α*. How do the eigenvalues of *A *and *B*

compare? Explain.

- Let
*A* be an *n* × *n* matrix and let *B* = *A* + *I*. Is it

possible for *A *and *B *to be similar? Explain.

- Show that *A* and *A*^T have the same eigenvalues.

Do they necessarily have the same eigenvectors?

Explain.

- Show that the matrix

*A *=

⎧⎪⎩

cos *θ *−sin *θ*

sin *θ *cos *θ*

⎫⎪⎭

will have complex eigenvalues if *θ *is not a multiple

of *π*. Give a geometric interpretation of this result.

- Let
*A* be a 2 × 2 matrix. If tr(*A*) = 8 and det(*A*) =

12, what are the eigenvalues of *A*?

- Let
*A* = (*a*ij) be an *n* × *n* matrix with eigenvalues

*λ*1, *. . . *, *λ**n*. Show that

*λ*j = *a*jj + Σ_{i≠j} (*a*ii − *λ*i) for *j* = 1, *. . .*, *n*

- Let
*A* be a 2 × 2 matrix and let *p*(*λ*) = *λ*² + *bλ* + *c*

be the characteristic polynomial of *A*. Show that

*b *= −tr(*A*) and *c *= det(*A*).

- Let
*λ* be a nonzero eigenvalue of *A* and let **x** be an

eigenvector belonging to *λ*. Show that *A**m***x **is also

an eigenvector belonging to *λ *for *m *= 1, 2, *. . . *.

- Let
*A* be an *n* × *n* matrix and let *λ* be an eigenvalue

of *A*. If *A* − *λI* has rank *k*, what is the dimension of

the eigenspace corresponding to *λ*? Explain.

- Let
*A* be an *n* × *n* matrix. Show that a vector **x** in

either R*n *or C*n *is an eigenvector belonging to *A *if

and only if the subspace *S *spanned by **x **and *A***x **has

dimension 1.

- Let
*α* = *a* + *bi* and *β* = *c* + *di* be complex scalars

and let *A *and *B *be matrices with complex entries.

**(a)** Show that the conjugate of *α* + *β* is ᾱ + β̄ and the conjugate of *αβ* is ᾱβ̄.

**(b)** Show that the (*i*, *j*) entries of the conjugate of *AB* and of the product ĀB̄ are equal and hence that the conjugate of *AB* equals ĀB̄.

- Let
*Q* be an orthogonal matrix.

**(a) **Show that if *λ *is an eigenvalue of *Q*, then

|*λ*| = 1.

**(b) **Show that | det(*Q*)| = 1.

- Let
*Q* be an orthogonal matrix with an eigenvalue

*λ*1 = 1 and let **x **be an eigenvector belonging to *λ*1.

Show that **x **is also an eigenvector of *Q**T *.

- Let
*Q* be a 3 × 3 orthogonal matrix whose determinant

is equal to 1.

**(a) **If the eigenvalues of *Q *are all real and if they

are ordered so that *λ*1 ≥ *λ*2 ≥ *λ*3, determine

the values of all possible triples of eigenvalues

(*λ*1, *λ*2, *λ*3).

**(b) **In the case that the eigenvalues *λ*2 and *λ*3 are

complex, what are the possible values for *λ*1?

Explain.

**(c) **Explain why *λ *= 1 must be an eigenvalue of

*Q*.

- Let
**x**1, *. . .*, **x**r be eigenvectors of an *n* × *n* matrix

*A *and let *S *be the subspace of R*n *spanned by

**x**1, **x**2, *. . . *, **x***r *. Show that *S *is *invariant *under *A *(i.e.,

show that *A***x **∈ *S *whenever **x **∈ *S*).

- Let
*A* be an *n* × *n* matrix and let *λ* be an eigenvalue

of *A*. Show that if *B *is any matrix that commutes

with *A*, then the eigenspace *N*(*A *− *λ**I*) is invariant

under *B*.

- Let
*B* = *S*^(−1)*AS* and let **x** be an eigenvector of *B*

belonging to an eigenvalue *λ*. Show that *S***x **is an

eigenvector of *A *belonging to *λ*.

- Let
*A* be an *n* × *n* matrix with an eigenvalue *λ* and

let **x **be an eigenvector belonging to *λ*. Let *S *be a

nonsingular *n*×*n *matrix and let *α *be a scalar. Show

that if

*B* = *αI* − *SAS*^(−1), **y** = *S***x**

then **y** is an eigenvector of *B*. Determine the eigenvalue of *B* corresponding to **y**.

- Show that if two
*n* × *n* matrices *A* and *B* have a

common eigenvector **x **(but not necessarily a common

eigenvalue), then **x **will also be an eigenvector

of any matrix of the form *C *= *α**A *+ *β**B*.

- Let
*A* be an *n* × *n* matrix and let *λ* be a nonzero

eigenvalue of *A*. Show that if **x **is an eigenvector

belonging to *λ*, then **x **is in the column space of

*A*. Hence the eigenspace corresponding to *λ* is a

subspace of the column space of *A*.

- Let {
{**u**1, **u**2, *. . .*, **u**n} be an orthonormal basis for R*n* and let *A* be a linear combination of the rank 1 matrices **u**1**u**1^T, **u**2**u**2^T, *. . .*, **u**n**u**n^T. If

*A* = *c*1**u**1**u**1^T + *c*2**u**2**u**2^T + · · · + *c*n**u**n**u**n^T

show that *A *is a symmetric matrix with eigenvalues

*c*1, *c*2, *. . . *, *c**n *and that **u***i *is an eigenvector

belonging to *c**i *for each *i*.

- Let
*A* be a matrix whose columns all add up to a

fixed constant *δ*. Show that *δ *is an eigenvalue of *A*.

- Let
*λ*1 and *λ*2 be distinct eigenvalues of *A*. Let **x** be

an eigenvector of *A *belonging to *λ*1 and let **y **be an

eigenvector of *A**T *belonging to *λ*2. Show that **x **and

**y **are orthogonal.

- Let
*A* and *B* be *n* × *n* matrices. Show that

**(a) **If *λ *is a nonzero eigenvalue of *AB*, then it is

also an eigenvalue of *BA*.

**(b) **If *λ *= 0 is an eigenvalue of *AB*, then *λ *= 0 is

also an eigenvalue of *BA*.

- Prove that there do not exist
*n* × *n* matrices *A* and

*B *such that

*AB *− *BA *= *I*

[*Hint*: See Exercises 10 and 33.]

- Let

*p*(*λ*) = (−1)^n (λ^n − a_{n−1}λ^(n−1) − · · · − a_1λ − a_0)

be a polynomial of degree *n* ≥ 1, and let

*C* = [ a_{n−1} a_{n−2} · · · a_1 a_0; 1 0 · · · 0 0; 0 1 · · · 0 0; ⋮ ; 0 0 · · · 1 0 ]

**(a)** Show that if *λ*i is a root of *p*(*λ*) = 0, then *λ*i is an eigenvalue of *C* with eigenvector **x** = (λ_i^(n−1), λ_i^(n−2), *. . .*, λ_i, 1)^T.

**(b) **Use part (a) to show that if *p*(*λ*) has *n *distinct

roots then *p*(*λ*) is the characteristic polynomial

of *C*.

The matrix *C *is called the *companion matrix *of

*p*(*λ*).

- The result given in Exercise 35(b) holds even if all

the eigenvalues of *p*(*λ*) are not distinct. Prove this

as follows:

**(a)** Let

*D*m(*λ*) = [ a_m a_{m−1} · · · a_1 a_0; 1 −λ · · · 0 0; ⋮ ; 0 0 · · · 1 −λ ]

and use mathematical induction to prove that

det(*D*m(*λ*)) = (−1)^m (a_mλ^m + a_{m−1}λ^(m−1) + · · · + a_1λ + a_0)

**(b)** Show that

det(*C* − *λI*) = (a_{n−1} − λ)(−λ)^(n−1) − det(*D*n−2) = *p*(*λ*)

SECTION 6.2 EXERCISES

- Find the general solution of each of the following systems:

**(a)** y′1 = *y*1 + *y*2
  y′2 = −2*y*1 + 4*y*2

**(b)** y′1 = 2*y*1 + 4*y*2
  y′2 = −*y*1 − 3*y*2

**(c)** y′1 = *y*1 − 2*y*2
  y′2 = −2*y*1 + 4*y*2

**(d)** y′1 = *y*1 − *y*2
  y′2 = *y*1 + *y*2

**(e)** y′1 = 3*y*1 − 2*y*2
  y′2 = 2*y*1 + 3*y*2

**(f)** y′1 = *y*1 + *y*3
  y′2 = 2*y*2 + 6*y*3
  y′3 = *y*2 + 3*y*3

- Solve each of the following initial value problems:

**(a)** y′1 = −*y*1 + 2*y*2
  y′2 = 2*y*1 − *y*2
  *y*1(0) = 3, *y*2(0) = 1

**(b)** y′1 = *y*1 − 2*y*2
  y′2 = 2*y*1 + *y*2
  *y*1(0) = 1, *y*2(0) = −2

**(c)** y′1 = 2*y*1 − 6*y*3
  y′2 = *y*1 − 3*y*3
  y′3 = *y*2 − 2*y*3
  *y*1(0) = *y*2(0) = *y*3(0) = 2

**(d)** y′1 = *y*1 + 2*y*3
  y′2 = *y*2 − *y*3
  y′3 = *y*1 + *y*2 + *y*3
  *y*1(0) = *y*2(0) = 1, *y*3(0) = 4
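The eigenvalue method behind these problems — write *A* = *XDX*^(−1), expand **Y**0 in the eigenvector basis, and attach a factor e^(λt) to each term — can be sketched numerically. The Python/NumPy code below solves part (a) this way (a numerical check of the hand method, with names of our own):

```python
import numpy as np

# Initial value problem 2(a): y' = A y, y(0) = (3, 1)^T
A = np.array([[-1.0, 2.0], [2.0, -1.0]])
y0 = np.array([3.0, 1.0])

lam, X = np.linalg.eig(A)      # A = X D X^{-1}; here lam is (1, -3) in some order
c = np.linalg.solve(X, y0)     # coefficients of y0 in the eigenvector basis

def y(t):
    """Y(t) = c1 e^(l1 t) x1 + c2 e^(l2 t) x2."""
    return X @ (c * np.exp(lam * t))
```

By hand the eigenpairs are λ = 1 with (1, 1)^T and λ = −3 with (1, −1)^T, giving **Y**(t) = 2e^t(1, 1)^T + e^(−3t)(1, −1)^T, so for instance **Y**(1) = (2e + e^(−3), 2e − e^(−3))^T.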

- Given that

**Y** = *c*1e^(λ1*t*)**x**1 + *c*2e^(λ2*t*)**x**2 + · · · + *c*n e^(λn*t*)**x**n

is the solution to the initial value problem **Y**′ = *A***Y**, **Y**(0) = **Y**0:

**(a)** show that

**Y**0 = *c*1**x**1 + *c*2**x**2 + · · · + *c*n**x**n

**(b)** let *X* = (**x**1, *. . .*, **x**n) and **c** = (*c*1, *. . .*, *c*n)^T. Assuming that the vectors **x**1, *. . .*, **x**n are linearly independent, show that **c** = *X*^(−1)**Y**0.

- Two tanks each contain 100 liters of a mixture. Initially,

the mixture in tank *A *contains 40 grams of

salt while tank *B *contains 20 grams of salt. Liquid

is pumped in and out of the tanks as shown in the

accompanying figure. Determine the amount of salt

in each tank at time *t*.

[Figure: two 100 L tanks *A* and *B*. Water enters tank *A* at 12 L/min, mixture is pumped from *A* to *B* at 16 L/min and from *B* back to *A* at 4 L/min, and mixture drains from *B* at 12 L/min.]

- Find the general solution of each of the following systems:

**(a)** y″1 = −2*y*2
  y″2 = *y*1 + 3*y*2

**(b)** y″1 = 2*y*1 + y′2
  y″2 = 2*y*2 + y′1

- Solve the initial value problem

y″1 = −2*y*2 + y′1 + 2y′2
y″2 = 2*y*1 + 2y′1 − y′2

*y*1(0) = 1, *y*2(0) = 0, y′1(0) = −3, y′2(0) = 2

- In Application 2, assume that the solutions are of

the form *x*1 = *a*1 sin *σ**t*, *x*2 = *a*2 sin *σ**t*. Substitute

these expressions into the system and solve for the

frequency *σ *and the amplitudes *a*1 and *a*2.

- Solve the problem in Application 2, using the initial conditions

*x*1(0) = *x*2(0) = 1, x′1(0) = 4, and x′2(0) = 2

- Two masses are connected by springs as shown in the accompanying diagram. Both springs have the same spring constant, and the end of the first spring is fixed. If *x*1 and *x*2 represent the displacements from the equilibrium position, derive a system of second-order differential equations that describes the motion of the system.

[Figure: masses *m*1 and *m*2 with displacements *x*1 and *x*2.]

- Three masses are connected by a series of springs between two fixed points as shown in the accompanying figure. Assume that the springs all have the same spring constant, and let *x*1(*t*), *x*2(*t*), and *x*3(*t*) represent the displacements of the respective masses at time *t*.

[Figure: masses *m*1, *m*2, *m*3 between fixed points *A* and *B*.]

**(a)** Derive a system of second-order differential equations that describes the motion of this system.

**(b)** Solve the system if *m*1 = *m*3 = 1/3, *m*2 = 1/4, *k* = 1, and

*x*1(0) = *x*2(0) = *x*3(0) = 1
*x*′1(0) = *x*′2(0) = *x*′3(0) = 0

- Transform the *n*th-order equation

*y*^(*n*) = *a*0*y* + *a*1*y*′ + · · · + *a*(*n*−1)*y*^(*n*−1)

into a system of first-order equations by setting *y*1 = *y* and *y*j = *y*′(j−1) for *j* = 2, . . . , *n*. Determine the characteristic polynomial of the coefficient matrix of this system.

SECTION 6.3 EXERCISES

- In each of the following, factor the matrix *A* into a product *XDX*−1, where *D* is diagonal:

**(a)** *A* =
[ 0  1 ]
[ 1  0 ]

**(b)** *A* =
[  5   6 ]
[ −2  −2 ]

**(c)** *A* =
[ 2  −8 ]
[ 1  −4 ]

**(d)** *A* =
[ 2  2   1 ]
[ 0  1   2 ]
[ 0  0  −1 ]

**(e)** *A* =
[  1  0   0 ]
[ −2  1   3 ]
[  1  1  −1 ]

**(f)** *A* =
[ 1  2  −1 ]
[ 2  4  −2 ]
[ 3  6  −3 ]

- For each of the matrices in Exercise 1, use the *XDX*−1 factorization to compute *A*6.
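The *XDX*−1 computation in Exercise 2 can be checked numerically. A minimal sketch for matrix (b) of Exercise 1, assuming NumPy is available: once *A* = *XDX*−1, only the diagonal entries of *D* need to be raised to the 6th power.

```python
import numpy as np

# Matrix (b) from Exercise 1; its eigenvalues work out to 1 and 2.
A = np.array([[5.0, 6.0], [-2.0, -2.0]])

# Factor A = X D X^{-1} from the eigendecomposition.
eigvals, X = np.linalg.eig(A)

# A^6 = X D^6 X^{-1}: raise only the diagonal of D to the 6th power.
A6 = X @ np.diag(eigvals**6) @ np.linalg.inv(X)

print(np.round(A6).astype(int))
```

The same factorization handles Exercise 3 by replacing `eigvals**6` with `1.0 / eigvals` when no eigenvalue is zero.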

- For each of the nonsingular matrices in Exercise 1, use the *XDX*−1 factorization to compute *A*−1.

- For each of the following, find a matrix *B* such that *B*2 = *A*.

**(a)** *A* =
[  2   1 ]
[ −2  −1 ]

**(b)** *A* =
[ 9  −5  3 ]
[ 0   4  3 ]
[ 0   0  1 ]

- Let *A* be a nondefective *n* × *n* matrix with diagonalizing matrix *X*. Show that the matrix *Y* = (*X*−1)T diagonalizes *A*T.

- Let *A* be a diagonalizable matrix whose eigenvalues are all either 1 or −1. Show that *A*−1 = *A*.

- Show that any 3 × 3 matrix of the form

[ *a*  1   0 ]
[ 0   *a*  1 ]
[ 0    0  *b* ]

is defective.

- For each of the following, find all possible values of the scalar *α* that make the matrix defective or show that no such values exist.

**(a)**
[ 1  1  0 ]
[ 1  1  0 ]
[ 0  0  *α* ]

**(b)**
[ 1  1  1 ]
[ 1  1  1 ]
[ 0  0  *α* ]

**(c)**
[ 1   2  0 ]
[ 2   1  0 ]
[ 2  −1  *α* ]

**(d)**
[  4   6  −2 ]
[ −1  −1   1 ]
[  0   0   *α* ]

**(e)**
[ 3*α*  1   0 ]
[ 0    *α*  0 ]
[ 0     0  *α* ]

**(f)**
[ 3*α*  0   0 ]
[ 0    *α*  1 ]
[ 0     0  *α* ]

**(g)**
[ *α* + 2   1      0 ]
[ 0      *α* + 2   0 ]
[ 0         0    2*α* ]

**(h)**
[ *α* + 2   0      0 ]
[ 0      *α* + 2   1 ]
[ 0         0    2*α* ]

- Let *A* be a 4 × 4 matrix and let *λ* be an eigenvalue of multiplicity 3. If *A* − *λI* has rank 1, is *A* defective? Explain.

- Let *A* be an *n* × *n* matrix with positive real eigenvalues *λ*1 > *λ*2 > · · · > *λ*n. Let **x**i be an eigenvector belonging to *λ*i for each *i*, and let **x** = *α*1**x**1 + · · · + *α*n**x**n.

**(a)** Show that *A*m**x** = Σ (i = 1 to n) *α*i *λ*i^m **x**i.

**(b)** Show that if *λ*1 = 1, then lim (m → ∞) *A*m**x** = *α*1**x**1.

- Let *A* be an *n* × *n* matrix with real entries and let *λ*1 = *a* + *bi* (where *a* and *b* are real and *b* ≠ 0) be an eigenvalue of *A*. Let **z**1 = **x** + *i***y** (where **x** and **y** both have real entries) be an eigenvector belonging to *λ*1 and let **z**2 = **x** − *i***y**.

**(a)** Explain why **z**1 and **z**2 must be linearly independent.

**(b)** Show that **y** ≠ **0** and that **x** and **y** are linearly independent.

- Let *A* be an *n* × *n* matrix with an eigenvalue *λ* of multiplicity *n*. Show that *A* is diagonalizable if and only if *A* = *λI*.

- Show that a nonzero nilpotent matrix is defective.

- Let *A* be a diagonalizable matrix and let *X* be the diagonalizing matrix. Show that the column vectors of *X* that correspond to nonzero eigenvalues of *A* form a basis for *R*(*A*).

- It follows from Exercise 14 that for a diagonalizable matrix the number of nonzero eigenvalues (counted according to multiplicity) equals the rank of the matrix. Give an example of a defective matrix whose rank is not equal to the number of nonzero eigenvalues.

- Let *A* be an *n* × *n* matrix and let *λ* be an eigenvalue of *A* whose eigenspace has dimension *k*, where 1 < *k* < *n*. Any basis {**x**1, . . . , **x**k} for the eigenspace can be extended to a basis {**x**1, . . . , **x**n} for R*n*. Let *X* = (**x**1, . . . , **x**n) and *B* = *X*−1*AX*.

**(a)** Show that *B* is of the form

[ *λI*  *B*12 ]
[ *O*   *B*22 ]

where *I* is the *k* × *k* identity matrix.

**(b)** Use Theorem 6.1.1 to show that *λ* is an eigenvalue of *A* with multiplicity at least *k*.

- Let **x**, **y** be nonzero vectors in R*n*, *n* ≥ 2, and let *A* = **xy**T. Show that

**(a)** *λ* = 0 is an eigenvalue of *A* with *n* − 1 linearly independent eigenvectors and consequently has multiplicity at least *n* − 1 (see Exercise 16).

**(b)** the remaining eigenvalue of *A* is *λ*n = tr *A* = **x**T**y**, and **x** is an eigenvector belonging to *λ*n.

**(c)** if *λ*n = **x**T**y** ≠ 0, then *A* is diagonalizable.

- Let *A* be a diagonalizable *n* × *n* matrix. Prove that if *B* is any matrix that is similar to *A*, then *B* is diagonalizable.

- Show that if *A* and *B* are two *n* × *n* matrices with the same diagonalizing matrix *X*, then *AB* = *BA*.

- Let *T* be an upper triangular matrix with distinct diagonal entries (i.e., *t*ii ≠ *t*jj whenever *i* ≠ *j*). Show that there is an upper triangular matrix *R* that diagonalizes *T*.

- Each year, employees at a company are given the option of donating to a local charity as part of a payroll deduction plan. In general, 80 percent of the employees enrolled in the plan in any one year will choose to sign up again the following year, and 30 percent of the unenrolled will choose to enroll the following year. Determine the transition matrix for the Markov process and find the steady-state vector. What percentage of employees would you expect to find enrolled in the program in the long run?
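The setup can be sketched numerically. In the sketch below the state vector is (enrolled, unenrolled) and the columns of the transition matrix come from the percentages stated in the exercise; iterating the chain approximates the steady-state vector.

```python
import numpy as np

# Columns are "enrolled" and "unenrolled"; entries are the fractions
# moving to each state the following year (80% re-enroll, 30% newly enroll).
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# The steady-state vector is the eigenvector for lambda = 1 scaled so its
# entries sum to 1; iterating the chain converges to that same vector.
x = np.array([0.5, 0.5])
for _ in range(100):
    x = A @ x

print(x)  # approximately [0.6, 0.4]: 60% enrolled in the long run
```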

- The city of Mawtookit maintains a constant population of 300,000 people from year to year. A political science study estimated that there were 150,000 Independents, 90,000 Democrats, and 60,000 Republicans in the town. It was also estimated that each year 20 percent of the Independents become Democrats and 10 percent become Republicans. Similarly, 20 percent of the Democrats become Independents and 10 percent become Republicans, while 10 percent of the Republicans defect to the Democrats and 10 percent become Independents each year. Let

**x** =
[ 150,000 ]
[  90,000 ]
[  60,000 ]

and let **x**(1) be a vector representing the number of people in each group after one year.

**(a)** Find a matrix *A* such that *A***x** = **x**(1).

**(b)** Show that *λ*1 = 1.0, *λ*2 = 0.5, and *λ*3 = 0.7 are the eigenvalues of *A*, and factor *A* into a product *XDX*−1, where *D* is diagonal.

**(c)** Which group will dominate in the long run? Justify your answer by computing lim (n → ∞) *A*n**x**.
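Parts (b) and (c) can be checked numerically. A sketch, assuming the ordering (Independents, Democrats, Republicans) for both the rows/columns of *A* and the starting vector:

```python
import numpy as np

# Columns give the fractions of each group moving to (I, D, R) per year,
# built from the percentages in the exercise.
A = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])
x = np.array([150_000.0, 90_000.0, 60_000.0])

print(sorted(np.linalg.eigvals(A)))        # approx [0.5, 0.7, 1.0], as in (b)
print(np.linalg.matrix_power(A, 200) @ x)  # approx 100,000 in each group
```

Since *A*n**x** approaches a vector with all three groups equal, no single group dominates in the long run.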

- Let

*A* =
[ 1/2  1/3  1/5 ]
[ 1/4  1/3  2/5 ]
[ 1/4  1/3  2/5 ]

be a transition matrix for a Markov process.

**(a)** Compute det(*A*) and trace(*A*) and make use of those values to determine the eigenvalues of *A*.

**(b)** Explain why the Markov process must converge to a steady-state vector.

**(c)** Show that **y** = (16, 15, 15)T is an eigenvector of *A*. How is the steady-state vector related to **y**?

- Let *A* be a 3 × 2 matrix whose column vectors **a**1 and **a**2 are both probability vectors. Show that if **p** is a probability vector in R2 and **y** = *A***p**, then **y** is a probability vector in R3.

- Generalize the result from Exercise 24. Show that if *A* is an *m* × *n* matrix whose column vectors are all probability vectors and **p** is a probability vector in R*n*, then the vector **y** = *A***p** will be a probability vector in R*m*.

- Consider a Web network consisting of only four sites that are linked together as shown in the accompanying diagram. If the Google PageRank algorithm is used to rank these pages, determine the transition matrix *A*. Assume that the Web surfer will follow a link on the current page 85 percent of the time.

- Let *A* be an *n* × *n* stochastic matrix and let **e** be the vector in R*n* whose entries are all equal to 1. Show that **e** is an eigenvector of *A*T. Explain why a stochastic matrix must have *λ* = 1 as an eigenvalue.

- The transition matrix in Example 5 has the property that both its rows and its columns add up to 1. In general, a matrix *A* is said to be *doubly stochastic* if both *A* and *A*T are stochastic. Let *A* be an *n* × *n* doubly stochastic matrix whose eigenvalues satisfy

*λ*1 = 1 and |*λ*j| < 1 for *j* = 2, 3, . . . , *n*

Show that if **e** is the vector in R*n* whose entries are all equal to 1, then the Markov chain will converge to the steady-state vector **x** = (1/*n*)**e** for any starting vector **x**0. Thus, for a doubly stochastic transition matrix, the steady-state vector will assign equal probabilities to all possible outcomes.

- Let *A* be the PageRank transition matrix and let **x**k be a vector in the Markov chain with starting probability vector **x**0. Since *n* is very large, the direct multiplication **x**k+1 = *A***x**k is computationally intensive. However, the computation can be simplified dramatically if we take advantage of the structured components of *A* given in equation (5). Because *M* is sparse, the multiplication **w**k = *M***x**k is computationally much simpler. Show that if we set

**b** = ((1 − *p*)/*n*)**e**

then

*E***x**k = **e** and **x**k+1 = *p***w**k + **b**

where *M*, *E*, **e**, and *p* are as defined in equation (5).
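The update **x**k+1 = *p***w**k + **b** can be sketched on a toy chain. The link matrix `M` below is a hypothetical stand-in for the *M* of equation (5) (four pages, column-stochastic, with the last page linking to all pages equally); the point is that the iteration never forms the dense matrix *A*.

```python
import numpy as np

p, n = 0.85, 4
# Hypothetical sparse link matrix M (each column sums to 1).
M = np.array([[0.0, 0.5, 0.5, 0.25],
              [0.5, 0.0, 0.5, 0.25],
              [0.5, 0.5, 0.0, 0.25],
              [0.0, 0.0, 0.0, 0.25]])
b = (1 - p) / n * np.ones(n)

x = np.ones(n) / n          # starting probability vector x_0
for _ in range(100):
    x = p * (M @ x) + b     # x_{k+1} = p*w_k + b, with w_k = M x_k

print(x)  # the PageRank vector; its entries still sum to 1
```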

- Use the definition of the matrix exponential to compute *e*A for each of the following matrices:

**(a)** *A* =
[  1   1 ]
[ −1  −1 ]

**(b)** *A* =
[ 1  1 ]
[ 0  1 ]

**(c)** *A* =
[ 1  0  −1 ]
[ 0  1   0 ]
[ 0  0   1 ]
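For matrix (a) the definition is easy to apply by hand, since *A*2 = *O* and the power series terminates. A minimal numerical check, summing partial terms of the series:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, -1.0]])

# e^A = I + A + A^2/2! + ...; here A^2 = O, so the series stops
# after two terms and e^A = I + A.
expA = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 20):
    expA += term
    term = term @ A / k   # next term A^k / k!

print(expA)  # [[2, 1], [-1, 0]]
```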

- Compute *e*A for each of the following matrices:

**(a)** *A* =
[ −2  −1 ]
[  6   3 ]

**(b)** *A* =
[  3   4 ]
[ −2  −3 ]

**(c)** *A* =
[  1   1   1 ]
[ −1  −1  −1 ]
[  1   1   1 ]

- In each of the following, solve the initial value problem **Y**′ = *A***Y**, **Y**(0) = **Y**0 by computing *e*tA**Y**0:

**(a)** *A* =
[ 1  −2 ]
[ 0  −1 ]
, **Y**0 =
[ 1 ]
[ 1 ]

**(b)** *A* =
[  2   3 ]
[ −1  −2 ]
, **Y**0 =
[ −4 ]
[  2 ]

**(c)** *A* =
[ 1  1   1 ]
[ 0  0   1 ]
[ 0  0  −1 ]
, **Y**0 =
[ 1 ]
[ 1 ]
[ 1 ]

**(d)** *A* =
[  1   1   1 ]
[  1   0   1 ]
[ −1  −1  −1 ]
, **Y**0 =
[  1 ]
[  1 ]
[ −1 ]

- Let *λ* be an eigenvalue of an *n* × *n* matrix *A* and let **x** be an eigenvector belonging to *λ*. Show that *e*λ is an eigenvalue of *e*A and **x** is an eigenvector of *e*A belonging to *e*λ.

- Show that *e*A is nonsingular for any diagonalizable matrix *A*.

- Let *A* be a diagonalizable matrix with characteristic polynomial

*p*(*λ*) = *a*1*λ*n + *a*2*λ*n−1 + · · · + *a*n+1

**(a)** Show that if *D* is a diagonal matrix whose diagonal entries are the eigenvalues of *A*, then

*p*(*D*) = *a*1*D*n + *a*2*D*n−1 + · · · + *a*n+1*I* = *O*

**(b)** Show that *p*(*A*) = *O*.

**(c)** Show that if *a*n+1 ≠ 0, then *A* is nonsingular and *A*−1 = *q*(*A*) for some polynomial *q* of degree less than *n*.

SECTION 6.4 EXERCISES

- For each of the following pairs of vectors **z** and **w**, compute (i) ‖**z**‖, (ii) ‖**w**‖, (iii) ⟨**z**, **w**⟩, and (iv) ⟨**w**, **z**⟩:

**(a)** **z** =
[ 4 + 2*i* ]
[ 4*i* ]
, **w** =
[ −2 ]
[ 2 + *i* ]

**(b)** **z** =
[ 1 + *i* ]
[ 2*i* ]
[ 3 − *i* ]
, **w** =
[ 2 − 4*i* ]
[ 5 ]
[ 2*i* ]

- Let

**z**1 =
[ (1 + *i*)/2 ]
[ (1 − *i*)/2 ]
and **z**2 =
[ *i*/√2 ]
[ −1/√2 ]

**(a)** Show that {**z**1, **z**2} is an orthonormal set in C2.

**(b)** Write the vector **z** = (2 + 4*i*, −2*i*)T as a linear combination of **z**1 and **z**2.

- Let {**u**1, **u**2} be an orthonormal basis for C2, and let **z** = (4 + 2*i*)**u**1 + (6 − 5*i*)**u**2.

**(a)** What are the values of **u**1H**z**, **z**H**u**1, **u**2H**z**, and **z**H**u**2?

**(b)** Determine the value of ‖**z**‖.

- Which of the matrices that follow are Hermitian? Normal?

**(a)**
[ 1 − *i*  2 ]
[ 2       3 ]

**(b)**
[ 1        2 − *i* ]
[ 2 + *i*  −1 ]

**(c)**
[ 1/√2  −1/√2 ]
[ 1/√2   1/√2 ]

**(d)**
[ (1/√2)*i*   1/√2 ]
[ 1/√2   −(1/√2)*i* ]

**(e)**
[  0       *i*       1 ]
[ *i*       0   −2 + *i* ]
[ −1   2 + *i*       0 ]

**(f)**
[ 3        1 + *i*  *i* ]
[ 1 − *i*  1        3 ]
[ −*i*     3        1 ]

- Find an orthogonal or unitary diagonalizing matrix for each of the following:

**(a)**
[ 2  1 ]
[ 1  2 ]

**(b)**
[ 1        3 + *i* ]
[ 3 − *i*  4 ]

**(c)**
[  2   *i*  0 ]
[ −*i*  2   0 ]
[  0   0   2 ]

**(d)**
[ 2   1   1 ]
[ 1   3  −2 ]
[ 1  −2   3 ]

**(e)**
[ 0  0  1 ]
[ 0  1  0 ]
[ 1  0  0 ]

**(f)**
[ 1  1  1 ]
[ 1  1  1 ]
[ 1  1  1 ]

**(g)**
[  4   2  −2 ]
[  2   1  −1 ]
[ −2  −1   1 ]

- Show that the diagonal entries of a Hermitian matrix must be real.

- Let *A* be an *n* × *n* Hermitian matrix and let **x** be a vector in C*n*. Show that if *c* = **x**H*A***x**, then *c* is real.

- Let *A* be a Hermitian matrix and let *B* = *iA*. Show that *B* is skew Hermitian.

- Let *A* and *C* be matrices in C*m*×*n* and let *B* ∈ C*n*×*r*. Prove each of the following rules:

**(a)** (*A*H)H = *A*

**(b)** (*αA* + *βC*)H = ᾱ*A*H + β̄*C*H

**(c)** (*AB*)H = *B*H*A*H

- Let *A* and *B* be Hermitian matrices. Answer *true* or *false* for each of the statements that follow. In each case, explain or prove your answer.

**(a)** The eigenvalues of *AB* are all real.

**(b)** The eigenvalues of *ABA* are all real.

- Show that

⟨**z**, **w**⟩ = **w**H**z**

defines an inner product on C*n*.

- Let **x**, **y**, and **z** be vectors in C*n* and let *α* and *β* be complex scalars. Show that

⟨**z**, *α***x** + *β***y**⟩ = ᾱ⟨**z**, **x**⟩ + β̄⟨**z**, **y**⟩

- Let {**u**1, . . . , **u**n} be an orthonormal basis for a complex inner product space *V*, and let

**z** = *a*1**u**1 + *a*2**u**2 + · · · + *a*n**u**n
**w** = *b*1**u**1 + *b*2**u**2 + · · · + *b*n**u**n

Show that

⟨**z**, **w**⟩ = Σ (i = 1 to n) b̄i*a*i

- Given that

*A* =
[ 4    0   0 ]
[ 0    1   *i* ]
[ 0  −*i*  1 ]

find a matrix *B* such that *B*H*B* = *A*.

- Let *U* be a unitary matrix. Prove that

**(a)** *U* is normal.

**(b)** ‖*U***x**‖ = ‖**x**‖ for all **x** ∈ C*n*.

**(c)** if *λ* is an eigenvalue of *U*, then |*λ*| = 1.

- Let **u** be a unit vector in C*n* and define *U* = *I* − 2**uu**H. Show that *U* is both unitary and Hermitian and, consequently, is its own inverse.

- Show that if a matrix *U* is both unitary and Hermitian, then any eigenvalue of *U* must equal either 1 or −1.

- Let *A* be a 2 × 2 matrix with Schur decomposition *UTU*H and suppose that *t*12 ≠ 0. Show that

**(a)** the eigenvalues of *A* are *λ*1 = *t*11 and *λ*2 = *t*22.

**(b)** **u**1 is an eigenvector of *A* belonging to *λ*1 = *t*11.

**(c)** **u**2 is not an eigenvector of *A* belonging to *λ*2 = *t*22.

- Let *A* be a 5 × 5 matrix with real entries. Let *A* = *QTQ*T be the real Schur decomposition of *A*, where *T* is a block matrix of the form given in equation (2). What are the possible block structures for *T* in each of the following cases?

**(a)** All of the eigenvalues of *A* are real.

**(b)** *A* has three real eigenvalues and two complex eigenvalues.

**(c)** *A* has one real eigenvalue and four complex eigenvalues.

- Let *A* be an *n* × *n* matrix with Schur decomposition *UTU*H. Show that if the diagonal entries of *T* are all distinct, then there is an upper triangular matrix *R* such that *X* = *UR* diagonalizes *A*.

- Show that *M* = *A* + *iB* (where *A* and *B* are real matrices) is skew Hermitian if and only if *A* is skew symmetric and *B* is symmetric.

- Show that if *A* is skew Hermitian and *λ* is an eigenvalue of *A*, then *λ* is purely imaginary (i.e., *λ* = *bi*, where *b* is real).

- Show that if *A* is a normal matrix, then each of the following matrices must also be normal.

**(a)** *A*H  **(b)** *I* + *A*  **(c)** *A*2

- Let *A* be a real 2 × 2 matrix with the property that *a*21*a*12 > 0, and let

*r* = √(*a*21/*a*12) and *S* =
[ *r*  0 ]
[ 0   1 ]

Compute *B* = *SAS*−1. What can you conclude about the eigenvalues and eigenvectors of *B*? What can you conclude about the eigenvalues and eigenvectors of *A*? Explain.

- Let *p*(*x*) = −*x*3 + *cx*2 + (*c* + 3)*x* + 1, where *c* is a real number. Let

*C* =
[ *c*  *c* + 3  1 ]
[ 1    0        0 ]
[ 0    1        0 ]

and let

*A* =
[ −1   2  −*c* − 3 ]
[  1  −1   *c* + 2 ]
[ −1   1  −*c* − 1 ]

**(a)** Compute *A*−1*CA*.

**(b)** Show that *C* is the companion matrix of *p*(*x*) and use the result from part (a) to prove that *p*(*x*) will have only real roots regardless of the value of *c*.

- Let *A* be a Hermitian matrix with eigenvalues *λ*1, . . . , *λ*n and orthonormal eigenvectors **u**1, . . . , **u**n. Show that

*A* = *λ*1**u**1**u**1H + *λ*2**u**2**u**2H + · · · + *λ*n**u**n**u**nH

- Let

*A* =
[ 0  1 ]
[ 1  0 ]

Write *A* as a sum *λ*1**u**1**u**1T + *λ*2**u**2**u**2T, where *λ*1 and *λ*2 are eigenvalues and **u**1 and **u**2 are orthonormal eigenvectors.

- Let *A* be a Hermitian matrix with eigenvalues *λ*1 ≥ *λ*2 ≥ · · · ≥ *λ*n and orthonormal eigenvectors **u**1, . . . , **u**n. For any nonzero vector **x** in R*n*, the *Rayleigh quotient* *ρ*(**x**) is defined by

*ρ*(**x**) = ⟨*A***x**, **x**⟩ / ⟨**x**, **x**⟩ = (**x**H*A***x**) / (**x**H**x**)

**(a)** If **x** = *c*1**u**1 + · · · + *c*n**u**n, show that

*ρ*(**x**) = (|*c*1|2*λ*1 + |*c*2|2*λ*2 + · · · + |*c*n|2*λ*n) / ‖**c**‖2

**(b)** Show that *λ*n ≤ *ρ*(**x**) ≤ *λ*1

**(c)** Show that

max (**x** ≠ **0**) *ρ*(**x**) = *λ*1 and min (**x** ≠ **0**) *ρ*(**x**) = *λ*n
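The bounds in part (b) are easy to observe numerically. A sketch using a small real symmetric (hence Hermitian) matrix, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam = np.linalg.eigvalsh(A)      # eigenvalues in increasing order: [1, 3]

def rayleigh(A, x):
    """Rayleigh quotient rho(x) = (x^T A x)/(x^T x) for real x."""
    return (x @ A @ x) / (x @ x)

# lambda_n <= rho(x) <= lambda_1 for every nonzero x.
for _ in range(1000):
    x = rng.standard_normal(2)
    r = rayleigh(A, x)
    assert lam[0] - 1e-12 <= r <= lam[-1] + 1e-12

print("bounds hold; extremes:", lam[0], lam[-1])
```

The extremes in part (c) are attained at the corresponding eigenvectors, e.g. `rayleigh(A, np.array([1.0, 1.0]))` returns *λ*1 = 3.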

- Given *A* ∈ R*m*×*m*, *B* ∈ R*n*×*n*, *C* ∈ R*m*×*n*, the equation

*AX* − *XB* = *C*  (3)

is known as *Sylvester's equation*. An *m* × *n* matrix *X* is said to be a solution if it satisfies (3).

**(a)** Show that if *B* has Schur decomposition *B* = *UTU*H, then Sylvester's equation can be transformed into an equation of the form *AY* − *YT* = *G*, where *Y* = *XU* and *G* = *CU*.

**(b)** Show that

(*A* − *t*11*I*)**y**1 = **g**1
(*A* − *t*jj*I*)**y**j = **g**j + Σ (i = 1 to j − 1) *t*ij**y**i,  *j* = 2, . . . , *n*

**(c)** Show that if *A* and *B* have no common eigenvalues, then Sylvester's equation has a solution.

SECTION 6.5 EXERCISES

- Show that *A* and *A*T have the same nonzero singular values. How are their singular value decompositions related?

- Use the method of Example 1 to find the singular value decomposition of each of the following matrices:

**(a)**
[ 1  1 ]
[ 2  2 ]

**(b)**
[ 2  −2 ]
[ 1   2 ]

**(c)**
[ 1  3 ]
[ 3  1 ]
[ 0  0 ]
[ 0  0 ]

**(d)**
[ 2  0  0 ]
[ 0  2  1 ]
[ 0  1  2 ]
[ 0  0  0 ]

- For each of the matrices in Exercise 2:

**(a)** determine the rank.

**(b)** find the closest (with respect to the Frobenius norm) matrix of rank 1.
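Both parts can be checked numerically: truncating the SVD after the first singular value gives the closest rank-1 matrix in the Frobenius norm, and the approximation error equals the discarded singular value. A sketch for matrix (b) of Exercise 2:

```python
import numpy as np

# Matrix (b) from Exercise 2; its singular values work out to 3 and 2.
A = np.array([[2.0, -2.0], [1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)

# Keep only the largest singular value: the best rank-1 approximation.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])

print(np.linalg.matrix_rank(A1))             # 1
print(np.linalg.norm(A - A1, 'fro'), s[1])   # both equal sigma_2 = 2
```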

- Let

*A* =
[ −2   8  20 ]
[ 14  19  10 ]
[  2  −2   1 ]
=
[ 3/5  −4/5  0 ]
[ 4/5   3/5  0 ]
[  0     0   1 ]
[ 30   0  0 ]
[  0  15  0 ]
[  0   0  3 ]
[ 1/3   2/3   2/3 ]
[ 2/3   1/3  −2/3 ]
[ 2/3  −2/3   1/3 ]

Find the closest (with respect to the Frobenius norm) matrices of rank 1 and rank 2 to *A*.

- The matrix

*A* =
[ 2  5  4 ]
[ 6  3  0 ]
[ 6  3  0 ]
[ 2  5  4 ]

has singular value decomposition *U*Σ*V*T, where

*U* =
[ 1/2   1/2   1/2   1/2 ]
[ 1/2  −1/2  −1/2   1/2 ]
[ 1/2  −1/2   1/2  −1/2 ]
[ 1/2   1/2  −1/2  −1/2 ]

Σ =
[ 12  0  0 ]
[  0  6  0 ]
[  0  0  0 ]
[  0  0  0 ]

*V*T =
[  2/3   2/3  1/3 ]
[ −2/3   1/3  2/3 ]
[  1/3  −2/3  2/3 ]

**(a) **Use the singular value decomposition to find orthonormal bases for *R*(*A**T*) and *N*(*A*).

**(b) **Use the singular value decomposition to find orthonormal bases for *R*(*A*) and *N*(*A**T *).

- Prove that if *A* is a symmetric matrix with eigenvalues *λ*1, *λ*2, . . . , *λ*n, then the singular values of *A* are |*λ*1|, |*λ*2|, . . . , |*λ*n|.

- Let *A* be an *m* × *n* matrix with singular value decomposition *U*Σ*V*T, and suppose that *A* has rank *r*, where *r* < *n*. Show that {**v**1, . . . , **v**r} is an orthonormal basis for *R*(*A*T).

- Let *A* be an *n* × *n* matrix. Show that *A*T*A* and *AA*T are similar.

- Let *A* be an *n* × *n* matrix with singular values *σ*1, *σ*2, . . . , *σ*n and eigenvalues *λ*1, *λ*2, . . . , *λ*n. Show that

|*λ*1*λ*2 · · · *λ*n| = *σ*1*σ*2 · · · *σ*n

- Let *A* be an *n* × *n* matrix with singular value decomposition *U*Σ*V*T and let

*B* =
[ *O*  *A*T ]
[ *A*  *O* ]

Show that if

**x**i =
[ **v**i ]
[ **u**i ]
, **y**i =
[ −**v**i ]
[  **u**i ]
, *i* = 1, . . . , *n*

then the **x**i's and **y**i's are eigenvectors of *B*. How do the eigenvalues of *B* relate to the singular values of *A*?

- Show that if *σ* is a singular value of *A*, then there exists a nonzero vector **x** such that

*σ* = ‖*A***x**‖2 / ‖**x**‖2

- Let *A* be an *m* × *n* matrix of rank *n* with singular value decomposition *U*Σ*V*T. Let Σ+ denote the *n* × *m* matrix

[ diag(1/*σ*1, 1/*σ*2, . . . , 1/*σ*n)  *O* ]

and define *A*+ = *V*Σ+*U*T. Show that **x̂** = *A*+**b** satisfies the normal equations *A*T*A***x** = *A*T**b**.

- Let *A*+ be defined as in Exercise 12 and let *P* = *AA*+. Show that *P*2 = *P* and *P*T = *P*.

SECTION 6.6 EXERCISES

- Find the matrix associated with each of the following quadratic forms:

**(a)** 3*x*2 − 5*xy* + *y*2

**(b)** 2*x*2 + 3*y*2 + *z*2 + *xy* − 2*xz* + 3*yz*

**(c)** *x*2 + 2*y*2 + *z*2 + *xy* − 2*xz* + 3*yz*

- Reorder the eigenvalues in Example 2 so that *λ*1 = 4 and *λ*2 = 2 and rework the example. In what quadrants will the positive *x*′ and *y*′ axes lie? Sketch the graph and compare it to Figure 6.6.3.

- In each of the following, (i) find a suitable change of coordinates (i.e., a rotation and/or a translation) so that the resulting conic section is in standard form, (ii) identify the curve, and (iii) sketch the graph.

**(a)** *x*2 + *xy* + *y*2 − 6 = 0

**(b)** 3*x*2 + 8*xy* + 3*y*2 + 28 = 0

**(c)** −3*x*2 + 6*xy* + 5*y*2 − 24 = 0

**(d)** *x*2 + 2*xy* + *y*2 + 3*x* + *y* − 1 = 0

- Let *λ*1 and *λ*2 be the eigenvalues of

*A* =
[ *a*  *b* ]
[ *b*  *c* ]

What kind of conic section will the equation

*ax*2 + 2*bxy* + *cy*2 = 1

represent if *λ*1*λ*2 < 0? Explain.

- Let *A* be a symmetric 2 × 2 matrix and let *α* be a nonzero scalar for which the equation **x**T*A***x** = *α* is consistent. Show that the corresponding conic section will be nondegenerate if and only if *A* is nonsingular.

- Which of the matrices that follow are positive definite? Negative definite? Indefinite?

**(a)**
[ 3  2 ]
[ 2  2 ]

**(b)**
[ 3  4 ]
[ 4  1 ]

**(c)**
[ 3   √2 ]
[ √2   4 ]

**(d)**
[ −2   0   1 ]
[  0  −1   0 ]
[  1   0  −2 ]

**(e)**
[ 1  2  1 ]
[ 2  1  1 ]
[ 1  1  2 ]

**(f)**
[ 2  0  0 ]
[ 0  5  3 ]
[ 0  3  5 ]

- For each of the following functions, determine whether the given stationary point corresponds to a local minimum, local maximum, or saddle point:

**(a)** *f*(*x*, *y*) = 3*x*2 − *xy* + *y*2 (0, 0)

**(b)** *f*(*x*, *y*) = sin *x* + *y*3 + 3*xy* + 2*x* − 3*y* (0, −1)

**(c)** *f*(*x*, *y*) = (1/3)*x*3 − (1/3)*y*3 + 3*xy* + 2*x* − 2*y* (1, −1)

**(d)** *f*(*x*, *y*) = *y*/*x*2 + *x*/*y*2 + *xy* (1, 1)

**(e)** *f*(*x*, *y*, *z*) = *x*3 + *xyz* + *y*2 − 3*x* (1, 0, 0)

**(f)** *f*(*x*, *y*, *z*) = −(1/4)(*x*−4 + *y*−4 + *z*−4) + *yz* − *x* − 2*y* − 2*z* (1, 1, 1)

- Show that if *A* is symmetric positive definite, then det(*A*) > 0. Give an example of a 2 × 2 matrix with positive determinant that is not positive definite.

- Show that if *A* is a symmetric positive definite matrix, then *A* is nonsingular and *A*−1 is also positive definite.

- Let *A* be a singular *n* × *n* matrix. Show that *A*T*A* is positive semidefinite, but not positive definite.

- Let *A* be a symmetric *n* × *n* matrix with eigenvalues *λ*1, . . . , *λ*n. Show that there exists an orthonormal set of vectors {**x**1, . . . , **x**n} such that

**x**T*A***x** = Σ (i = 1 to n) *λ*i (**x**T**x**i)2

for each **x** ∈ R*n*.

- Let *A* be a symmetric positive definite matrix. Show that the diagonal elements of *A* must all be positive.

- Let *A* be a symmetric positive definite *n* × *n* matrix and let *S* be a nonsingular *n* × *n* matrix. Show that *S*T*AS* is positive definite.

- Let *A* be a symmetric positive definite *n* × *n* matrix. Show that *A* can be factored into a product *QQ*T, where *Q* is an *n* × *n* matrix whose columns are mutually orthogonal. [*Hint*: See Corollary 6.4.7.]

SECTION 6.7 EXERCISES

- For each of the following matrices, compute the determinants of all the leading principal submatrices and use them to determine whether the matrix is positive definite:

**(a)**
[  2  −1 ]
[ −1   2 ]

**(b)**
[ 3  4 ]
[ 4  2 ]

**(c)**
[  6  4  −2 ]
[  4  5   3 ]
[ −2  3   6 ]

**(d)**
[ 4   2   1 ]
[ 2   3  −2 ]
[ 1  −2   5 ]
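The leading-principal-minor test is straightforward to automate. A sketch, assuming NumPy; `leading_minors` is a helper name introduced here for illustration:

```python
import numpy as np

def leading_minors(A):
    """Determinants of the leading principal submatrices A_1, ..., A_n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

# Matrix (a): both minors positive, so the matrix is positive definite.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(leading_minors(A))  # [2.0, 3.0]

# Matrix (b): det(A_2) = 6 - 16 < 0, so it is not positive definite.
B = np.array([[3.0, 4.0], [4.0, 2.0]])
print(leading_minors(B))  # [3.0, -10.0]
```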

- Let *A* be a 3 × 3 symmetric positive definite matrix and suppose that det(*A*1) = 3, det(*A*2) = 6, and det(*A*3) = 8. What would the pivot elements be in the reduction of *A* to triangular form, assuming that only row operation III is used in the reduction process?

- Let

*A* =
[  2  −1   0   0 ]
[ −1   2  −1   0 ]
[  0  −1   2  −1 ]
[  0   0  −1   2 ]

**(a)** Compute the *LU* factorization of *A*.

**(b)** Explain why *A* must be positive definite.

- For each of the following, factor the given matrix into a product *LDL*T, where *L* is lower triangular with 1's on the diagonal and *D* is a diagonal matrix:

**(a)**
[ 4   2 ]
[ 2  10 ]

**(b)**
[  9  −3 ]
[ −3   2 ]

**(c)**
[ 16  8  4 ]
[  8  6  0 ]
[  4  0  7 ]

**(d)**
[  9  3  −6 ]
[  3  4   1 ]
[ −6  1   9 ]

- Find the Cholesky decomposition *LL*T for each of the matrices in Exercise 4.
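The hand computation can be verified with a library routine; `np.linalg.cholesky` returns the lower triangular factor *L* with positive diagonal, so *A* = *LL*T. A check for matrix (a) of Exercise 4:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 10.0]])
L = np.linalg.cholesky(A)

print(L)  # [[2, 0], [1, 3]]
```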

- Let *A* be an *n* × *n* symmetric positive definite matrix. For each **x**, **y** ∈ R*n*, define

⟨**x**, **y**⟩ = **x**T*A***y**

Show that ⟨ , ⟩ defines an inner product on R*n*.

- Prove each of the following:

**(a)** If *U* is a unit upper triangular matrix, then *U* is nonsingular and *U*−1 is also unit upper triangular.

**(b)** If *U*1 and *U*2 are both unit upper triangular matrices, then the product *U*1*U*2 is also a unit upper triangular matrix.

- Let *A* be a nonsingular *n* × *n* matrix, and suppose that *A* = *L*1*D*1*U*1 = *L*2*D*2*U*2, where *L*1 and *L*2 are lower triangular, *D*1 and *D*2 are diagonal, *U*1 and *U*2 are upper triangular, and *L*1, *L*2, *U*1, *U*2 all have 1's along the diagonal. Show that *L*1 = *L*2, *D*1 = *D*2, and *U*1 = *U*2. [*Hint*: *L*2−1 is lower triangular and *U*1−1 is upper triangular. Compare both sides of the equation *D*2−1*L*2−1*L*1*D*1 = *U*2*U*1−1.]

- Let *A* be a symmetric positive definite matrix with Cholesky decomposition *A* = *LL*T = *R*T*R*. Prove that the lower triangular matrix *L* (or the upper triangular matrix *R*) in the factorization is unique.

- Let *A* be an *m* × *n* matrix with rank *n*. Show that the matrix *A*T*A* is symmetric positive definite.

- Let *A* be an *m* × *n* matrix with rank *n* and let *QR* be the factorization obtained when the Gram–Schmidt process is applied to the column vectors of *A*. Show that if *A*T*A* has Cholesky factorization *R*1T*R*1, then *R*1 = *R*. Thus the upper triangular factors in the Gram–Schmidt *QR* factorization of *A* and the Cholesky decomposition of *A*T*A* are identical.

- Let *A* be a symmetric positive definite matrix and let *Q* be an orthogonal diagonalizing matrix. Use the factorization *A* = *QDQ*T to find a nonsingular matrix *B* such that *B*T*B* = *A*.

- Let *A* be a symmetric *n* × *n* matrix. Show that *e*A is symmetric and positive definite.

- Show that if *B* is a symmetric nonsingular matrix, then *B*2 is positive definite.

- Let

*A* =
[   1   −1/2 ]
[ −1/2    1  ]
and *B* =
[ 1  −1 ]
[ 0   1 ]

**(a)** Show that *A* is positive definite and that **x**T*A***x** = **x**T*B***x** for all **x** ∈ R2.

**(b)** Show that *B* is positive definite, but *B*2 is not positive definite.

- Let *A* be an *n* × *n* symmetric negative definite matrix.

**(a)** What will the sign of det(*A*) be if *n* is even? If *n* is odd?

**(b)** Show that the leading principal submatrices of *A* are negative definite.

**(c)** Show that the determinants of the leading principal submatrices of *A* alternate in sign.

- Let *A* be a symmetric positive definite *n* × *n* matrix.

**(a)** If *k* < *n*, then the leading principal submatrices *A*k and *A*k+1 are both positive definite and, consequently, have Cholesky factorizations *L*k*L*kT and *L*k+1*L*k+1T. If *A*k+1 is expressed in the form

*A*k+1 =
[ *A*k    **y**k ]
[ **y**kT  *β*k ]

where **y**k ∈ R*k* and *β*k is a scalar, show that *L*k+1 is of the form

*L*k+1 =
[ *L*k    **0** ]
[ **x**kT  *α*k ]

and determine **x**k and *α*k in terms of *L*k, **y**k, and *β*k.

**(b)** The leading principal submatrix *A*1 has Cholesky decomposition *L*1*L*1T, where *L*1 = (√*a*11). Explain how part (a) can be used to compute successively the Cholesky factorizations of *A*2, . . . , *A*n. Devise an algorithm that computes *L*2, *L*3, . . . , *L*n in a single loop. Since *A* = *A*n, the Cholesky decomposition of *A* will be *L*n*L*nT. (This algorithm is efficient in that it uses approximately half the amount of arithmetic that would generally be necessary to compute an *LU* factorization.)
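One way the single-loop algorithm of part (b) might look, assuming NumPy and using the part-(a) identities **x**k = *L*k−1**y**k (a forward-substitution solve) and *α*k = √(*β*k − **x**kT**x**k); `bordered_cholesky` is an illustrative name, and this is a sketch rather than the book's own code:

```python
import numpy as np

def bordered_cholesky(A):
    """Build L_1, L_2, ..., L_n = L in a single loop, as in part (b)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    L[0, 0] = np.sqrt(A[0, 0])              # L_1 = (sqrt(a_11))
    for k in range(1, n):
        y = A[:k, k]                        # border column y_k
        x = np.linalg.solve(L[:k, :k], y)   # forward-solve L_k x_k = y_k
        L[k, :k] = x
        L[k, k] = np.sqrt(A[k, k] - x @ x)  # alpha_k
    return L

A = np.array([[4.0, 2.0], [2.0, 10.0]])
print(bordered_cholesky(A))  # [[2, 0], [1, 3]]
```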

SECTION 6.8 EXERCISES

- Find the eigenvalues of each of the following matrices and verify that conditions (i), (ii), and (iii) of Theorem 6.8.1 hold:

**(a)**
[ 2  3 ]
[ 2  1 ]

**(b)**
[ 4  2 ]
[ 2  7 ]

**(c)**
[ 1  2  4 ]
[ 2  4  1 ]
[ 1  2  4 ]

- Find the eigenvalues of each of the following matrices and verify that conditions (i) and (ii) of Theorem 6.8.2 hold:

**(a)**
[ 2  3 ]
[ 1  0 ]

**(b)**
[ 0  2 ]
[ 2  0 ]

**(c)**
[ 0  0  8 ]
[ 1  0  0 ]
[ 0  1  0 ]

- Find the output vector **x** in the open version of the Leontief input–output model if

*A* =
[ 0.2  0.4  0.4 ]
[ 0.4  0.2  0.2 ]
[ 0.0  0.2  0.2 ]
and **d** =
[ 16,000 ]
[  8,000 ]
[ 24,000 ]
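In the open model the output vector satisfies **x** = *A***x** + **d**, so it solves (*I* − *A*)**x** = **d**. A minimal numerical check, assuming NumPy:

```python
import numpy as np

A = np.array([[0.2, 0.4, 0.4],
              [0.4, 0.2, 0.2],
              [0.0, 0.2, 0.2]])
d = np.array([16_000.0, 8_000.0, 24_000.0])

# Open model: x = Ax + d  =>  (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)

print(x)  # [70000, 56000, 44000]
```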

- Consider the closed version of the Leontief input–output model with input matrix

*A* =
[ 0.5  0.4  0.1 ]
[ 0.5  0.0  0.5 ]
[ 0.0  0.6  0.4 ]

If **x** = (*x*1, *x*2, *x*3)T is any output vector for this model, how are the coordinates *x*1, *x*2, and *x*3 related?

- Prove: If *A*m = *O* for some positive integer *m*, then *I* − *A* is nonsingular.

- Let

*A* =
[ 0   1  1 ]
[ 0  −1  1 ]
[ 0  −1  1 ]

**(a)** Compute (*I* − *A*)−1.

**(b)** Compute *A*2 and *A*3. Verify that

(*I* − *A*)−1 = *I* + *A* + *A*2

- Which of the matrices that follow are reducible? For each reducible matrix, find a permutation matrix *P* such that *PAP*T is of the form

[ *B*  *O* ]
[ *X*  *C* ]

where *B* and *C* are square matrices.

**(a)**
[ 1  1  1  0 ]
[ 1  1  1  0 ]
[ 1  1  1  1 ]
[ 1  1  1  1 ]

**(b)**
[ 1  0  1  1 ]
[ 1  1  1  1 ]
[ 1  0  1  1 ]
[ 1  0  1  1 ]

**(c)**
[ 1  0  1  0  0 ]
[ 0  1  1  1  1 ]
[ 1  0  1  0  0 ]
[ 1  1  0  1  1 ]
[ 1  1  1  1  1 ]

**(d)**
[ 1  1  1  1  1 ]
[ 1  1  0  0  1 ]
[ 1  1  1  1  1 ]
[ 1  1  0  0  1 ]
[ 1  1  0  0  1 ]

- Let *A* be a nonnegative irreducible 3 × 3 matrix whose eigenvalues satisfy *λ*1 = 2 = |*λ*2| = |*λ*3|.

Determine *λ*2 and *λ*3.

- Let

*A* =
⎡*B* *O*⎤
⎣*O* *C*⎦

where *B *and *C *are square matrices.

**(a) **If *λ *is an eigenvalue of *B *with eigenvector

**x **= (*x*1, *. . . *, *x**k*)*T *, show that *λ *is also

an eigenvalue of *A *with eigenvector ˜**x **=

(*x*1, *. . . *, *x**k*, 0, *. . . *, 0)*T *.

**(b)** If *B* and *C* are positive matrices, show that *A* has a positive real eigenvalue *r* with the property that |*λ*| < *r* for any eigenvalue *λ* ≠ *r*.

Show also that the multiplicity of *r *is at most 2

and that *r *has a nonnegative eigenvector.

**(c) **If *B *= *C*, show that the eigenvalue *r *in part (b)

has multiplicity 2 and possesses a positive

eigenvector.

- Prove that a 2 × 2 matrix *A* is reducible if and only if *a*12*a*21 = 0.

- Prove the Frobenius theorem in the case where *A* is a 2 × 2 matrix.

- We can show that, for an *n* × *n* stochastic matrix,

*λ*1 = 1 is an eigenvalue and the remaining

eigenvalues must satisfy

|*λ**j*| ≤ 1 *j *= 2, *. . . *, *n*

(See Exercise 24 of Chapter 7, Section 4.) Show

that if *A *is an *n *× *n *stochastic matrix with the property

that *A**k *is a positive matrix for some positive

integer *k*, then

|*λ**j*| *< *1 *j *= 2, *. . . *, *n*

- Let *A* be an *n* × *n* positive stochastic matrix with

dominant eigenvalue *λ*1 = 1 and linearly independent

eigenvectors **x**1, **x**2, *. . . *, **x***n*, and let **y**0 be an

initial probability vector for a Markov chain

**y**0, **y**1 = *A***y**0, **y**2 = *A***y**1, *. . .*

**(a) **Show that *λ*1 = 1 has a positive eigenvector **x**1.

**(b)** Show that ‖**y***j*‖1 = 1, *j* = 0, 1, *. . .* .

**(c) **Show that if

**y**0

= *c*1**x**1 + *c*2**x**2 +· · ·+*c**n***x***n*

then the component *c*1 in the direction of the

positive eigenvector **x**1 must be nonzero.

**(d) **Show that the state vectors **y***j *of the Markov

chain converge to a steady-state vector.

**(e)** Show that

*c*1 = 1/‖**x**1‖1

and hence the steady-state vector is independent

of the initial probability vector **y**0.

- Would the results of parts (c) and (d) in Exercise 13

be valid if the stochastic matrix *A *was not a positive

matrix? Answer this same question in the case

when *A *is a nonnegative stochastic matrix and, for

some positive integer *k*, the matrix *A**k *is positive.

Explain your answers.

- A management student received fellowship offers

from four universities and now must choose which

one to accept. The student uses the analytic hierarchy

process to decide among the universities and

bases the decision process on the following four

criteria:

(i) financial matters—tuition and scholarships

(ii) the reputation of the university

(iii) social life at the university

(iv) geography—how desirable is the location

of the university

In order to weigh the criteria the student decides

that finance and reputation are equally important

and both are 4 times as important as social life and

6 times as important as geography. The student also

rates social life twice as important as geography.

**(a) **Determine a reciprocal comparison matrix *C*

based on the given judgments of the relative

importance of the 4 criteria.

**(b) **Show that the matrix *C *is not consistent.

**(c) **Make the problem consistent by changing the

relative importance of one pair of criteria and

determine a new comparison matrix *C*1 for the

consistent problem.

**(d) **Find an eigenvector belonging to the dominant

eigenvalue of *C*1 and use it to determine a

weight vector for the decision criteria.
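One reading of the stated judgments gives the comparison matrix below (criteria ordered finance, reputation, social life, geography); the matrix, the eigenvalue-based consistency test, and the weight vector are an illustrative sketch, not the book's worked answer:

```python
import numpy as np

# Hypothetical reciprocal comparison matrix built from the judgments:
# finance ~ reputation; each is 4x social life and 6x geography;
# social life is 2x geography.
C = np.array([[1.0, 1.0, 4.0, 6.0],
              [1.0, 1.0, 4.0, 6.0],
              [0.25, 0.25, 1.0, 2.0],
              [1/6, 1/6, 0.5, 1.0]])
n = C.shape[0]

# A positive reciprocal matrix is consistent iff its dominant
# eigenvalue equals n; here it exceeds n, so C is not consistent.
vals, vecs = np.linalg.eig(C)
k = np.argmax(vals.real)
lam = vals[k].real
print(lam > n)  # -> True (inconsistent)

# Weight vector: dominant eigenvector normalized to sum to 1
w = np.abs(vecs[:, k].real)
w = w / w.sum()
print(w)
```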

**MATLAB EXERCISES**

Visualizing Eigenvalues

*MATLAB has a utility for visualizing the actions of linear*

*operators that map the plane into itself. The utility is*

*invoked using the command ***eigshow***. This command*

*opens a figure window that shows a unit vector ***x ***and*

*also A***x***, the image of ***x ***under A. The matrix A can be*

*specified as an input argument of the ***eigshow ***command*

*or selected from the menu at the top of the figure*

*window. To see the effect of the operator A on other*

*unit vectors, point your mouse to the tip of the vector ***x**

*and use it to drag the vector ***x ***around the unit circle in*

*a counterclockwise direction. As ***x ***moves, you will see*

*how its image A***x ***changes. In this exercise, we will use*

*the ***eigshow ***utility to investigate the eigenvalues and*

*eigenvectors of the matrices in the ***eigshow ***menu.*

- The top matrix on the menu is the diagonal matrix

*A* =
⎡5/4  0 ⎤
⎣ 0  3/4⎦

Initially, when you select this matrix, the vectors

**x **and *A***x **should both be aligned along the positive

*x*-axis. What information about an eigenvalue–

eigenvector pair is apparent from the initial figure

positions? Explain. Rotate **x **counterclockwise until

**x **and *A***x **are parallel, that is, until they both

lie along the same line through the origin. What

can you conclude about the second eigenvalue–

eigenvector pair? Repeat this experiment with the

second matrix. How can you determine the eigenvalues

and eigenvectors of a 2 × 2 diagonal matrix

by inspection without doing any computations?

Does this also work for 3 × 3 diagonal matrices?

Explain.

- The third matrix on the menu is just the identity

matrix *I*. How do **x **and *I***x **compare geometrically

as you rotate **x **around the unit circle? What can you

conclude about the eigenvalues and eigenvectors in

this case?

- The fourth matrix has 0’s on the diagonal and 1’s

in the off-diagonal positions. Rotate the vector **x**

around the unit circle and note when **x **and *A***x**

are parallel. On the basis of these observations,

determine the eigenvalues and the corresponding

unit eigenvectors. Check your answers by multiplying

the matrix times the eigenvector to verify that

*A***x **= *λ***x**.

- The next matrix in the **eigshow** menu looks the

same as the previous ones except that the (2, 1)

entry has been changed to −1. Rotate the vector

**x **completely around the unit circle. Are **x **and *A***x**

ever parallel? Does *A *have any real eigenvectors?

What can you conclude about the nature of the

eigenvalues and eigenvectors of this matrix?

- Investigate the next three matrices on the menu

(the sixth, seventh and eighth). In each case, try to

estimate geometrically the eigenvalues and eigenvectors

and make your guesses for the eigenvalues

consistent with the trace of the matrix. Use MATLAB

to compute the eigenvalues and eigenvectors

of the sixth matrix by setting

[*X*,*D*] = **eig**([0.25, 0.75 ; 1, 0.50 ])

The column vectors of *X *are the eigenvectors of the

matrix and the diagonal entries of *D *are the eigenvalues.

Check the eigenvalues and eigenvectors of

the other two matrices in the same way.

- Investigate the ninth matrix on the menu. What can

you conclude about the nature of its eigenvalues

and eigenvectors? Check your conclusions by computing

the eigenvalues and eigenvectors with the

**eig **command.

- Investigate the next three matrices on the menu.

You should note that, for the last two of these

matrices, the two eigenvalues are equal. For each

matrix, how are the eigenvectors related? Use

MATLAB to compute the eigenvalues and eigenvectors

of these matrices.

- The last item on the **eigshow** menu will generate

a random 2 × 2 matrix each time that it is invoked.

Try using the random matrix 10 times, and in each

case determine whether the eigenvalues are real.

What percentage of the 10 random matrices had

real eigenvalues? What is the likelihood that two

real eigenvalues of a random matrix will turn out to

be exactly equal? Explain.

Critical Loads for a Beam

- Consider the application relating to critical loads for

a beam from Section 6.1. For simplicity, we will assume

that the beam has length 1 and that its flexural

rigidity is also 1. Following the method described

in the application, if the interval [0, 1] is partitioned

into *n *subintervals, then the problem can be translated

into a matrix equation *A***y **= *λ***y**. The critical

load for the beam can be approximated by setting

*P *= *sn*2, where *s *is the smallest eigenvalue of *A*.

For *n *= 100, 200, 400, form the coefficient matrix

by setting

*D* = **diag**(**ones**(*n* − 1, 1), 1);

*A* = 2 ∗ **eye**(*n*) − *D* − *D*′;

In each case, determine the smallest eigenvalue of *A*

by setting

*s *= **min**(**eig**(*A*))

and then compute the corresponding approximation

to the critical load.
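As a cross-check on the computation, the eigenvalues of this second-difference matrix are known in closed form, 2 − 2 cos(*k*π/(*n* + 1)), so *s*·*n*² should approach the exact critical load π² as *n* grows. A NumPy sketch:

```python
import numpy as np

def critical_load(n):
    # Second-difference matrix from the exercise
    D = np.diag(np.ones(n - 1), 1)
    A = 2 * np.eye(n) - D - D.T
    s = np.min(np.linalg.eigvalsh(A))  # smallest eigenvalue
    return s * n**2                    # approximate critical load

for n in (100, 200, 400):
    print(n, critical_load(n))  # approaches pi^2 ~ 9.8696 as n grows
```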

Diagonalizable and Defective Matrices

- Construct a symmetric matrix *A* by setting

*A* = **round**(5 ∗ **rand**(6)); *A* = *A* + *A*′

Compute the eigenvalues of *A *by setting

**e **= **eig**(*A*)

**(a) **The trace of *A *can be computed with the MATLAB

command **trace**(*A*), and the sum of

the eigenvalues of *A *can be computed with

the command **sum**(**e**). Compute both of these

quantities and compare the results. Use the

command **prod**(**e**) to compute the product of

the eigenvalues of *A *and compare the result

with **det**(*A*).

**(b) **Compute the eigenvectors of *A *by setting

[*X*,*D*] = **eig**(*A*). Use MATLAB to compute

*X*−1*AX *and compare the result with *D*. Compute

also *A*−1 and *XD*−1*X*−1 and compare the

results.
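The identities behind part (a), trace(*A*) = Σ*λ*i and det(*A*) = Π*λ*i, hold for every square matrix; a NumPy version of the same experiment:

```python
import numpy as np

# Analogue of A = round(5*rand(6)); A = A + A' from the exercise
rng = np.random.default_rng(0)
A = np.round(5 * rng.random((6, 6)))
A = A + A.T  # symmetric, so all eigenvalues are real

e = np.linalg.eigvalsh(A)
trace_ok = np.isclose(np.trace(A), e.sum())
det_ok = np.isclose(np.linalg.det(A), e.prod(), rtol=1e-6, atol=1e-6)
print(trace_ok, det_ok)  # both comparisons should agree
```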

- Set

*A *= **ones**(10) + **eye**(10)

**(a) **What is the rank of *A *− *I*? Why must *λ *= 1

be an eigenvalue of multiplicity 9? Compute

the trace of *A *using the MATLAB function

**trace**. The remaining eigenvalue *λ*10 must

equal 11. Why? Explain. Compute the eigenvalues

of *A *by setting **e **= **eig**(*A*). Examine

the eigenvalues, using **format long**. How

many digits of accuracy are there in the computed

eigenvalues?

**(b)** The MATLAB routine for computing eigenvalues

is based on the QR algorithm described in

Section 6 of Chapter 7. We can also compute

the eigenvalues of *A *by computing the roots of

its characteristic polynomial. To determine the

coefficients of the characteristic polynomial of

*A*, set **p **= **poly**(*A*). The characteristic polynomial

of *A *should have integer coefficients.

Why? Explain. If we set **p **= **round**(**p**), we

should end up with the exact coefficients of the

characteristic polynomial of *A*. Compute the

roots of **p **by setting

**r **= **roots**(**p**)

and display the results, using **format long**.

How many digits of accuracy are there in the

computed results? Which method of computing

eigenvalues is more accurate, using the

**eig **function or computing the roots of the

characteristic polynomial?
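The comparison can be reproduced in NumPy: the exact eigenvalues are 1 (multiplicity 9) and 11, and the multiple root makes the characteristic-polynomial route far less accurate than a direct eigenvalue solve.

```python
import numpy as np

A = np.ones((10, 10)) + np.eye(10)
exact = np.array([1.0] * 9 + [11.0])

# Direct symmetric eigenvalue solver
e = np.sort(np.linalg.eigvalsh(A))

# Via the characteristic polynomial (its coefficients are integers, so round)
p = np.round(np.poly(A))
r = np.sort(np.roots(p).real)

err_eig = np.max(np.abs(e - exact))
err_roots = np.max(np.abs(r - exact))
print(err_eig, err_roots)  # the root-finding error is far larger
```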

- Consider the matrices

*A* =
⎡5 −3⎤
⎣3 −5⎦

and *B* =
⎡5 −3⎤
⎣3  5⎦

Note that the two matrices are the same except for

their (2, 2) entries.

**(a) **Use MATLAB to compute the eigenvalues of

*A *and *B*. Do they have the same type of eigenvalues?

The eigenvalues of the matrices are the

roots of their characteristic polynomials. Use

the following MATLAB commands to form

the polynomials and plot their graphs on the

same axis system:

**p **= **poly**(*A*);

**q **= **poly**(*B*);

**x **= −8 : 0.1 : 8;

**z **= **zeros**(**size**(**x**));

**y **= **polyval**(**p**, **x**);

**w **= **polyval**(**q**, **x**);

**plot**(**x**, **y**, **x**,**w**, **x**, **z**)

**hold on**

The **hold on **command is used so that subsequent

plots in part (b) will be added to the

current figure. How can you use the graph

to estimate the eigenvalues of *A*? What does

the graph tell you about the eigenvalues of *B*?

Explain.

**(b) **To see how the eigenvalues change as the (2, 2)

entry changes, let us construct a matrix *C *with

a variable (2, 2) entry. Set

*t* = **sym**('*t*'); *C* = [5, −3; 3, *t* − 5]

As *t *goes from 0 to 10, the (2, 2) entries

of these matrices go from −5 to 5. Use the

following MATLAB commands to plot the

graphs of the characteristic polynomials for

the intermediate matrices corresponding to *t *=

1, 2, *. . .* , 9:

**p **= **poly**(*C*)

**for ***j *= 1 : 9

**s **= **subs**(**p**, *t*, *j*);

**ezplot**(**s**, [−10, 10])

**axis**([−10, 10,−20, 220])

**pause**(2)

**end**

Which of these intermediate matrices

have real eigenvalues and which have complex

eigenvalues? The characteristic polynomial of

the symbolic matrix *C *is a quadratic polynomial

whose coefficients are functions of *t*.

To find exactly where the eigenvalues change

from real to complex, write the discriminant of

the quadratic as a function of *t *and then find

its roots. One root should be in the interval

(0, 10). Plug that value of *t *back into the matrix

*C *and determine the eigenvalues of the matrix.

Explain how these results correspond to your

graph. Solve for the eigenvectors by hand. Is

the matrix diagonalizable?

- Set

*B *= **toeplitz**(0 : −1 : −3, 0 : 3)

The matrix *B *is not symmetric and hence it is not

guaranteed to be diagonalizable. Use MATLAB to

verify that the rank of *B *equals 2. Explain why 0

must be an eigenvalue of *B *and the corresponding

eigenspace must have dimension 2. Set [*X*,*D*] =

**eig**(*B*). Compute *X*−1*BX *and compare the result

with *D*. Compute also *XD*5*X*−1 and compare the

result with *B*5.

- Set

*C *= **triu**(**ones**(4), 1) + **diag**([1,−1],−2)

and

[*X*,*D*] = **eig**(*C*)

Compute *X*−1*CX *and compare the result with *D*.

Is *C *diagonalizable? Compute the rank of *X *and

the condition number of *X*. If the condition number

of *X *is large, the computed values for the eigenvalues

may not be accurate. Compute the reduced

row echelon form of *C*. Explain why 0 must be an

eigenvalue of *C *and the corresponding eigenspace

must have dimension 1. Use MATLAB to compute

*C*4. It should equal the zero matrix. Given that

*C*4 = *O*, what can you conclude about the actual

values of the other three eigenvalues of *C*? Explain.

Is *C *defective? Explain.

- Construct a defective matrix by setting

*A *= **ones**(6); *A *= *A*−**tril**(*A*)−**triu**(*A*, 2)

It is easily seen that *λ *= 0 is the only eigenvalue of

*A *and that its eigenspace is spanned by **e**1. Verify

that this is indeed the case by using MATLAB to

compute the eigenvalues and eigenvectors of *A*. Examine

the eigenvectors using **format long**. Are

the computed eigenvectors multiples of **e**1? Now

perform a similarity transformation on *A*. Set

*Q* = **orth**(**rand**(6)); and *B* = *Q*′ ∗ *A* ∗ *Q*

If the computations had been done in exact arithmetic,

the matrix *B *would be similar to *A *and

hence defective. Use MATLAB to compute the eigenvalues

of *B *and a matrix *X *consisting of the

eigenvectors of *B*. Determine the rank of *X*. Is the

computed matrix *B *defective? Because of rounding

error, a more reasonable question to ask is whether

the computed matrix *B *is close to being defective

(i.e., are the column vectors of *X *close to being

linearly dependent?). To answer this question, use

MATLAB to compute **rcond**(*X*), the reciprocal of

the condition number of *X*. A value of **rcond **close

to zero indicates that *X *is nearly rank deficient.

- Generate a matrix *A* by setting

*B *= [−1, −1; 1, 1 ],

*A *= [**zeros**(2), **eye**(2); **eye**(2), *B*]

**(a) **The matrix *A *should have eigenvalues *λ*1 = 1

and *λ*2 = −1. Use MATLAB to verify that

these are the correct eigenvalues by computing

the reduced row echelon forms of *A *− *I*

and *A *+ *I*. What are the dimensions of the

eigenspaces of *λ*1 and *λ*2?

**(b) **It is easily seen that **trace**(*A*) = 0 and

**det**(*A*) = 1. Verify these results in MATLAB.

Use the values of the trace and determinant to

prove that 1 and −1 are actually both double

eigenvalues. Is *A *defective? Explain.

**(c) **Set **e **= **eig**(*A*) and examine the eigenvalues

using **format long**. How many digits of accuracy

are there in the computed eigenvalues?

Set [*X*, *D*] = **eig**(*A*) and compute the condition

number of *X*. The log of the condition

number gives an estimate of how many digits

of accuracy are lost in the computation of the

eigenvalues of *A*.

**(d) **Compute the rank of *X*. Are the computed

eigenvectors linearly independent? Use MATLAB

to compute *X*−1*AX*. Does the computed

matrix *X *diagonalize *A*?

Application: Sex-Linked Genes

- Suppose that 10,000 men and 10,000 women settle

on an island in the Pacific that has been opened to

development. Suppose also that a medical study of

the settlers finds that 200 of the men are color blind

and only 9 of the women are color blind. Let *x*(1)

denote the proportion of genes for color blindness

in the male population and let *x*(2) be the proportion

for the female population. Assume that *x*(1)

is equal to the proportion of color-blind males and

that *x*(2)^2 is equal to the proportion of color-blind

females. Determine *x*(1) and *x*(2) and enter them

in MATLAB as a column vector **x**. Enter also the

matrix *A *from Application 3 of Section 6.3. Set

MATLAB to **format long**, and use the matrix

*A *to compute the proportions of genes for color

blindness for each sex in generations 5, 10, 20, and beyond. What are the limiting percentages of genes for

color blindness for this population? In the long run,

what percentage of males and what percentage of

females will be color blind?
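A sketch of the long-run computation, under the assumption that the matrix of Application 3 is the one below (a male inherits his single X-linked gene from his mother; a female averages her parents' proportions) — check this against the text before relying on it. With **x** = (0.02, 0.03)^T the quantity *x*(1) + 2*x*(2) is preserved from generation to generation, so both proportions converge to (0.02 + 2 · 0.03)/3 ≈ 0.0267:

```python
import numpy as np

# Assumed sex-linked gene transition matrix (verify against Application 3)
A = np.array([[0.0, 1.0],
              [0.5, 0.5]])

# 200/10000 color-blind men gives x(1) = 0.02;
# 9/10000 color-blind women gives x(2) = sqrt(0.0009) = 0.03
x = np.array([0.02, 0.03])

for _ in range(50):  # iterate many generations
    x = A @ x
print(x)  # both entries approach (0.02 + 2*0.03)/3
```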

Similarity

- Set

*S* = **round**(10 ∗ **rand**(5));

*S* = **triu**(*S*, 1) + **eye**(5)

*S* = *S*′ ∗ *S*

*T* = **inv**(*S*)

**(a) **The exact inverse of *S *should have integer

entries. Why? Explain. Check the entries of

*T *using **format long**. Round the entries

of *T *to the nearest integer by setting *T *=

**round**(*T*). Compute *T *∗ *S *and compare with

**eye**(5).

**(b) **Set

*A *= **triu**(**ones**(5), 1) + **diag**(1 : 5),

*B *= *S *∗ *A *∗ *T*

The matrices *A *and *B *both have the eigenvalues

1, 2, 3, 4, and 5. Use MATLAB to compute

the eigenvalues of *B*. How many digits of accuracy

are there in the computed eigenvalues?

Use MATLAB to compute and compare each

of the following:

**(i) **det(*A*) and det(*B*)

**(ii) ****trace**(*A*) and **trace**(*B*)

**(iii) ***SA*2*T *and *B*2

**(iv) ***SA*−1*T *and *B*−1

Hermitian Matrices

- Construct a complex Hermitian matrix by setting

*j* = **sqrt**(−1);

*A* = **rand**(5) + *j* ∗ **rand**(5);

*A* = (*A* + *A*′)*/*2

**(a) **The eigenvalues of *A *should be real. Why?

Compute the eigenvalues and examine your

results, using **format long**. Are the computed

eigenvalues real? Compute also the eigenvectors

by setting

[*X*,*D*] = **eig**(*A*)

What type of matrix would you expect *X* to be? Use the MATLAB command *X*′ ∗ *X* to compute *X*^*H**X*. Do the results agree with your

expectations?

**(b) **Set

*E* = *D* + *j* ∗ **eye**(5) and *B* = *X* ∗ *E*/*X*

What type of matrix would you expect *B *to be?

Use MATLAB to compute *B**H**B *and *BB**H*. How

do these two matrices compare?

Visualizing the Singular Value Decomposition

In some of the earlier exercises we used MATLAB’s

**eigshow **command to look at geometric interpretations

of the eigenvalues and eigenvectors of 2 × 2

matrices. The eigshow facility also has an **svdshow**

mode that we can use to visualize the singular values

and singular vectors of a nonsingular 2 × 2 matrix. Before

using the **svdshow **facility, we establish some

basic relations between the right and left singular vectors.

- Let *A* be a nonsingular 2 × 2 matrix with singular

value decomposition *A *= *USV**T *and singular values

*s*1 = *s*11 and *s*2 = *s*22. Explain why each of the

following are true.

**(a) ***AV *= *US*

**(b) ***A***v**1 = *s*1**u**1 and *A***v**2 = *s*2**u**2.

**(c) v**1 and **v**2 are orthogonal unit vectors and the

images *A***v**1 and *A***v**2 are also orthogonal.

**(d)** ‖*A***v**1‖ = *s*1 and ‖*A***v**2‖ = *s*2.

- Set

*A *= [1, 1; 0.5,−0.5]

and use MATLAB to verify each of statements (a)-

(d) in Exercise 20. Use the command **eigshow**(*A*)

to apply the eigshow utility to the matrix *A*. Click

on the **eig/(svd) **button to switch into the

**svdshow **mode. The display in the figure window

should show a pair of orthogonal vectors **x**, **y **and

their images *A***x **and *A***y**. Initially, the images of **x**

and **y **should not be orthogonal. Use the mouse to

rotate the **x **and **y **vectors counterclockwise until

their images *A***x **and *A***y **become orthogonal. When

the images are orthogonal, **x **and **y **are right singular

vectors of *A*. When **x** and **y** are right singular vectors,

how are the singular values and left singular vectors

related to the images *A***x **and *A***y**? Explain. Note that

when you rotate a full 360◦ the image of the unit

circle traces out as an ellipse. How do the singular

values and singular vectors relate to the axes of the

ellipse?

Optimization

- Use the following MATLAB commands to construct a symbolic function:

**syms **x y

*f *= (*y *+ 1)^3 + *x *∗ *y*^2 + *y*^2 − 4 ∗ *x *∗ *y *− 4 ∗ *y *+ 1

Compute the first partials of *f *and the Hessian of *f *by setting

*fx *= **diff**(*f *, *x*), *fy *= **diff**(*f *, *y*)

*H *= [**diff**(*fx*, *x*), **diff**(*fx*, *y*); **diff**(*fy*, *x*), **diff**(*fy*, *y*)]

We can use the **subs **command to evaluate the Hessian for any pair (*x*, *y*). For example, to evaluate the Hessian

when *x *= 3 and *y *= 5, set

H1 = **subs**(*H*, [*x*, *y*], [3, 5])

Use the MATLAB command **solve**(*fx*, *fy*) to determine vectors **x **and **y **containing the *x *and *y *coordinates of the

stationary points. Evaluate the Hessian at each stationary point and then determine whether the stationary point is

a local maximum, local minimum, or saddle point.
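For this particular *f* the partials can also be found by hand: *f*x = *y*^2 − 4*y* and *f*y = 3(*y* + 1)^2 + 2*xy* + 2*y* − 4*x* − 4, which vanish at (−1/4, 0) and (−79/4, 4). A NumPy check of the second-derivative test using these hand-computed derivatives (a sketch alongside the symbolic route, not a replacement for it):

```python
import numpy as np

def grad(x, y):
    # First partials of f(x,y) = (y+1)^3 + x*y^2 + y^2 - 4*x*y - 4*y + 1
    return np.array([y**2 - 4*y,
                     3*(y + 1)**2 + 2*x*y + 2*y - 4*x - 4])

def hessian(x, y):
    # Second partials of the same f
    fxx = 0.0
    fxy = 2*y - 4
    fyy = 6*(y + 1) + 2*x + 2
    return np.array([[fxx, fxy], [fxy, fyy]])

dets = []
for (x, y) in [(-0.25, 0.0), (-79/4, 4.0)]:
    assert np.allclose(grad(x, y), 0)        # confirm stationarity
    dets.append(np.linalg.det(hessian(x, y)))
print(dets)  # both determinants are negative: indefinite Hessians, saddle points
```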

Positive Definite Matrices

- Set

*C *= **ones**(6) + 7 ∗ **eye**(6)

and

[*X*,*D*] = **eig**(*C*)

**(a) **Even though *λ *= 7 is an eigenvalue of multiplicity

5, the matrix *C *cannot be defective.

Why? Explain. Check that *C *is not defective by

computing the rank of *X*. Compute also *X**T**X*.

What type of matrix is *X*? Explain. Compute

also the rank of *C *− 7*I*. What can you conclude

about the dimension of the eigenspace

corresponding to *λ *= 7? Explain.

**(b) **The matrix *C *should be symmetric positive

definite. Why? Explain. Thus, *C *should have

a Cholesky factorization *LL**T *. The MATLAB

command *R *= **chol**(*C*) will generate an upper

triangular matrix *R *that is equal to *L**T *.

Compute *R* in this manner and set *L* = *R*′. Use

MATLAB to verify that

*C *= *LL**T *= *R**T**R*

**(c) **Alternatively, one can determine the Cholesky

factors from the *LU *factorization of *C*. Set

[*L U*] = **lu**(*C*)

and

*D *= **diag**(**sqrt**(**diag**(*U*)))

and

*W* = (*L* ∗ *D*)′

How do *R *and *W *compare? This method of

computing the Cholesky factorization is less

efficient than the method MATLAB uses for its

**chol** function.

- For various values of *k*, form a *k* × *k* matrix *A* by

setting

*D* = **diag**(**ones**(*k* − 1, 1), 1);

*A* = 2 ∗ **eye**(*k*) − *D* − *D*′;

In each case, compute the *LU *factorization of *A*

and the determinant of *A*. If *A *is an *n *× *n *matrix of

this form, what will its *LU *factorization be? What

will its determinant be? Why must the matrix be

positive definite?
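For this matrix the *LU* factorization has a clean pattern: the pivots are 2, 3/2, 4/3, *. . .* , (*n* + 1)/*n*, so det(*A*) = *n* + 1 and every pivot is positive, which is why the matrix is positive definite. A quick NumPy check of the determinant pattern:

```python
import numpy as np

def second_difference(k):
    # The k x k matrix built in the exercise
    D = np.diag(np.ones(k - 1), 1)
    return 2 * np.eye(k) - D - D.T

for k in (3, 5, 8):
    A = second_difference(k)
    print(k, round(np.linalg.det(A)))  # determinant is k + 1
```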

- For any positive integer *n*, the MATLAB command

*P *= **pascal**(*n*) will generate an *n *× *n *matrix *P*

whose entries are given by

*p**ij* = 1 if *i* = 1 or *j* = 1

*p**ij* = *p**i*−1,*j* + *p**i*,*j*−1 if *i* > 1 and *j* > 1

The name **pascal **refers to Pascal’s triangle, a triangular

array of numbers that is used to generate

binomial coefficients. The entries of the matrix *P*

form a section of Pascal’s triangle.

**(a) **Set

*P *= **pascal**(6)

and compute the value of its determinant. Now

subtract 1 from the (6, 6) entry of *P *by setting

*P*(6, 6) = *P*(6, 6) − 1

and compute the determinant of the new matrix

*P*. What is the overall effect of subtracting

1 from the (6, 6) entry of the 6 × 6 Pascal

matrix?

**(b) **In part (a) we saw that the determinant of the

6 × 6 Pascal matrix is 1, but if we subtract 1

from the (6, 6) entry, the matrix becomes singular.

Will this happen in general for *n *× *n*

Pascal matrices? To answer this question, consider

the cases *n *= 4, 8, 12. In each case, set

*P *= **pascal**(*n*) and compute its determinant.

Next, subtract 1 from the (*n*, *n*) entry and compute

the determinant of the resulting matrix.

Does the property that we discovered in part (a)

appear to hold for Pascal matrices in general?

**(c) **Set

*P *= **pascal**(8)

and examine its leading principal submatrices.

Assuming that all Pascal matrices have determinants

equal to 1, why must *P *be positive definite?

Compute the upper triangular Cholesky

factor *R *of *P*. How can the nonzero entries of

*R *be generated as a Pascal triangle? In general,

how is the determinant of a positive definite

matrix related to the determinant of one of its

Cholesky factors? Why must det(*P*) = 1?

**(d) **Set

*R*(8, 8) = 0 and *Q* = *R*′ ∗ *R*

The matrix *Q *should be singular. Why? Explain.

Why must the matrices *P *and *Q *be the

same except for the (8, 8) entry? Why must

*q*88 = *p*88 − 1? Explain. Verify the relation

between *P *and *Q *by computing the difference

*P *− *Q*.
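Parts (a) and (b) can be sketched in NumPy by building the Pascal matrix directly from its recurrence; its determinant is 1, and subtracting 1 from the last entry makes it singular:

```python
import numpy as np

def pascal(n):
    # p[i][j] = 1 in the first row/column, otherwise the sum of the
    # entry above and the entry to the left (Pascal's triangle recurrence)
    P = np.ones((n, n))
    for i in range(1, n):
        for j in range(1, n):
            P[i, j] = P[i - 1, j] + P[i, j - 1]
    return P

P = pascal(6)
d_before = round(np.linalg.det(P))
P[5, 5] -= 1                      # subtract 1 from the (6,6) entry
d_after = round(np.linalg.det(P))
print(d_before, d_after)  # -> 1 0
```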

**CHAPTER TEST A ****True or False**

*In each of the following answer *true *if the statement is*

*always true and *false *otherwise. In the case of a true*

*statement, explain or prove your answer. In the case*

*of a false statement, give an example to show that the*

*statement is not always true.*

- If *A* is an *n* × *n* matrix whose eigenvalues are all

nonzero, then *A *is nonsingular.

- If *A* is an *n* × *n* matrix, then *A* and *A*^*T* have the

same eigenvectors.

- If *A* and *B* are similar matrices, then they have the

same eigenvalues.

- If *A* and *B* are *n* × *n* matrices with the same

eigenvalues, then they are similar.

- If *A* has eigenvalues of multiplicity greater than 1,

then *A *must be defective.

- If *A* is a 4 × 4 matrix of rank 3 and *λ* = 0 is an

eigenvalue of multiplicity 3, then *A *is diagonalizable.

- If *A* is a 4 × 4 matrix of rank 1 and *λ* = 0 is an

eigenvalue of multiplicity 3, then *A *is defective.

- The rank of an *n* × *n* matrix *A* is equal to the number

of nonzero eigenvalues of *A*, where eigenvalues

are counted according to multiplicity.

- The rank of an *m* × *n* matrix *A* is equal to the number

of nonzero singular values of *A*, where singular

values are counted according to multiplicity.

- If *A* is Hermitian and *c* is a complex scalar, then *cA*

is Hermitian.

- If an *n* × *n* matrix *A* has Schur decomposition

*A* = *UTU*^*H*, then the eigenvalues of *A* are

*t*11, *t*22, *. . . *, *t**nn*.

- If *A* is normal, but not Hermitian, then *A* must have

at least one complex eigenvalue.

- If *A* is symmetric positive definite, then *A* is

nonsingular and *A*−1 is also symmetric positive

definite.

- If *A* is symmetric and det(*A*) > 0, then *A* is positive

definite.

- If *A* is symmetric, then *e*^*A* is symmetric positive

definite.

**CHAPTER TEST B**

- Let

*A* =
⎡1 0 0⎤
⎢1 1 −1⎥
⎣1 2 −2⎦

**(a) **Find the eigenvalues of *A*.

**(b) **For each eigenvalue, find a basis for the corresponding

eigenspace.

**(c)** Factor *A* into a product *XDX*^−1 where *D* is a diagonal matrix, and then use the factorization to compute *A*^7.

- Let *A* be a 4 × 4 matrix with real entries that has all

1’s on the main diagonal (i.e., *a*11 = *a*22 = *a*33 =

*a*44 = 1). If *A *is singular and *λ*1 = 3 + 2*i *is an

eigenvalue of *A*, then what, if anything, is it possible

to conclude about the values of the remaining

eigenvalues *λ*2, *λ*3, and *λ*4? Explain.

- Let *A* be a nonsingular *n* × *n* matrix and let *λ* be an

eigenvalue of *A*.

**(a)** Show that *λ* ≠ 0.

**(b)** Show that 1*/λ*

- Show that if *A* is a matrix of the form

*A* =
⎡*a* 0 0⎤
⎢0 *a* 1⎥
⎣0 0 *a*⎦

then *A *must be defective.

- Let

*A* =
⎡4  2  2 ⎤
⎢2 10 10⎥
⎣2 10 14⎦

**(a) **Without computing the eigenvalues of *A*, show

that *A *is positive definite.

**(b) **Factor *A *into a product *LDL**T *where *L *is unit

lower triangular and *D *is diagonal.

**(c) **Compute the Cholesky factorization of *A***.**

- The function

*f* (*x*, *y*) = *x*^3*y* + *x*^2 + *y*^2 − 2*x* − *y* + 4

has a stationary point (1, 0). Compute the Hessian

of *f *at (1, 0), and use it to determine whether

the stationary point is a local maximum, local

minimum, or saddle point.

- Given

**Y**′(*t*) = *A***Y**(*t*), **Y**(0) = **Y**0

where

*A* =
⎡1 −2⎤
⎣3 −4⎦

**Y**0 =
⎡1⎤
⎣2⎦

compute *e*^*tA* and use it to solve the initial value

problem.

- Let *A* be a 4 × 4 real symmetric matrix with

eigenvalues

*λ*1 = 1, *λ*2 = *λ*3 = *λ*4 = 0

**(a) **Explain why the multiple eigenvalue *λ *= 0

must have three linearly independent eigenvectors

**x**2, **x**3, **x**4.

**(b) **Let **x**1 be an eigenvector belonging to *λ*1. How

is **x**1 related to **x**2, **x**3, and **x**4? Explain.

**(c) **Explain how to use **x**1, **x**2, **x**3, and **x**4 to construct

an orthogonal matrix *U *that diagonalizes

*A*.

**(d)** What type of matrix is *e*^*A*? Is it symmetric? Is

it positive definite? Explain your answers.

- Let {**u**1, **u**2} be an orthonormal basis for C^2 and

suppose that a vector **z **can be written as a linear

combination

**z **= (5 − 7*i*)**u**1 + *c*2**u**2

**(a)** What are the values of **u**1^*H***z** and **z**^*H***u**1? If

**z***H***u**2 = 1 + 5*i*, determine the value of *c*2.

**(b)** Use the results from part (a) to determine the value of ‖**z**‖.

- Let *A* be a 5 × 5 nonsymmetric matrix with rank equal to 3, let *B* = *A*^*T**A*, and let *C* = *e*^*B*.

**(a) **What, if anything, can you conclude about the

nature of the eigenvalues of *B*? Explain. What

words best describe the type of matrix that

*B *is?

**(b) **What, if anything, can you conclude about the

nature of the eigenvalues of *C*? Explain. What

words best describe the type of matrix that

*C *is?

- Let *A* and *B* be *n* × *n* matrices.

**(a) **If *A *is real and nonsymmetric with Schur

decomposition *UTU*^*H*, then what types of

matrices are *U *and *T*? How are the eigenvalues

of *A *related to *U *and *T*? Explain your answers.

**(b) **If *B *is Hermitian with Schur decomposition

*WSW*^*H*, then what types of matrices are *W* and

*S*? How are the eigenvalues and eigenvectors of

*B *related to *W *and *S*? Explain your answers.

- Let *A* be a matrix whose singular value decomposition *A* = *U*Σ*V*^*T* is given by

*U* =
⎡2/5 −2/5 −2/5 −2/5  3/5⎤
⎢2/5 −2/5 −2/5  3/5 −2/5⎥
⎢2/5 −2/5  3/5 −2/5 −2/5⎥
⎢2/5  3/5 −2/5 −2/5 −2/5⎥
⎣3/5  2/5  2/5  2/5  2/5⎦

Σ =
⎡100 0 0 0⎤
⎢0 10 0 0⎥
⎢0 0 10 0⎥
⎢0 0 0 0⎥
⎣0 0 0 0⎦

*V*^*T* =
⎡ 1/2  1/2  1/2 1/2⎤
⎢ 1/2 −1/2 −1/2 1/2⎥
⎢−1/2 −1/2  1/2 1/2⎥
⎣−1/2  1/2 −1/2 1/2⎦

Make use of the singular value decomposition to do each of the following:

**(a) **Determine the rank of *A*.

**(b) **Find an orthonormal basis for *R*(*A*).

**(c) **Find an orthonormal basis for *N*(*A*).

**(d) **Find the matrix *B *that is the closest matrix of rank 1 to *A*. (The distance between matrices is measured using

the Frobenius norm.)

**(e) **Let *B *be the matrix asked for in part (d). Use the singular values of *A *to determine the distance between *A*

and *B* (i.e., use the singular values of *A* to determine the value of ‖*B* − *A*‖*F*).
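Reading the three displayed factors row by row (an assumption about how the typeset matrices should be parsed), the answers can be checked numerically: the factors are orthogonal, *A* has three nonzero singular values, and the closest rank-1 matrix *B* keeps only the singular value 100, so ‖*A* − *B*‖F = √(10² + 10²):

```python
import numpy as np

# Factors as parsed row by row from the displayed decomposition (an assumption)
U = np.array([[ 2, -2, -2, -2,  3],
              [ 2, -2, -2,  3, -2],
              [ 2, -2,  3, -2, -2],
              [ 2,  3, -2, -2, -2],
              [ 3,  2,  2,  2,  2]]) / 5.0
S = np.zeros((5, 4))
S[0, 0], S[1, 1], S[2, 2] = 100, 10, 10
Vt = np.array([[ 1,  1,  1,  1],
               [ 1, -1, -1,  1],
               [-1, -1,  1,  1],
               [-1,  1, -1,  1]]) / 2.0

assert np.allclose(U @ U.T, np.eye(5))    # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(4))  # V is orthogonal

A = U @ S @ Vt
print(np.linalg.matrix_rank(A))           # (a) rank = number of nonzero singular values

# (d)-(e): closest rank-1 matrix retains only the largest singular value
S1 = np.zeros_like(S)
S1[0, 0] = 100
B = U @ S1 @ Vt
print(np.linalg.norm(A - B, 'fro'))       # sqrt(10^2 + 10^2) ~ 14.142
```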

** **

**Chapter 7 ****Numerical Linear Algebra**

SECTION 7.1 EXERCISES

- Find the three-digit decimal floating-point representation

of each of the following numbers:

**(a) **2312 **(b) **32.56

**(c) **0.01277 **(d) **82,431

- Find the absolute error and the relative error when

each of the real numbers in Exercise 1 is approximated

by a three-digit decimal floating-point

number.

- Represent each of the following numbers as normalized

base 2 floating-point numbers using 4

digits to represent the fractional part of the mantissa;

that is, represent the numbers in the form

±(1.*b*1*b*2*b*3*b*4)2 × 2*k*.

**(a) **21 **(b) **3*/*8 **(c) **9.872 **(d) **−0.1

- Use four-digit decimal floating-point arithmetic to

do each of the following and calculate the absolute

and relative errors in your answers:

**(a) **10,420 + 0.0018 **(b) **10,424 − 10,416

**(c) **0.12347 − 0.12342 **(d) **(3626.6) · (22.656)

- Let *x*1 = 94,210, *x*2 = 8631, *x*3 = 1440, *x*4 = 133,

and *x*5 = 34. Calculate each of the following, using

four-digit decimal floating-point arithmetic:

**(a) **(((*x*1 + *x*2) + *x*3) + *x*4) + *x*5

**(b) ***x*1 + ((*x*2 + *x*3) + (*x*4 + *x*5))

**(c) **(((*x*5 + *x*4) + *x*3) + *x*2) + *x*1
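Exercise 5 can be explored numerically. The sketch below simulates four-digit decimal arithmetic by rounding to four significant digits after every addition (an approximation of the text's arithmetic; `round` rounds rather than chops) and shows that different summation orders need not agree:

```python
import math

def fl4(x):
    """Round x to four significant decimal digits (simulated 4-digit arithmetic)."""
    if x == 0:
        return 0
    return round(x, 3 - math.floor(math.log10(abs(x))))

xs = [94210, 8631, 1440, 133, 34]

# part (a): accumulate left to right, rounding after each addition
forward = xs[0]
for x in xs[1:]:
    forward = fl4(forward + x)

# part (c): accumulate from the smallest term up, which loses less
backward = xs[-1]
for x in reversed(xs[:-1]):
    backward = fl4(backward + x)

print(forward, backward, sum(xs))  # the orderings disagree; true sum is 104448
```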

- What would the machine epsilon be for a computer

that uses 16-digit base 10 floating-point

arithmetic?

- What would the machine epsilon be for a computer

that uses 36-digit base 2 floating-point

arithmetic?

- How many floating-point numbers are there in the

system if *t *= 2, *L *= −2, *U *= 2, and *β *= 2?
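The last count can be verified by brute-force enumeration. The sketch below uses the convention that the mantissa is ±0.*d*1…*d**t *with *d*1 ≠ 0 and that 0 belongs to the system (an assumption matching the usual formula 2(*β *− 1)*β**t*−1(*U *− *L *+ 1) + 1):

```python
from fractions import Fraction

def float_system(beta, t, L, U):
    """All values ±0.d1...dt × beta^k with d1 != 0 and L <= k <= U, plus 0."""
    values = {Fraction(0)}
    for k in range(L, U + 1):
        for m in range(beta ** (t - 1), beta ** t):   # mantissas with d1 != 0
            mant = Fraction(m, beta ** t)
            values.add(mant * Fraction(beta) ** k)
            values.add(-mant * Fraction(beta) ** k)
    return values

count = len(float_system(2, 2, -2, 2))
print(count)  # 21, matching 2*(beta-1)*beta**(t-1)*(U-L+1) + 1
```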

- In each of the following you are given a bit sequence

corresponding to the IEEE single precision

representation of a floating-point number. In each

case determine the base 2 floating-point representation

of the number and also the base 10 decimal

representation of the number.

**(a) **01000001000110100000000000000000

**(b) **10111100010110000000000000000000

**(c) **11000100010010000000000000000000
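Hand decoding of these bit patterns can be checked with Python's `struct` module, which reinterprets the four raw bytes as an IEEE single-precision number:

```python
import struct

def decode_ieee_single(bits):
    """Interpret a 32-character bit string as an IEEE single-precision number."""
    assert len(bits) == 32
    return struct.unpack('>f', int(bits, 2).to_bytes(4, 'big'))[0]

for b in ('01000001000110100000000000000000',
          '10111100010110000000000000000000',
          '11000100010010000000000000000000'):
    print(b, decode_ieee_single(b))   # e.g. pattern (a) decodes to 9.625
```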

- When the following functions are evaluated at

values of *x *that are close to 0 there will be a

loss of significant digits of accuracy. For each

function: (i) use identities or Taylor series approximations

to find an alternative representation

of the function that avoids cancellation of significant

digits, (ii) use a hand calculator or computer

to evaluate the function by plugging in the

value *x *= 10−8 and also evaluate the alternative

representation of the function at the point

*x *= 10−8.

**(a) ***f *(*x*) = (1 − cos *x*)*/ *sin *x ***(b) ***f *(*x*) = *e**x *− 1

**(c) ***f *(*x*) = sec *x *− cos *x ***(d) ***f *(*x*) = (sin *x*)*/**x *− 1
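The cancellation in part (a) is easy to see even in double precision (a sketch, not the three- or four-digit arithmetic of the text): rewriting 1 − cos *x *via the identity 1 − cos *x *= sin²*x/*(1 + cos *x*) avoids the subtraction of nearly equal quantities.

```python
import math

x = 1e-8

# naively: cos(1e-8) rounds to exactly 1.0 in double precision, so the
# numerator cancels to 0 and every significant digit is lost
naive = (1 - math.cos(x)) / math.sin(x)

# the identity 1 - cos x = sin^2 x / (1 + cos x) avoids the cancellation
stable = math.sin(x) / (1 + math.cos(x))

print(naive, stable)  # 0.0 versus the correct value ~5e-9
```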

SECTION 7.2 EXERCISES

- Let

*A *= [1 1 1; 2 4 1; −3 1 −2]

Factor *A *into a product *LU*, where *L *is lower triangular

with 1’s along the diagonal and *U *is upper

triangular.
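The factorization can be checked with a short Doolittle elimination (a sketch without pivoting, which is safe here because every pivot turns out to be nonzero):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = LU, unit diagonal on L."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]          # multiplier l_ij
            L[i][j] = m
            for k in range(j, n):
                U[i][k] -= m * U[j][k]
    return L, U

A = [[1.0, 1.0, 1.0], [2.0, 4.0, 1.0], [-3.0, 1.0, -2.0]]
L, U = lu_factor(A)
print(L)  # unit lower triangular: rows (1,0,0), (2,1,0), (-3,2,1)
print(U)  # upper triangular: rows (1,1,1), (0,2,-1), (0,0,3)
```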

- Let *A *be the matrix in Exercise 1. Use the *LU *factorization

of *A *to solve *A***x **= **b **for each of the

following choices of **b**:

**(a) **(4, 3,−13)*T ***(b) **(3, 1,−10)*T*

**(c) **(7, 23, 0)*T*

- Let *A *and *B *be *n *× *n *matrices and let **x **∈ R*n*.

**(a) **How many scalar additions and multiplications

are necessary to compute the product *A***x**?

**(b) **How many scalar additions and multiplications

are necessary to compute the product *AB*?

**(c) **How many scalar additions and multiplications

are necessary to compute (*AB*)**x**? To compute

*A*(*B***x**)?

- Let *A *∈ R*m*×*n*, *B *∈ R*n*×*r*, and **x**, **y **∈ R*n*. Suppose

that the product *A***xy***T**B *is computed in the following

ways:

**(i) **(*A*(**xy***T *))*B ***(ii) **(*A***x**)(**y***T**B*)

**(iii) **((*A***x**)**y***T *)*B*

**(a) **How many scalar additions and multiplications

are necessary for each of these computations?

**(b) **Compare the number of scalar additions and

multiplications for each of the three methods

when *m *= 5, *n *= 4, and *r *= 3. Which method

is most efficient in this case?

- Let *E**ki *be the elementary matrix formed by subtracting

*α *times the *i*th row of the identity matrix

from the *k*th row.

**(a) **Show that *E**ki *= *I *− *α***e***k***e***i**T*.

**(b) **Let *E**ji *= *I *− *β***e***j***e***i**T*. Show that *E**ji**E**ki *= *I *− (*α***e***k *+ *β***e***j*)**e***i**T*.

**(c) **Show that *E**ki*−1 = *I *+ *α***e***k***e***i**T*.

- Let *A *be an *n *× *n *matrix with triangular factorization *LU*. Show that

det(*A*) = *u*11*u*22 · · · *u**nn*

- If *A *is a symmetric *n *× *n *matrix with triangular factorization

*LU*, then *A *can be factored further into

a product *LDL**T *(where *D *is diagonal). Devise an

algorithm, similar to Algorithm 7.2.2, for solving

*LDL**T***x **= **b**.

- Write an algorithm for solving the tridiagonal system *T***x **= **d**, where *T *is the *n *× *n *tridiagonal matrix with diagonal entries *a*1, *. . . *, *a**n*, superdiagonal entries *b*1, *. . . *, *b**n*−1, and subdiagonal entries *c*1, *. . . *, *c**n*−1, by Gaussian elimination with the diagonal elements as pivots. How many additions/subtractions and multiplications/divisions are necessary?
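One standard way to organize this elimination (often called the Thomas algorithm) is sketched below; counting the operations inside its two loops gives the answer the exercise is after.

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system with diagonal a, superdiagonal b,
    subdiagonal c, and right-hand side d, pivoting on the diagonal.
    Each forward-elimination step costs one division and two
    multiply-subtract pairs; back substitution costs one more
    division and one multiply-subtract per unknown."""
    n = len(a)
    a, d = a[:], d[:]                  # avoid clobbering the caller's data
    for i in range(1, n):
        m = c[i - 1] / a[i - 1]
        a[i] -= m * b[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / a[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - b[i] * x[i + 1]) / a[i]
    return x

# small check on the second-difference matrix [2 -1 0; -1 2 -1; 0 -1 2]
x = solve_tridiagonal([2.0, 2.0, 2.0], [-1.0, -1.0], [-1.0, -1.0],
                      [1.0, 0.0, 1.0])
print(x)  # close to (1, 1, 1)
```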

- Let *A *= *LU*, where *L *is lower triangular with 1’s

on the diagonal and *U *is upper triangular.

**(a) **How many scalar additions and multiplications

are necessary to solve *L***y **= **e***j *by forward

substitution?

**(b) **How many additions/subtractions and multiplications/

divisions are necessary to solve

*A***x **= **e***j*? The solution **x***j *of *A***x **= **e***j *will be

the *j*th column of *A*−1.

**(c) **Given the factorization *A *= *LU*, how many

additional multiplications/divisions and additions/

subtractions are needed to compute *A*−1?

- Suppose that *A*−1 and the *LU *factorization of *A *have already been determined. How many scalar additions and multiplications are necessary to compute *A*−1**b**? Compare this number with the number of operations required to solve *LU***x **= **b **using Algorithm 7.2.2. Suppose that we have a number of systems to solve with the same coefficient matrix *A*. Is it worthwhile to compute *A*−1? Explain.

- Let *A *be a 3 × 3 matrix and assume that *A *can

be transformed into a lower triangular matrix *L *by

using only column operations of type III; that is,

*AE*1*E*2*E*3 = *L*

where *E*1, *E*2, *E*3 are elementary matrices of

type III. Let

*U *= (*E*1*E*2*E*3)−1

Show that *U *is upper triangular with 1’s on the diagonal

and *A *= *LU*. (This exercise illustrates a

column version of Gaussian elimination.)

SECTION 7.3 EXERCISES

- Let

*A *= [0 3 1; 1 2 −2; 2 5 4] and **b **= (1, 7, −1)*T*

**(a) **Reorder the rows of (*A*|**b**) in the order (2, 3, 1)

and then solve the reordered system.

**(b) **Factor *A *into a product *P**T**LU*, where *P *is

the permutation matrix corresponding to the

reordering in part (a).
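Gaussian elimination with row interchanges can be sketched as follows. Note that the exercise asks for the specific reordering (2, 3, 1); the sketch instead lets partial pivoting choose its own rows, which here also avoids the zero pivot in position (1, 1).

```python
def solve_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting (row interchanges only)."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix copy
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(A[i][j]))  # pivot row
        A[j], A[p] = A[p], A[j]
        for i in range(j + 1, n):
            m = A[i][j] / A[j][j]
            for k in range(j, n + 1):
                A[i][k] -= m * A[j][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

A = [[0.0, 3.0, 1.0], [1.0, 2.0, -2.0], [2.0, 5.0, 4.0]]
b = [1.0, 7.0, -1.0]
x = solve_partial_pivoting(A, b)
residual = max(abs(sum(Ai[k] * x[k] for k in range(3)) - bi)
               for Ai, bi in zip(A, b))
print(x, residual)  # solution near (1, 1, -2), tiny residual
```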

- Let *A *be the matrix in Exercise 1. Use the factorization

*P**T**LU *to solve *A***x **= **c **for each of the following

choices of **c**:

**(a) **(8, 1, 20)*T ***(b) **(−9,−2,−7)*T*

**(c) **(4, 1, 11)*T*

- Let

*A *= [1 8 6; −1 −4 5; 2 4 −6] and **b **= (8, 1, 4)*T*

Solve the system *A***x **= **b **using partial pivoting. If

*P *is the permutation matrix corresponding to the

pivoting strategy, factor *PA *into a product *LU*.

- Let

*A *= [3 2; 2 4] and **b **= (5, −2)*T*

Solve the system *A***x **= **b **using complete pivoting.

Let *P *be the permutation matrix determined by

the pivot rows and *Q *the permutation matrix determined

by the pivot columns. Factor *PAQ *into a

product *LU*.

- Let *A *be the matrix in Exercise 4 and let **c **=

(6,−4)*T *. Solve the system *A***x **= **c **in two steps:

**(a) **Set **z **= *Q**T***x **and solve *LU***z **= *P***c **for **z**.

**(b) **Calculate **x **= *Q***z**.

- Let

*A *= [5 4 7; 2 −4 3; 2 8 6], **b **= (2, −5, 4)*T*, **c **= (5, −4, 2)*T*

**(a) **Use complete pivoting to solve the system

*A***x **= **b**.

**(b) **Let *P *be the permutation matrix determined

by the pivot rows, and let *Q *be the permutation

matrix determined by the pivot columns.

Factor *PAQ *into a product *LU*.

**(c) **Use the *LU *factorization from part (b) to solve

the system *A***x **= **c**.

- The exact solution of the system

0.6000*x*1 + 2000*x*2 = 2003

0.3076*x*1 − 0.4010*x*2 = 1.137

is **x **= (5, 1)*T *. Suppose that the calculated value of

*x*2 is *x̄*2 = 1 + *e*. Use this value in the first equation

and solve for *x*1. What will the error be? Calculate

the relative error in *x*1 if *e *= 0.001.

- Solve the system in Exercise 7 using four-digit

decimal floating-point arithmetic and Gaussian

elimination with partial pivoting.

- Solve the system in Exercise 7 using four-digit

decimal floating-point arithmetic and Gaussian

elimination with complete pivoting.

- Use four-digit decimal floating-point arithmetic,

and scale the system in Exercise 7 by multiplying

the first equation through by 1*/*2000 and the

second equation through by 1*/*0.4010. Solve the

scaled system using partial pivoting.

SECTION 7.4 EXERCISES

- Determine ‖·‖*F*, ‖·‖∞, and ‖·‖1 for each of the

following matrices:

**(a) **[1 0; 0 1] **(b) **[1 4; −2 2] **(c) **[1*/*2 1*/*2; 1*/*2 1*/*2]

**(d) **[0 5 1; 2 3 1; 1 2 2] **(e) **[5 0 5; 4 1 0; 3 2 1]
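The 1-, ∞-, and Frobenius norms asked for here are directly computable from the entries (a sketch; the 2-norm, by contrast, requires singular values). For matrix (b) with rows (1, 4) and (−2, 2):

```python
import math

def norm_1(A):
    """Maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

def norm_frobenius(A):
    """Square root of the sum of squared entries."""
    return math.sqrt(sum(v * v for row in A for v in row))

B = [[1.0, 4.0], [-2.0, 2.0]]
print(norm_1(B), norm_inf(B), norm_frobenius(B))  # 6, 5, 5
```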

- Let

*A *= [2 0; 0 −2] and **x **= (*x*1, *x*2)*T*

and set

*f *(*x*1, *x*2) = ‖*A***x**‖2*/*‖**x**‖2

Determine the value of ‖*A*‖2 by finding the maximum value of *f *for all (*x*1, *x*2) ≠ (0, 0).

- Let

*A *= [1 0; 0 0]

Use the method of Exercise 2 to determine the value of ‖*A*‖2.

- Let

*D *= [3 0 0 0; 0 −5 0 0; 0 0 −2 0; 0 0 0 4]

**(a) **Compute the singular value decomposition of *D*.

**(b) **Find the value of ‖*D*‖2.

- Show that if *D *is an *n *× *n *diagonal matrix, then

‖*D*‖2 = max1≤*i*≤*n *|*d**ii*|

- If *D *is an *n *× *n *diagonal matrix, how do the values of ‖*D*‖1, ‖*D*‖2, and ‖*D*‖∞ compare? Explain your answers.

- Let *I *denote the *n *× *n *identity matrix. Determine the values of ‖*I*‖1, ‖*I*‖∞, and ‖*I*‖*F*.

- Let ‖·‖*M *denote a matrix norm on R*n*×*n*, let ‖·‖*V *denote a vector norm on R*n*, and let *I *be the *n *× *n *identity matrix. Show that

**(a) **If ‖·‖*M *and ‖·‖*V *are compatible, then ‖*I*‖*M *≥ 1.

**(b) **If ‖·‖*M *is subordinate to ‖·‖*V*, then ‖*I*‖*M *= 1.

- A vector **x **in R*n *can also be viewed as an *n *× 1 matrix *X *= (*x*1, *x*2, *. . . *, *x**n*)*T*.

**(a) **How do the matrix norm ‖*X*‖∞ and the vector norm ‖**x**‖∞ compare? Explain.

**(b) **How do the matrix norm ‖*X*‖1 and the vector norm ‖**x**‖1 compare? Explain.

- A vector **y **in R*n *can also be viewed as an *n *× 1 matrix *Y *= (**y**). Show that

**(a) **‖*Y*‖2 = ‖**y**‖2 **(b) **‖*Y**T*‖2 = ‖**y**‖2

- Let *A *= **wy***T*, where **w **∈ R*m *and **y **∈ R*n*. Show that

**(a) **‖*A***x**‖2*/*‖**x**‖2 ≤ ‖**y**‖2‖**w**‖2 for all **x **≠ **0 **in R*n*.

**(b) **‖*A*‖2 = ‖**y**‖2‖**w**‖2

- Let

*A *= [3 −1 −2; −1 2 −7; 4 1 4]

**(a) **Determine ‖*A*‖∞.

**(b) **Find a vector **x **whose coordinates are each ±1 such that ‖*A***x**‖∞ = ‖*A*‖∞. (Note that ‖**x**‖∞ = 1, so ‖*A*‖∞ = ‖*A***x**‖∞*/*‖**x**‖∞.)

- Theorem 7.4.2 states that

‖*A*‖∞ = max1≤*i*≤*m *Σ*n**j*=1 |*a**ij*|

Prove this in two steps.

**(a) **Show first that

‖*A*‖∞ ≤ max1≤*i*≤*m *Σ*n**j*=1 |*a**ij*|

**(b) **Construct a vector **x **whose coordinates are each ±1 such that

‖*A***x**‖∞*/*‖**x**‖∞ = ‖*A***x**‖∞ = max1≤*i*≤*m *Σ*n**j*=1 |*a**ij*|

- Show that ‖*A*‖*F *= ‖*A**T*‖*F*.

- Let *A *be a symmetric *n *× *n *matrix. Show that ‖*A*‖∞ = ‖*A*‖1.

- Let *A *be a 5 × 4 matrix with singular values *σ*1 = 5, *σ*2 = 3, and *σ*3 = *σ*4 = 1. Determine the values of ‖*A*‖2 and ‖*A*‖*F*.

- Let *A *be an *m *× *n *matrix.

**(a) **Show that ‖*A*‖2 ≤ ‖*A*‖*F*.

**(b) **Under what circumstances will ‖*A*‖2 = ‖*A*‖*F*?

- Let ‖·‖ denote a family of vector norms and let ‖·‖*M *be a subordinate matrix norm. Show that

‖*A*‖*M *= max‖**x**‖=1 ‖*A***x**‖

- Let *A *be an *m *× *n *matrix and let ‖·‖v and ‖·‖w be vector norms on R*n *and R*m*, respectively. Show that

‖*A*‖(v,w) = max**x**≠**0 **‖*A***x**‖w*/*‖**x**‖v

defines a matrix norm on R*m*×*n*.

- Let *A *be an *m *× *n *matrix. The (1,2)-norm of *A *is given by

‖*A*‖(1,2) = max**x**≠**0 **‖*A***x**‖2*/*‖**x**‖1

(See Exercise 19.) Show that

‖*A*‖(1,2) = max (‖**a**1‖2, ‖**a**2‖2, *. . . *, ‖**a***n*‖2)

- Let *A *be an *m *× *n *matrix. Show that ‖*A*‖(1,2) ≤ ‖*A*‖2.

- Let *A *∈ R*m*×*n *and *B *∈ R*n*×*r*. Show that

**(a) **‖*A***x**‖2 ≤ ‖*A*‖(1,2)‖**x**‖1 for all **x **in R*n*.

**(b) **‖*AB*‖(1,2) ≤ ‖*A*‖2‖*B*‖(1,2)

**(c) **‖*AB*‖(1,2) ≤ ‖*A*‖(1,2)‖*B*‖1

- Let *A *be an *n *× *n *matrix and let ‖·‖*M *be a matrix norm that is compatible with some vector norm on R*n*. Show that if *λ *is an eigenvalue of *A*, then |*λ*| ≤ ‖*A*‖*M*.

- Use the result from Exercise 23 to show that if *λ *is

an eigenvalue of a stochastic matrix, then |*λ*| ≤ 1.

- Sudoku is a popular puzzle involving matrices. In

this puzzle one is given some of the entries of a

9 ×9 matrix *A *and asked to fill in the missing

entries. The matrix *A *has block structure

*A *= [*A*11 *A*12 *A*13; *A*21 *A*22 *A*23; *A*31 *A*32 *A*33]

where each submatrix *A**ij *is 3 × 3. The rules of the

puzzle are that each row, each column, and each of

the submatrices of *A *must be made up of all of the

integers 1 through 9. We will refer to such a matrix

as a *sudoku matrix*. Show that if *A *is a sudoku

matrix, then *λ *= 45 is its dominant eigenvalue.

- Let *A**ij *be a submatrix of a sudoku matrix *A *(see

Exercise 25). Show that if *λ *is an eigenvalue of *A**ij*,

then |*λ*| ≤ 22.

- Let *A *be an *n *× *n *matrix and **x **∈ R*n*. Prove:

**(a) **‖*A***x**‖∞ ≤ *n*1*/*2‖*A*‖2‖**x**‖∞

**(b) **‖*A***x**‖2 ≤ *n*1*/*2‖*A*‖∞‖**x**‖2

**(c) ***n*−1*/*2‖*A*‖2 ≤ ‖*A*‖∞ ≤ *n*1*/*2‖*A*‖2

- Let *A *be a symmetric *n *× *n *matrix with eigenvalues *λ*1, *. . . *, *λ**n *and orthonormal eigenvectors **u**1, *. . . *, **u***n*. Let **x **∈ R*n *and let *c**i *= **u***i**T***x **for *i *= 1, 2, *. . . *, *n*. Show that

**(a) **‖*A***x**‖2² = Σ*n**i*=1 (*λ**i**c**i*)2

**(b) **if **x **≠ **0**, then

min1≤*i*≤*n *|*λ**i*| ≤ ‖*A***x**‖2*/*‖**x**‖2 ≤ max1≤*i*≤*n *|*λ**i*|

**(c) **‖*A*‖2 = max1≤*i*≤*n *|*λ**i*|

- Let

*A *= [1 −0.99; −1 1]

Find *A*−1 and cond∞(*A*).
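A quick numerical check of this computation (a sketch for the 2 × 2 case using the adjugate formula for the inverse):

```python
def inverse_2x2(A):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(A):
    """Maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[1.0, -0.99], [-1.0, 1.0]]
Ainv = inverse_2x2(A)
cond_inf = norm_inf(A) * norm_inf(Ainv)
print(Ainv, cond_inf)  # A is ill conditioned: cond is about 400
```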

- Solve the given two systems and compare the solutions. Are the coefficient matrices well conditioned? Ill conditioned? Explain.

1.0*x*1 + 2.0*x*2 = 1.12    1.000*x*1 + 2.011*x*2 = 1.120

2.0*x*1 + 3.9*x*2 = 2.16    2.000*x*1 + 3.982*x*2 = 2.160

- Let

*A *= [1 0 1; 2 2 3; 1 1 2]

Calculate cond∞(*A*) = ‖*A*‖∞‖*A*−1‖∞.

- Let *A *be a nonsingular *n *× *n *matrix, and let ‖·‖*M*

denote a matrix norm that is compatible with some

vector norm on R*n*. Show that

cond*M*(*A*) ≥ 1

- Let

*A**n *= [1 1; 1 1 − 1*/**n*]

for each positive integer *n*. Calculate

**(a) ***A**n*−1 **(b) **cond∞(*A**n*) **(c) **lim*n*→∞ cond∞(*A**n*)

- If *A *is a 5 × 3 matrix with ‖*A*‖2 = 8, cond2(*A*) = 2, and ‖*A*‖*F *= 12, determine the singular values

of *A*.

- Given

*A *= [3 2; 1 1] and **b **= (5, 2)*T*

If two-digit decimal floating-point arithmetic is used to solve the system *A***x **= **b**, the computed solution will be **x **= (1.1, 0.88)*T *.

**(a) **Determine the residual vector **r **and the value of the relative residual ‖**r**‖∞*/*‖**b**‖∞.

**(b) **Find the value of cond∞(*A*).

**(c) **Without computing the exact solution, use the

results from parts (a) and (b) to obtain bounds

for the relative error in the computed solution.

**(d) **Compute the exact solution **x **and determine

the actual relative error. Compare this to the

bounds derived in part (c).

- Let

*A *= [−0.50 0.75 −0.25; −0.50 0.25 0.25; 1.00 −0.50 0.50]

Calculate cond1(*A*) = ‖*A*‖1‖*A*−1‖1.

- Let *A *be the matrix in Exercise 36 and let

*A*′ = [−0.5 0.8 −0.3; −0.5 0.3 0.3; 1.0 −0.5 0.5]

Let **x **and **x**′ be the solutions of *A***x **= **b **and *A*′**x **= **b**, respectively, for some **b **∈ R3. Find a bound for the relative error ‖**x **− **x**′‖1*/*‖**x**′‖1.

- Let

*A *= [1 −1 −1 −1; 0 1 −1 −1; 0 0 1 −1; 0 0 0 1] and **b **= (5.00, 1.02, 1.04, 1.10)*T*

An approximate solution of *A***x **= **b **is calculated by rounding the entries of **b **to the nearest integer and then solving the rounded system with integer arithmetic. The calculated solution is **x**′ = (12, 4, 2, 1)*T *. Let **r **denote the residual vector.

**(a) **Determine the values of ‖**r**‖∞ and cond∞(*A*).

**(b) **Use your answer to part (a) to find an upper bound for the relative error in the solution.

**(c) **Compute the exact solution **x **and determine the relative error ‖**x **− **x**′‖∞*/*‖**x**‖∞.

- Let *A *and *B *be nonsingular *n *× *n *matrices. Show

that

cond(*AB*) ≤ cond(*A*) cond(*B*)

- Let *D *be a nonsingular *n *× *n *diagonal matrix and let

*d*max = max1≤*i*≤*n *|*d**ii*| and *d*min = min1≤*i*≤*n *|*d**ii*|

**(a) **Show that

cond1(*D*) = cond∞(*D*) = *d*max*/**d*min

**(b) **Show that

cond2(*D*) = *d*max*/**d*min

- Let *Q *be an *n *× *n *orthogonal matrix. Show that

**(a) **‖*Q*‖2 = 1 **(b) **cond2(*Q*) = 1

**(c) **for any **b **∈ R*n*, the relative error in the solution of *Q***x **= **b **is equal to the relative residual; that is,

‖**e**‖2*/*‖**x**‖2 = ‖**r**‖2*/*‖**b**‖2

- Let *A *be an *n *× *n *matrix and let *Q *and *V *be *n *× *n *orthogonal matrices. Show that

**(a) **‖*QA*‖2 = ‖*A*‖2 **(b) **‖*AV*‖2 = ‖*A*‖2

**(c) **‖*QAV*‖2 = ‖*A*‖2

- Let *A *be an *m *× *n *matrix and let *σ*1 be the largest singular value of *A*. Show that if **x **and **y **are nonzero vectors in R*n*, then each of the following holds:

**(a) **|**x***T**A***y**|*/*(‖**x**‖2‖**y**‖2) ≤ *σ*1

[*Hint*: Make use of the Cauchy–Schwarz inequality.]

**(b) **max**x**≠**0**,**y**≠**0 **|**x***T**A***y**|*/*(‖**x**‖‖**y**‖) = *σ*1

- Let *A *be an *m *× *n *matrix with singular value decomposition *UΣV**T *. Show that

min**x**≠**0 **‖*A***x**‖2*/*‖**x**‖2 = *σ**n*

- Let *A *be an *m *× *n *matrix with singular value decomposition *UΣV**T *. Show that, for any vector **x **∈ R*n*,

*σ**n*‖**x**‖2 ≤ ‖*A***x**‖2 ≤ *σ*1‖**x**‖2

- Let *A *be a nonsingular *n *× *n *matrix and let *Q *be an

*n *× *n *orthogonal matrix. Show that

**(a) **cond2(*QA*) = cond2(*AQ*) = cond2(*A*)

**(b) **if *B *= *Q**T**AQ*, then cond2(*B*) = cond2(*A*).

- Let *A *be a symmetric nonsingular *n *× *n *matrix with eigenvalues *λ*1, *. . . *, *λ**n*. Show that

cond2(*A*) = max1≤*i*≤*n *|*λ**i*| */ *min1≤*i*≤*n *|*λ**i*|

SECTION 7.5 EXERCISES

- For each of the following vectors **x**, find a rotation matrix *R *such that *R***x **= ‖**x**‖2**e**1:

**(a) x **= (1, 1)*T ***(b) x **= (√3, −1)*T*

**(c) x **= (−4, 3)*T*
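A rotation of this kind can be built directly from the coordinates of **x**, since *c *= *x*1*/*‖**x**‖2 and *s *= *x*2*/*‖**x**‖2. A sketch for part (a):

```python
import math

def rotation_to_e1(x):
    """2x2 rotation R with R x = ||x||_2 e_1."""
    r = math.hypot(x[0], x[1])
    c, s = x[0] / r, x[1] / r
    return [[c, s], [-s, c]]

x = [1.0, 1.0]
R = rotation_to_e1(x)
y = [R[0][0] * x[0] + R[0][1] * x[1],
     R[1][0] * x[0] + R[1][1] * x[1]]
print(y)  # (sqrt(2), 0) up to roundoff
```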

- Given **x **∈ R3, define

*r**ij *= (*x**i*2 + *x**j*2)1*/*2, *i*, *j *= 1, 2, 3

For each of the following, determine a Givens transformation *G**ij *such that the *i*th and *j*th coordinates of *G**ij***x **are *r**ij *and 0, respectively:

**(a) x **= (3, 1, 4)*T *, *i *= 1, *j *= 3

**(b) x **= (1, −1, 2)*T *, *i *= 1, *j *= 2

**(c) x **= (4, 1, √3)*T *, *i *= 2, *j *= 3

**(d) x **= (4, 1, √3)*T *, *i *= 3, *j *= 2

- For each of the given vectors **x**, find a Householder

transformation that zeros out the last two entries of

the vector.

**(a) x **= (−1, 8,−4)*T ***(b) x **= (3, 6, 2)*T*

**(c) x **= (0,−3, 4)*T*
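The construction *H *= *I *− 2**uu***T *with **u **proportional to **x **− ‖**x**‖2**e**1 can be checked numerically; a sketch for part (b), where ‖**x**‖2 = 7 (the sign of the reflection is chosen naively here; production code would pick the sign that avoids cancellation):

```python
import math

def householder_to_e1(x):
    """Householder H = I - 2uu^T with Hx = ||x||_2 e_1 (naive sign choice)."""
    n = len(x)
    norm = math.sqrt(sum(v * v for v in x))
    v = x[:]
    v[0] -= norm                      # v = x - ||x|| e_1
    vnorm = math.sqrt(sum(w * w for w in v))
    u = [w / vnorm for w in v]
    return [[(1.0 if i == j else 0.0) - 2 * u[i] * u[j] for j in range(n)]
            for i in range(n)]

x = [3.0, 6.0, 2.0]
H = householder_to_e1(x)
Hx = [sum(H[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Hx)  # close to (7, 0, 0)
```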

- For each of the following, find a Householder transformation

that zeroes out the last two coordinates of

the vector:

**(a) x **= (5, 1, 4, 8)*T*

**(b) x **= (4,−3,−2,−1, 2)*T*

- Let

*A *= [1 3 −2; 1 1 1; 1 −5 1; 1 −1 2]

**(a) **Determine the scalar *β *and vector **v **for the Householder matrix *H *= *I *− (1*/β*)**vv***T *that zeroes out the last three entries of **a**1.

**(b) **Without explicitly forming the matrix *H*, compute

the product *HA*.

- Let

*A *= [−1 3*/*2 1*/*2; 2 8 8; −2 −7 1] and **b **= (11*/*2, 0, 1)*T*

**(a) **Use Householder transformations to transform

*A *into an upper triangular matrix *R*. Also,

transform the vector **b**; that is, compute **c **=

*H*2*H*1**b**.

**(b) **Solve *R***x **= **c **for **x **and check your answer by

computing the residual **r **= **b **− *A***x**.

- For each of the following systems, use a Givens reflection

to transform the system to upper triangular

form and then solve the upper triangular system:

**(a) **3*x*1 + 8*x*2 = 5

4*x*1 − *x*2 = −5

**(b) ***x*1 + 4*x*2 = 5

*x*1 + 2*x*2 = 1

**(c) **4*x*1 − 4*x*2 + *x*3 = 2

*x*2 + 3*x*3 = 2

−3*x*1 + 3*x*2 − 2*x*3 = 1

- Suppose that you wish to eliminate the last coordinate

of a vector **x **and leave the first *n*−2 coordinates

unchanged. How many operations are necessary if

this is to be done by a Givens transformation *G*? A

Householder transformation *H*? If *A *is an *n*×*n *matrix,

how many operations are required to compute

*GA *and *HA*?

- Let *H**k *= *I *− 2**uu***T *be a Householder transformation

with

**u **= (0, *. . . *, 0, *u**k*, *u**k*+1, *. . . *, *u**n*)*T*

Let **b **∈ R*n *and let *A *be an *n *× *n *matrix. How

many additions and multiplications are necessary

to compute (a) *H**k***b**?; (b) *H**k**A*?

- Let *Q**T *= *G**n*−*k *· · · *G*2*G*1, where each *G**i *is a Givens

transformation. Let **b **∈ R*n *and let *A *be an *n *× *n*

matrix. How many additions and multiplications

are necessary to compute (a) *Q**T***b**; (b) *Q**T**A*?

- Let *R*1 and *R*2 be two 2 × 2 rotation matrices and

let *G*1 and *G*2 be two 2 × 2 Givens transformations.

What type of transformations are each of the

following?

**(a) ***R*1*R*2 **(b) ***G*1*G*2

**(c) ***R*1*G*1 **(d) ***G*1*R*1

- Let **x **and **y **be distinct vectors in R*n *with ‖**x**‖2 = ‖**y**‖2. Define

**u **= (1*/*‖**x **− **y**‖2)(**x **− **y**) and *Q *= *I *− 2**uu***T*

Show that

**(a) **‖**x **− **y**‖2² = 2(**x **− **y**)*T***x**

**(b) ***Q***x **= **y**

- Let **u **be a unit vector in R*n *and let

*Q *= *I *− 2**uu***T*

**(a) **Show that **u **is an eigenvector of *Q*. What is the

corresponding eigenvalue?

**(b) **Let **z **be a nonzero vector in R*n *that is orthogonal

to **u**. Show that **z **is an eigenvector of *Q*

belonging to the eigenvalue *λ *= 1.

**(c) **Show that the eigenvalue *λ *= 1 must have

multiplicity *n*−1. What is the value of det(*Q*)?

- Let *R *be an *n *× *n *plane rotation. What is the

value of det(*R*)? Show that *R *is not an elementary

orthogonal matrix.

- Let *A *= *Q*1*R*1 = *Q*2*R*2, where *Q*1 and *Q*2 are orthogonal

and *R*1 and *R*2 are both upper triangular and

nonsingular.

**(a) **Show that *Q*1*T**Q*2 is diagonal.

**(b) **How do *R*1 and *R*2 compare? Explain.

- Let *A *= **xy***T*, where **x **∈ R*m*, **y **∈ R*n*, and both **x **and **y **are nonzero vectors. Show that *A *has a singular value decomposition of the form *H*1*ΣH*2, where *H*1 and *H*2 are Householder transformations and

*σ*1 = ‖**x**‖‖**y**‖, *σ*2 = *σ*3 = · · · = *σ**n *= 0

- Let

*R *= [cos *θ *−sin *θ*; sin *θ *cos *θ*]

Show that if *θ *is not an integer multiple of *π*, then *R *can be factored into a product *R *= *ULU*, where

*U *= [1 (cos *θ *− 1)*/*sin *θ*; 0 1] and *L *= [1 0; sin *θ *1]

This type of factorization of a rotation matrix arises

in applications involving wavelets and filter bases.

SECTION 7.6 EXERCISES

- Let

*A *= [1 1; 1 1]

**(a) **Apply one iteration of the power method to *A*

with any nonzero starting vector.

**(b) **Apply one iteration of the QR algorithm

to *A*.

**(c) **Determine the exact eigenvalues of *A *by solving

the characteristic equation, and determine

the eigenspace corresponding to the largest eigenvalue.

Compare your answers with those to

parts (a) and (b).

- Let

*A *= [2 1 0; 1 3 1; 0 1 2] and **u**0 = (1, 1, 1)*T*

**(a) **Apply the power method to *A *to compute **v**1,

**u**1, **v**2, **u**2, and **v**3. (Round off to two decimal

places.)

**(b) **Determine an approximation *λ̄*1 to the largest eigenvalue of *A *from the coordinates of **v**3. Determine the exact value of *λ*1 and compare it with *λ̄*1. What is the relative error?
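The iteration in this exercise can be sketched as follows (normalizing by the largest-magnitude coordinate at each step, and running more iterations than the exercise asks for so the limit is visible):

```python
def power_method(A, u, iters):
    """Normalized power iteration; returns eigenvalue and eigenvector estimates."""
    lam = 0.0
    for _ in range(iters):
        v = [sum(a * x for a, x in zip(row, u)) for row in A]   # v = A u
        lam = max(v, key=abs)                                   # scale factor
        u = [x / lam for x in v]
    return lam, u

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
lam, u = power_method(A, [1.0, 1.0, 1.0], 25)
print(lam)  # converges to the dominant eigenvalue, 4
```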
- Let

*A *= [1 2; −1 −1] and **u**0 = (1, 1)*T*

**(a) **Compute **u**1, **u**2, **u**3, and **u**4, using the power

method.

**(b) **Explain why the power method will fail to

converge in this case.

- Let

*A *= *A*1 = [1 1; 1 3]

Compute *A*2 and *A*3, using the QR algorithm. Compute

the exact eigenvalues of *A *and compare them

with the diagonal elements of *A*3. To how many

decimal places do they agree?
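A 2 × 2 QR step needs only one Givens rotation, so the whole iteration can be sketched in a few lines (unshifted, with more steps than the exercise asks for to show convergence; the eigenvalues of *A *are 2 ± √2):

```python
import math

def qr_step_2x2(A):
    """One unshifted QR step A -> RQ using a single Givens rotation."""
    r = math.hypot(A[0][0], A[1][0])
    c, s = A[0][0] / r, A[1][0] / r
    # R = G^T A with G^T = [[c, s], [-s, c]]
    R = [[c * A[0][j] + s * A[1][j] for j in range(2)],
         [-s * A[0][j] + c * A[1][j] for j in range(2)]]
    # next iterate is R G with G = [[c, -s], [s, c]]
    return [[R[i][0] * c + R[i][1] * s, -R[i][0] * s + R[i][1] * c]
            for i in range(2)]

A = [[1.0, 1.0], [1.0, 3.0]]
for _ in range(20):
    A = qr_step_2x2(A)
print(A[0][0], A[1][1])  # approach the eigenvalues 2 + sqrt(2), 2 - sqrt(2)
```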

- Let

*A *= [5 2 2; −2 1 −2; −3 −4 2]

**(a) **Verify that *λ*1 = 4 is an eigenvalue of *A *and **y**1 = (2, −2, 1)*T *is an eigenvector belonging to *λ*1.

**(b) **Find a Householder transformation *H *such that

*HAH *is of the form

[4 × ×; 0 × ×; 0 × ×]

**(c) **Compute *HAH *and find the remaining eigenvalues

of *A*.

- Let *A *be an *n *× *n *matrix with distinct real eigenvalues

*λ*1, *λ*2, *. . . *, *λ**n*. Let *λ *be a scalar that is not

an eigenvalue of *A *and let *B *= (*A *− *λ**I*)−1. Show

that

**(a) **the scalars *μ**j *= 1*/*(*λ**j *− *λ*), *j *= 1, *. . . *, *n *are

the eigenvalues of *B*.

**(b) **if **x***j *is an eigenvector of *B *belonging to *μ**j*, then

**x***j *is an eigenvector of *A *belonging to *λ**j*.

**(c) **if the power method is applied to *B*, then the

sequence of vectors will converge to an eigenvector

of *A *belonging to the eigenvalue that is

closest to *λ*. [The convergence will be rapid if *λ*

is much closer to one *λ**i *than to any of the others.

This method of computing eigenvectors by

using powers of (*A*−*λ**I*)−1 is called the *inverse*

*power method*.]

- Let **x **= (*x*1, *. . . *, *x**n*)*T *be an eigenvector of *A *belonging to *λ*. Show that if |*x**i*| = ‖**x**‖∞, then

**(a) **Σ*n**j*=1 *a**ij**x**j *= *λ**x**i*

**(b) **|*λ *− *a**ii*| ≤ Σ*j*≠*i *|*a**ij*| (Gerschgorin’s theorem)

- Let *λ *be an eigenvalue of an *n *× *n *matrix *A*. Show that for some index *j*,

|*λ *− *a**jj*| ≤ Σ*i*≠*j *|*a**ij*| (column version of Gerschgorin’s theorem)

- Let *A *be a matrix with eigenvalues *λ*1, *. . . *, *λ**n *and let *λ *be an eigenvalue of *A *+ *E*. Let *X *be a matrix that diagonalizes *A *and let *C *= *X*−1*EX*. Prove:

**(a) **For some *i*,

|*λ *− *λ**i*| ≤ Σ*n**j*=1 |*c**ij*|

[*Hint*: *λ *is an eigenvalue of *X*−1(*A *+ *E*)*X*. Apply Gerschgorin’s theorem from Exercise 7.]

**(b) **min1≤*j*≤*n *|*λ *− *λ**j*| ≤ cond∞(*X*)‖*E*‖∞

- Let *A**k *= *Q**k**R**k*, *k *= 1, 2, *. . . *, be the sequence of

matrices derived from *A *= *A*1 by applying the QR

algorithm. For each positive integer *k*, define

*P**k *= *Q*1*Q*2 · · ·*Q**k *and *U**k *= *R**k *· · · *R*2*R*1

Show that

*P**k**A**k*+1 = *AP**k*

for all *k *≥ 1.

- Let *P**k *and *U**k *be defined as in Exercise 10. Show

that

**(a) ***P**k*+1*U**k*+1 = *P**k**A**k*+1*U**k *= *AP**k**U**k*

**(b) ***P**k**U**k *= *A**k*, and hence

(*Q*1*Q*2 · · ·*Q**k*)(*R**k *· · · *R*2*R*1)

is the QR factorization of *A**k*.

- Let *R**k *be a *k *× *k *upper triangular matrix and

suppose that

*R**k**U**k *= *U**k**D**k*

where *U**k *is an upper triangular matrix with 1’s on

the diagonal and *D**k *is a diagonal matrix. Let *R**k*+1

be an upper triangular matrix of the form

[*R**k ***b***k*; **0***T **β**k*]

where *β**k *is not an eigenvalue of *R**k*. Determine

(*k*+1)×(*k*+1) matrices *U**k*+1 and *D**k*+1 of the form

*U**k*+1 = [*U**k ***x***k*; **0***T *1] and *D**k*+1 = [*D**k ***0**; **0***T **β**k*] such that

*R**k*+1*U**k*+1 = *U**k*+1*D**k*+1

- Let *R *be an *n *× *n *upper triangular matrix whose

diagonal entries are all distinct. Let *R**k *denote the

leading principal submatrix of *R *of order *k *and set

*U*1 = (1).

**(a) **Use the result from Exercise 12 to derive an

algorithm for finding the eigenvectors of *R*.

The matrix *U *of eigenvectors should be upper

triangular with 1’s on the diagonal.

**(b) **Show that the algorithm requires approximately *n*3*/*6 floating-point multiplications/divisions.

SECTION 7.7 EXERCISES

- Find the solution
**x**to the least squares problem,

given that *A *= *QR *in each of the following:

**(a) ***Q *= [1*/*√2 1*/*√2; 1*/*√2 −1*/*√2; 0 0], *R *= [1 1; 0 1], **b **= (1, 1, 1)*T*

**(b) ***Q *= [1 0 0; 0 1*/*√2 −1*/*√2; 0 1*/*√2 1*/*√2; 0 0 0], *R *= [1 1 0; 0 1 1; 0 0 1], **b **= (1, 3, 1, 2)*T*

**(c) ***Q *= [1 0 0; 0 1*/*√2 −1*/*√2; 0 1*/*√2 1*/*√2], *R *= [1 1; 0 1; 0 0], **b **= (1, √2, −√2)*T*

**(d) ***Q *= [1*/*2 1*/*√2 0 1*/*2; 1*/*2 0 1*/*√2 −1*/*2; 1*/*2 0 −1*/*√2 −1*/*2; 1*/*2 −1*/*√2 0 1*/*2], *R *= [1 1 0; 0 1 1; 0 0 1; 0 0 0], **b **= (2, −2, 0, 2)*T*

- Let

*A *= [*D*; *E*]

where *D *= diag(*d*1, *d*2, *. . . *, *d**n*) and *E *= diag(*e*1, *e*2, *. . . *, *e**n*), and let **b **= (*b*1, *b*2, *. . . *, *b*2*n*)*T*.

Use the normal equations to find the solution **x **to the least squares problem.

- Let

*A *= [1 0; 1 3; 1 3; 1 0] and **b **= (−4, 2, 2, 2)*T*

**(a) **Use Householder transformations to reduce *A *to the form

[*R*1; *O*] = [× ×; 0 ×; 0 0; 0 0]

and apply the same transformations to **b**.

**(b) **Use the results from part (a) to find the least

squares solution of *A***x **= **b**.

- Given

*A *= [1 5; 1 3; 1 11; 1 5] and **b **= (1, −1, 3, 5)*T*

**(a) **Use Algorithm 5.6.1 to compute the factors

*Q *and *R *of the modified Gram–Schmidt QR

factorization of *A*.

**(b) **Use Algorithm 7.7.1 to compute the least

squares solution to the linear system *A***x **= **b**.

- Let

*A *= [1 1; *ρ *0; 0 *ρ*]

where *ρ *is a small scalar.

**(a) **Determine the singular values of *A *exactly.

**(b) **Suppose that *ρ *is small enough so that

*ρ*2 is less than the machine epsilon. Determine

the eigenvalues of the calculated

*A**T**A *and compare the square roots of

these eigenvalues with your answers in

part (a).

- Show that the pseudoinverse *A*+ satisfies the four

Penrose conditions.

- Let *B *be any matrix that satisfies Penrose conditions

1 and 3, and let **x **= *B***b**. Show

that **x **is a solution to the normal equations

*A**T**A***x **= *A**T***b**.

- If **x **∈ R*m*, we can think of **x **as an *m *× 1 matrix. If **x **≠ **0**, we can then define a 1 × *m *matrix *X *by

*X *= (1*/*‖**x**‖2²)**x***T*

Show that *X *and **x **satisfy the four Penrose conditions and, consequently, that

**x**+ = *X *= (1*/*‖**x**‖2²)**x***T*

- Show that if *A *is an *m *× *n *matrix of rank *n*, then

*A*+ = (*A**T**A*)−1*A**T *.
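The full-column-rank formula is easy to verify numerically on a small example (a sketch for a 3 × 2 matrix, so the Gram matrix is 2 × 2 and can be inverted by the adjugate formula; the Penrose check at the end is the hypothetical test case, not part of the exercise):

```python
def matmul(A, B):
    """Plain triple-loop matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def pinv_full_rank(A):
    """A+ = (A^T A)^{-1} A^T for a matrix of full column rank 2."""
    At = transpose(A)
    G = matmul(At, A)                       # 2x2 Gram matrix
    (a, b), (c, d) = G
    det = a * d - b * c
    Ginv = [[d / det, -b / det], [-c / det, a / det]]
    return matmul(Ginv, At)

A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
Ap = pinv_full_rank(A)
AAA = matmul(matmul(A, Ap), A)   # first Penrose condition: A A+ A = A
print(Ap)
```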

- Let *A *be an *m *× *n *matrix and let **b **∈ R*m*. Show that

**b **∈ *R*(*A*) if and only if

**b **= *AA*+**b**

- Let *A *be an *m *× *n *matrix with singular value decomposition *UΣV**T *, and suppose that *A *has rank *r*, where *r **< **n*. Let **b **∈ R*m*. Show that a vector **x **∈ R*n *minimizes ‖**b **− *A***x**‖2 if and only if

**x **= *A*+**b **+ *c**r*+1**v***r*+1 + · · · + *c**n***v***n*

where *c**r*+1, *. . . *, *c**n *are scalars.

- Let

*A *= [1 1; 1 1; 0 0]

Determine *A*+ and verify that *A *and *A*+ satisfy

the four Penrose conditions (see Example 1 of

Section 6.5).

- Let

*A *= [1 2; −1 −2] and **b **= (6, −4)*T*

**(a) **Compute the singular value decomposition of

*A* and use it to determine *A*⁺.

**(b)** Use *A*⁺ to find a least squares solution to the

system *A***x **= **b**.

**(c) **Find all solutions to the least squares problem

*A***x **= **b**.

- Show each of the following:

**(a)** (*A*⁺)⁺ = *A* **(b)** (*AA*⁺)² = *AA*⁺

**(c)** (*A*⁺*A*)² = *A*⁺*A*

- Let *A*₁ = *U*Σ₁*V*ᵀ and *A*₂ = *U*Σ₂*V*ᵀ, where

Σ₁ = diag(σ₁, *. . .* , σᵣ₋₁, 0, 0, *. . .* , 0)

and

Σ₂ = diag(σ₁, *. . .* , σᵣ₋₁, σᵣ, 0, *. . .* , 0)

and σᵣ = *ρ* > 0. What are the values of ‖*A*₁ − *A*₂‖*F* and ‖*A*₁⁺ − *A*₂⁺‖*F*? What happens to these values as we let *ρ* → 0?

- Let *A* = *XY*ᵀ, where *X* is an *m* × *r* matrix, *Y*ᵀ is an *r* × *n* matrix, and *X*ᵀ*X* and *Y*ᵀ*Y* are both nonsingular. Show that the matrix

*B* = *Y*(*Y*ᵀ*Y*)⁻¹(*X*ᵀ*X*)⁻¹*X*ᵀ

satisfies the Penrose conditions and hence must equal *A*⁺. Thus *A*⁺ can be determined from any factorization of this form.

**MATLAB EXERCISES**

Sensitivity of Linear Systems

*In these exercises, we are concerned with the numerical*

*solution of linear systems of equations. The entries of*

*the coefficient matrix A and the right-hand side ***b ***may*

*often contain small errors due to limitations in the accuracy*

*of the data. Even if there are no errors in either A*

*or ***b***, rounding errors will occur when their entries are*

*translated into the finite-precision number system of the*

*computer. Thus, we generally expect that the coefficient*

*matrix and the right-hand side will involve small errors.*

*The system that the computer solves is then a slightly*

*perturbed version of the original system. If the original*

*system is very sensitive, its solution could differ greatly*

*from the solution of the perturbed system.*

*Generally, a problem is well conditioned if the perturbations*

*in the solutions are on the same order as the*

*perturbations in the data. A problem is ill conditioned if*

*the changes in the solutions are much greater than the*

*changes in the data. How well or ill conditioned a problem*

*is depends on how the size of the perturbations in*

*the solution compares with the size of the perturbations*

*in the data. For linear systems, this, in turn, depends on*

*how close the coefficient matrix is to a matrix of lower*

*rank. The conditioning of a system can be measured using*

*the condition number of the matrix, which can be*

*computed with the MATLAB function ***cond***. MATLAB*

*computations are carried out to 16 significant digits of*

*accuracy. You will lose digits of accuracy depending on*

*how sensitive the system is. If the condition number is*

*expressed using exponential notation, then the greater*

*the exponent, the more digits of accuracy you may lose.*

- Set

*A *= **round**(10 ∗ **rand**(6))

**s **= **ones**(6, 1)

**b **= *A *∗ **s**

The solution of the linear system *A***x **= **b **is clearly

**s**. Solve the system using the MATLAB "\" operation. Compute the error **x** − **s**. (Since **s** consists entirely

of 1’s, this is the same as **x **− **1**.) Now perturb the

system slightly. Set

*t *= 1.0e−12,

*E *= **rand**(6) − 0.5,

**r **= **rand**(6, 1) − 0.5

and set

*M *= *A *+ *t *∗ *E*, **c **= **b **+ *t *∗ **r**

Solve the perturbed system *M***z **= **c **for **z**. Compare

the solution **z **to the solution of the original system

by computing **z **− **1**. How does the size of the perturbation

in the solution compare with the size of

the perturbations in *A *and **b**? Repeat the perturbation

analysis with *t *= 1.0e−04 and *t *= 1.0e−02. Is

the system *A***x **= **b **well conditioned? Explain. Use

MATLAB to compute the condition number of *A*.
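The experiment above can be sketched in NumPy (standing in for MATLAB; the random seed is an added assumption so the run is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)                  # seed is an assumption
A = np.round(10 * rng.random((6, 6)))           # round(10*rand(6))
s = np.ones(6)                                  # ones(6,1)
b = A @ s
x = np.linalg.solve(A, b)                       # plays the role of A\b
print("max|x - s| =", np.abs(x - s).max())

# Perturb A and b by t*E and t*r for several sizes of t.
for t in (1.0e-12, 1.0e-04, 1.0e-02):
    E = rng.random((6, 6)) - 0.5
    r = rng.random(6) - 0.5
    z = np.linalg.solve(A + t * E, b + t * r)   # perturbed system Mz = c
    print(f"t = {t:.0e}: max|z - 1| = {np.abs(z - 1).max():.2e}")

print("cond(A) =", np.linalg.cond(A))
```

Comparing the growth of `max|z - 1|` with `t` against `cond(A)` is the point of the exercise.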

- If a vector **y** ∈ Rⁿ is used to construct an *n* × *n*

Vandermonde matrix *V*, then *V *will be nonsingular,

provided that *y*1, *y*2, *. . . *, *y**n *are all distinct.

**(a) **Construct a Vandermonde system by setting

**y **= **rand**(6, 1) and *V *= **vander**(**y**)

Generate vectors **b **and **s **in R6 by setting

**b** = **sum**(*V*')' and **s** = **ones**(6, 1)

If *V *and **b **had been computed in exact arithmetic,

then the exact solution of *V***x **= **b**

would be **s**. Why? Explain. Solve *V***x **= **b**

using the "\" operation. Compare the computed

solution **x **with the exact solution **s **using the

MATLAB **format long**. How many significant

digits were lost? Determine the condition

number of *V*.

**(b) **The Vandermonde matrices become increasingly

ill conditioned as the dimension *n *increases.

Even for small values of *n *we can make

the matrix ill conditioned by taking two of the

points close together. Set

*y*(2) = *y*(1) + 1.0e−12

and use the new value of *y*(2) to recompute *V*. For the new matrix *V*, set **b** = **sum**(*V*')' and

solve the system *V***z **= **b**. How many digits

of accuracy were lost? Compute the condition

number of *V*.
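Parts (a) and (b) can be sketched with NumPy's `vander` (the seed is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)                  # seed is an assumption
y = rng.random(6)
V = np.vander(y)                                # analogue of vander(y)
b = V.sum(axis=1)                               # row sums, i.e. sum(V')'
x = np.linalg.solve(V, b)                       # exact solution is ones(6,1)
print("cond(V) =", np.linalg.cond(V))
print("max|x - 1| =", np.abs(x - 1).max())

# Part (b): push two points together so V is nearly rank deficient.
y[1] = y[0] + 1.0e-12
V = np.vander(y)
z = np.linalg.solve(V, V.sum(axis=1))
print("cond(V) after perturbation =", np.linalg.cond(V))
print("max|z - 1| =", np.abs(z - 1).max())
```

Because **b** is the vector of row sums of *V*, the exact solution of *V***x** = **b** is the all-ones vector, which is why the error `max|x - 1|` measures the digits lost.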

- Construct a matrix *C* as follows: Set

*A *= **round**(100 ∗ **rand**(4))

*L *= **tril**(*A*,−1) + **eye**(4)

*C* = *L* ∗ *L*'

**(a) **The matrix *C *is a nice matrix in that it is a

symmetric matrix with integer entries and its determinant

is equal to 1. Use MATLAB to verify

these claims. Why do we know ahead of time

that the determinant will equal 1? In theory,

the entries of the exact inverse should all be

integers. Why? Explain. Does this happen computationally?

Compute *D *= **inv**(*C*) and check

its entries using **format long**. Compute *C *∗

*D *and compare it with **eye**(4).

**(b) **Set

**r** = **ones**(4, 1) and **b** = **sum**(*C*')'

In exact arithmetic the solution to the system

*C***x** = **b** should be **r**. Compute the solution by using the "\" operation and display the answer in **format long**. How many digits of accuracy were lost? We can perturb the system slightly by

taking *e *to be a small scalar, such as 1.0e−12,

and then replacing the right-hand side of the

system by

**b1** = **b** + *e* ∗ [1, −1, 1, −1]'

Solve the perturbed system first for the case

*e* = 1.0e−12 and then for the case *e* = 1.0e−06.

In each case, compare your solution **x **with the

original solution by displaying **x **− **1**. Compute

**cond**(*C*). Is *C *ill conditioned? Explain.
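The construction above can be reproduced in NumPy (seed chosen arbitrarily). Since *L* is unit lower triangular, det(*C*) = det(*L*)² = 1 in exact arithmetic, even though the computed determinant and inverse may drift:

```python
import numpy as np

rng = np.random.default_rng(2)                  # seed is an assumption
A = np.round(100 * rng.random((4, 4)))
L = np.tril(A, -1) + np.eye(4)                  # unit lower triangular
C = L @ L.T                                     # symmetric by construction

assert np.allclose(C, C.T)                      # C is symmetric
print("det(C) =", np.linalg.det(C))             # exactly 1 in exact arithmetic
D = np.linalg.inv(C)                            # analogue of inv(C)
print("C @ D =\n", C @ D)                       # compare with eye(4)
print("cond(C) =", np.linalg.cond(C))
```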

- The *n* × *n* Hilbert matrix *H* is defined by

*h*(*i*, *j*) = 1/(*i* + *j* − 1), *i*, *j* = 1, 2, *. . .* , *n*

It can be generated with the MATLAB function

**hilb**. The Hilbert matrix is notoriously ill conditioned.

It is often used in examples to illustrate

the dangers of matrix computations. The MATLAB

function **invhilb **gives the exact inverse of the

Hilbert matrix. For the cases *n *= 6, 8, 10, 12, construct

*H *and **b **so that *H***x **= **b **is a Hilbert

system whose solution in exact arithmetic should be

**ones**(*n*, 1). In each case, determine the solution **x **of

the system by using **invhilb **and examine **x **with

**format long**. How many digits of accuracy were

lost in each case? Compute the condition number of

each Hilbert matrix. How does the condition number

change as *n *increases?
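A NumPy sketch of the Hilbert experiment (the matrix is built directly rather than with MATLAB's **hilb**; the `hilb` helper below is an assumed name):

```python
import numpy as np

def hilb(n):
    # h(i,j) = 1/(i+j-1) with 1-based indices; np.indices is 0-based.
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

for n in (6, 8, 10, 12):
    H = hilb(n)
    b = H @ np.ones(n)                          # exact solution is ones(n,1)
    x = np.linalg.solve(H, b)
    print(f"n = {n:2d}: cond = {np.linalg.cond(H):.2e}, "
          f"max|x - 1| = {np.abs(x - 1).max():.2e}")
```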

Sensitivity of Eigenvalues

If *A *is an *n *× *n *matrix and *X *is a matrix that diagonalizes

*A*, then the sensitivity of the eigenvalues of *A*

depends on the condition number of *X*. If *A *is defective,

the condition number for the eigenvalue problem will

be infinite. For more on the sensitivity of eigenvalues,

see Wilkinson [36], Chapter 2.

- Use MATLAB to compute the eigenvalues and eigenvectors

of a random 6×6 matrix *B*. Compute the

condition number of the matrix of eigenvectors. Is

the eigenvalue problem well conditioned? Perturb *B*

slightly by setting

*B*1 = *B* + 1.0e−04 ∗ **rand**(6)

Compute the eigenvalues and compare them with the

eigenvalues of *B*.

- Set

*A* = **round**(10 ∗ **rand**(5)); *A* = *A* + *A*'

[*X*,*D*] = **eig**(*A*)

Compute **cond**(*X*) and *X*ᵀ*X*. What type of matrix

is *X*? Is the eigenvalue problem well conditioned?

Explain. Perturb *A *by setting

*A*1 = *A *+ 1.0e−06 ∗ **rand**(5)

Calculate the eigenvalues of *A*1 and compare them

with the eigenvalues of *A*.
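A NumPy sketch of this experiment (seed is an assumption; `eigh` is used because it returns an orthonormal eigenvector matrix for a symmetric input, which is what makes cond(*X*) = 1):

```python
import numpy as np

rng = np.random.default_rng(3)                  # seed is an assumption
A = np.round(10 * rng.random((5, 5)))
A = A + A.T                                     # A = A + A'
w, X = np.linalg.eigh(A)                        # orthonormal eigenvectors
print("cond(X) =", np.linalg.cond(X))           # 1 for an orthogonal matrix
print("X^T X close to I:", np.allclose(X.T @ X, np.eye(5)))

A1 = A + 1.0e-06 * rng.random((5, 5))           # small nonsymmetric perturbation
w1 = np.sort(np.linalg.eigvals(A1).real)
print("max eigenvalue change:", np.abs(w1 - np.sort(w)).max())
```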

- Set *A* = **magic**(4) and *t* = **trace**(*A*). The scalar

*t *should be an eigenvalue of *A *and the remaining eigenvalues

should add up to zero. Why? Explain. Use

MATLAB to verify that *A *− *tI *is singular. Compute

the eigenvalues of *A *and a matrix *X *of eigenvectors.

Determine the condition numbers of *A *and *X*. Is

the eigenvalue problem well conditioned? Explain.

Perturb *A *by setting

*A*1 = *A *+ 1.0e−04 ∗ **rand**(4)

How do the eigenvalues of *A*1 compare to those

of *A*?

- Set

*A *= **diag**(10 : −1 : 1) + 10 ∗ **diag**(**ones**(1, 9), 1)

[*X*,*D*] = **eig**(*A*)

Compute the condition number of *X*. Is the eigenvalue

problem well conditioned? Ill conditioned?

Explain. Perturb *A *by setting

*A*1 = *A*; *A*1(10, 1) = 0.1

Compute the eigenvalues of *A*1 and compare them to

the eigenvalues of *A*.

- Construct a matrix *A* as follows:

*A *= **diag**(11 : −1 : 1,−1);

for *j *= 0 : 11

*A *= *A *+ **diag**(12 − *j *: −1 : 1, *j*);

end

**(a) **Compute the eigenvalues of *A *and the value

of the determinant of *A*. Use the MATLAB

function **prod **to compute the product of the

eigenvalues. How does the value of the product

compare with the determinant?

**(b) **Compute the eigenvectors of *A *and the condition

number for the eigenvalue problem. Is

the problem well-conditioned? Ill-conditioned?

Explain.

**(c) **Set

*A*1 = *A *+ 1.0e−04 ∗ **rand**(**size**(*A*))

Compute the eigenvalues of *A*1. Compare them

to the eigenvalues of *A *by computing

**sort**(**eig**(*A*1)) − **sort**(**eig**(*A*))

and displaying the result in **format long**.

Householder Transformations

A Householder matrix is an *n* × *n* orthogonal matrix of the form *H* = *I* − (1/*b*)**vv**ᵀ. For any given nonzero vector **x** ∈ Rⁿ, it is possible to choose *b* and **v** so that *H***x** will be a multiple of **e**₁.

**(a)** In MATLAB the simplest way to compute a

Householder matrix that zeroes out entries of a

given vector **x** is to compute the QR factorization

of **x**. Thus, if we are given a vector **x **∈ R*n*,

then the MATLAB command

[*H*, *R*] = **qr**(**x**)

will compute the desired Householder matrix

*H*. Compute a Householder matrix*H*that zeroes

out the last three entries of **e **= **ones**(4, 1).

Set

*C *= [**e**, **rand**(4, 3)]

Compute *H *∗ **e **and *H *∗ *C*.

**(b) **We can also compute the vector **v **and the

scalar *b *that determine the Householder transformation

that zeroes out entries of a given

vector. To do this for a given vector **x**, we

would set

*a *= ((*x*(1) *<*= 0) − (*x*(1) *> *0)) ∗ **norm**(**x**);

**v **= **x**; *v*(1) = *v*(1) − *a*

*b *= *a *∗ (*a *− *x*(1))

Construct **v **and *b *in this way for the vector **e**

from part (a). If *K* = *I* − (1/*b*)**vv**ᵀ, then

*K***e** = **e** − (**v**ᵀ**e**/*b*)**v**

Compute both of these quantities with MATLAB

and verify that they are equal. How does

*K***e **compare to *H***e **from part (a)? Compute also

*K* ∗ *C* and *C* − **v** ∗ ((**v**' ∗ *C*)/*b*) and verify that

the two are equal.
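The recipe in part (b) can be sketched in NumPy for **e** = ones(4, 1) (`beta` plays the role of the scalar *b* above):

```python
import numpy as np

# a = ((x(1) <= 0) - (x(1) > 0)) * norm(x): minus the norm when x(1) > 0.
e = np.ones(4)
a = -np.linalg.norm(e) if e[0] > 0 else np.linalg.norm(e)
v = e.copy()
v[0] -= a                                       # v(1) = v(1) - a
beta = a * (a - e[0])                           # b = a*(a - x(1))

K = np.eye(4) - np.outer(v, v) / beta           # K = I - (1/b) v v^T
Ke = K @ e
print("K e =", np.round(Ke, 12))                # a multiple of e1

# The cheap update K e = e - (v'e/b) v matches the explicit product,
# and K is orthogonal.
assert np.allclose(Ke, e - (v @ e / beta) * v)
assert np.allclose(K @ K.T, np.eye(4))
```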

- Set

**x1** = (1 : 5)'; **x2** = [1, 3, 4, 5, 9]'; **x** = [**x1**; **x2**]

Construct a Householder matrix of the form

*H* = [*I* *O*; *O* *K*]

where *K *is a 5 × 5 Householder matrix that zeroes

out the last four entries of **x2**. Compute the product

*H***x**.

Rotations and Reflections

- To plot *y* = sin(*x*), we must define vectors of *x* and

*y *values and then use the **plot **command. This can

be done as follows:

**x **= 0 : 0.1 : 6.3; **y **= sin(**x**);

**plot**(**x**, **y**)

**(a) **Let us define a rotation matrix and use it to

rotate the graph of *y *= sin(*x*). Set

*t *= **pi ***/*4; *c *= cos(*t*); *s *= sin(*t*);

*R *= [*c*,−*s*; *s*, *c*]

To find the rotated coordinates, set

*Z *= *R *∗ [**x**; **y**]; **x1 **= *Z*(1, :); **y1 **= *Z*(2, :);

The vectors **x1 **and **y1 **contain the coordinates

for the rotated curve. Set

**w **= [0, 5]; **axis square**

and plot **x1 **and **y1**, using the MATLAB command

**plot**(**x1**, **y1**,**w**,**w**)

By what angle has the graph been rotated and

in what direction?

**(b) **Keep all your variables from part (a) and set

*G *= [*c*, *s*; *s*,−*c*]

The matrix *G *represents a Givens reflection. To

determine the reflected coordinates, set

*Z *= *G *∗ [**x**; **y**];

**x2 **= *Z*(1, :); **y2 **= *Z*(2, :);

Plot the reflected curve, using the MATLAB

command

**plot**(**x2**, **y2**,**w**,**w**)

The curve *y *= sin(*x*) has been reflected about a

line through the origin making an angle of *π/*8

with the *x*-axis. To see this, set

**w1** = [0, 6.3 ∗ cos(*t*/2)];
**z1** = [0, 6.3 ∗ sin(*t*/2)];

and plot the new axis and both curves with the

MATLAB command

**plot**(**x**, **y**, **x2**, **y2**, **w1**, **z1**)

**(c) **Use the rotation matrix *R *from part (a) to rotate

the curve *y *= −sin(*x*). Plot the rotated

curve. How does the graph compare to that of

the curve from part (b)? Explain.

Singular Value Decomposition

- Let

*A* = [4 5 2; 4 5 2; 0 3 6; 0 3 6]

Enter the matrix *A *in MATLAB and compute its

singular values by setting **s **= **svd**(*A*).

**(a)** How can the entries of **s** be used to determine the values ‖*A*‖₂ and ‖*A*‖*F*? Compute these

norms by setting

*p *= **norm**(*A*) and *q *= **norm**(*A*, **‘fro’**)

and compare your results with *s*(1) and

**norm**(**s**).

**(b) **To obtain the full singular value decomposition

of *A*, set

[*U*, *D*, *V *] = **svd**(*A*)

Compute the closest matrix of rank 1 to *A *by

setting

*B* = *s*(1) ∗ *U*(:, 1) ∗ *V*(:, 1)'

How are the row vectors of *B *related to the two

distinct row vectors of *A*?

**(c) **The matrices *A *and *B *should have the same

2-norm. Why? Explain. Use MATLAB to compute ‖*B*‖₂ and ‖*B*‖*F*. In general, for a rank

1 matrix the 2-norm and the Frobenius norm

should be equal. Why? Explain.
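A NumPy sketch of this exercise, using `np.linalg.svd` and `np.linalg.norm` as the analogues of **svd** and **norm**:

```python
import numpy as np

A = np.array([[4., 5., 2.],
              [4., 5., 2.],
              [0., 3., 6.],
              [0., 3., 6.]])
s = np.linalg.svd(A, compute_uv=False)
# ||A||_2 = s(1) and ||A||_F = norm(s):
assert np.isclose(np.linalg.norm(A, 2), s[0])
assert np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(s))

U, sv, Vt = np.linalg.svd(A)
B = sv[0] * np.outer(U[:, 0], Vt[0])            # closest rank-1 matrix to A
print("B =\n", np.round(B, 4))
# A rank-1 matrix has a single nonzero singular value, so its
# 2-norm and Frobenius norm agree.
assert np.isclose(np.linalg.norm(B, 2), np.linalg.norm(B, 'fro'))
```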

- Set

*A *= **round**(10 ∗ **rand**(10, 5)) and **s **= **svd**(*A*)

**(a)** Use MATLAB to compute ‖*A*‖₂, ‖*A*‖*F*, and cond₂(*A*), and compare your results with *s*(1), **norm**(**s**), and *s*(1)/*s*(5), respectively.

**(b) **Set

[*U*, *D*, *V *] = **svd**(*A*);

*D*(5, 5) = 0;

*B* = *U* ∗ *D* ∗ *V*'

The matrix *B *should be the closest matrix

of rank 4 to *A *(where distance is measured

in terms of the Frobenius norm). Compute ‖*A*‖₂ and ‖*B*‖₂. How do these values compare? Compute and compare the Frobenius norms of the two matrices. Compute also ‖*A* − *B*‖*F* and compare the result with *s*(5). Set *r* = **norm**(*s*(1 : 4)) and compare the result to ‖*B*‖*F*.

**(c) **Use MATLAB to construct a matrix *C *that is

the closest matrix of rank 3 to *A *with respect to

the Frobenius norm. Compute ‖*C*‖₂ and ‖*C*‖*F*. How do these values compare with the computed values for ‖*A*‖₂ and ‖*A*‖*F*, respectively? Set

*p* = **norm**(**s**(1 : 3)) and *q* = **norm**(**s**(4 : 5))

Compute ‖*C*‖*F* and ‖*A* − *C*‖*F* and compare your results with *p* and *q*, respectively.

- Set

*A *= **rand**(8, 4) ∗ **rand**(4, 6),

[*U*, *D*, *V *] = **svd**(*A*)

**(a) **What is the rank of *A*? Use the column vectors

of *V *to generate two matrices *V*1 and

*V*2 whose columns form orthonormal bases for

*R*(*A**T*) and *N*(*A*), respectively. Set

*P* = *V*2 ∗ *V*2',

**r **= *P *∗ **rand**(6, 1),

**w** = *A*' ∗ **rand**(8, 1)

If **r **and **w **had been computed in exact arithmetic,

they would be orthogonal. Why? Explain.

Use MATLAB to compute **r**ᵀ**w**.

**(b) **Use the column vectors of *U *to generate two

matrices *U*1 and *U*2 whose column vectors

form orthonormal bases for *R*(*A*) and *N*(*A**T *),

respectively. Set

*Q* = *U*2 ∗ *U*2',

**y **= *Q *∗ **rand**(8, 1),

**z **= *A *∗ **rand**(6, 1)

Explain why **y **and **z **would be orthogonal if

all computations were done in exact arithmetic.

Use MATLAB to compute **y**ᵀ**z**.

**(c) **Set *X *= **pinv**(*A*). Use MATLAB to verify the

four Penrose conditions

**(i)** *AXA* = *A* **(ii)** *XAX* = *X*

**(iii)** (*AX*)ᵀ = *AX* **(iv)** (*XA*)ᵀ = *XA*

**(d)** Compute and compare *AX* and *U*1(*U*1)ᵀ. Had

all computations been done in exact arithmetic,

the two matrices would be equal. Why?

Explain.

Gerschgorin Circles

- With each *A* ∈ Rⁿ×ⁿ we can associate *n* closed circular disks in the complex plane. The *i*th disk is centered at *a*ᵢᵢ and has radius

*r*ᵢ = Σⱼ≠ᵢ |*a*ᵢⱼ|

Each eigenvalue of *A* is contained in at least one of the disks (see Exercise 7 of Section 7.6).

**(a) **Set

*A *= **round**(10 ∗ **rand**(5))

Compute the radii of the Gerschgorin disks of

*A *and store them in a vector **r**. To plot the

disks, we must parameterize the circles. This

can be done by setting

*t *= [0 : 0.1 : 6.3]_ ;

We can then generate two matrices *X *and *Y*

whose columns contain the *x *and *y *coordinates

of the circles. First we initialize *X *and *Y *to zero

by setting

*X *= **zeros**(**length**(*t*), 5); *Y *= *X*;

The matrices can then be generated with the

following commands:

for *i *= 1 : 5

*X*(:, *i*) = *r*(*i*) ∗ cos(*t*) + **real**(*A*(*i*, *i*));

*Y*(:, *i*) = *r*(*i*) ∗ sin(*t*) + **imag**(*A*(*i*, *i*));

end

Set **e **= **eig**(*A*) and plot the eigenvalues

and the disks with the command

**plot**(*X*, *Y*, **real**(**e**), **imag**(**e**), ‘*x*’)

If everything is done correctly, all the eigenvalues

of *A *should lie within the union of the

circular disks.
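The numerical part of (a), minus the plotting, can be sketched in NumPy (seed is an assumption); the loop checks the Gerschgorin containment directly:

```python
import numpy as np

rng = np.random.default_rng(4)                  # seed is an assumption
A = np.round(10 * rng.random((5, 5)))
r = np.abs(A).sum(axis=1) - np.abs(np.diag(A))  # off-diagonal row sums
e = np.linalg.eigvals(A)

# Gerschgorin: every eigenvalue lies in some disk |z - a_ii| <= r_i.
for lam in e:
    assert (np.abs(lam - np.diag(A)) <= r + 1e-9).any()
print("radii:", r)
print("eigenvalues:", np.round(e, 4))
```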

**(b) **If *k *of the Gerschgorin disks form a connected

domain in the complex plane that is isolated

from the other disks, then exactly *k *of the eigenvalues

of the matrix will lie in that domain.

Set

*B* = [3 0.1 2; 0.1 7 2; 2 2 50];

**(i) **Use the method described in part (a) to

compute and plot the Gerschgorin disks

of *B*.

**(ii) **Since *B *is symmetric, its eigenvalues are

all real and so must all lie on the real

axis. Without computing the eigenvalues,

explain why *B *must have exactly one eigenvalue

in the interval [46, 54]. Multiply

the first two rows of *B *by 0.1 and then

multiply the first two columns by 10. We

can do this in MATLAB by setting

*D *= **diag**([0.1, 0.1, 1])

and

*C *= *D *∗ *B**/**D*

The new matrix *C *should have the same

eigenvalues as *B*. Why? Explain. Use

*C *to find intervals containing the other

two eigenvalues. Compute and plot the

Gerschgorin disks for *C*.

**(iii) **How are the eigenvalues of *C**T *related to

the eigenvalues of *B *and *C*? Compute and

plot the Gerschgorin disks for *C**T*. Use

one of the rows of *C**T *to find an interval

containing the largest eigenvalue of *C**T *.

Distribution of Condition Numbers and

Eigenvalues of Random Matrices

- We can generate a random symmetric 10 × 10

matrix by setting

*A* = **rand**(10); *A* = (*A* + *A*')/2

Since *A *is symmetric, its eigenvalues are all real.

The number of positive eigenvalues can be calculated

by setting

*y *= **sum**(**eig**(*A*) *> *0)

**(a) **For *j *= 1, 2, *. . . *, 100, generate a random symmetric

10 × 10 matrix and determine the number

of positive eigenvalues. Denote the number

of positive eigenvalues of the *j*th matrix by *y*(*j*).

Set **x **= 0 : 10, and determine the distribution

of the **y **data by setting **n **= **hist**(**y**, **x**). Determine

the mean of the *y*(*j*) values, using the

MATLAB command **mean**(**y**). Use the MATLAB

command **hist**(**y**, **x**) to generate a plot

of the histogram.

**(b) **We can generate a random symmetric 10 × 10

matrix whose entries are in the interval [−1, 1]

by setting

*A* = 2 ∗ **rand**(10) − 1; *A* = (*A* + *A*')/2

Repeat part (a), using random matrices generated

in this manner. How does the distribution

of the **y **data compare to the one obtained in

part (a)?
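Part (a) can be sketched in NumPy (seed is an assumption; `np.mean` and `np.histogram` stand in for **mean** and **hist**):

```python
import numpy as np

rng = np.random.default_rng(5)                  # seed is an assumption
y = []
for _ in range(100):
    A = rng.random((10, 10))
    A = (A + A.T) / 2                           # random symmetric matrix
    y.append(int((np.linalg.eigvalsh(A) > 0).sum()))

print("mean number of positive eigenvalues:", np.mean(y))
counts, _ = np.histogram(y, bins=np.arange(12) - 0.5)  # bins centered 0..10
print("distribution over 0..10:", counts)
```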

- A nonsymmetric matrix *A* may have complex eigenvalues.

We can determine the number of eigenvalues

of *A *that are both real and positive with the

MATLAB commands

**e **= **eig**(*A*)

*y* = **sum**(**e** *>* 0 & **imag**(**e**) == 0)

Generate 100 random nonsymmetric 10 × 10

matrices. For each matrix, determine the number

of positive real eigenvalues and store that number

as an entry of a vector **z**. Determine the mean

of the *z*(*j*) values, and compare it with the mean

computed in part (a) of Exercise 17. Determine the

distribution and plot the histogram.

- **(a)** Generate 100 random 5 × 5 matrices and compute

the condition number of each matrix.

Determine the mean of the condition numbers

and plot the histogram of the distribution.

**(b) **Repeat part (a), using 10 × 10 matrices.

Compare your results with those obtained in

part (a).

**CHAPTER TEST A ****True or False**

In each of the statements that follow, answer *true*

if the statement is always true and *false *otherwise.

In the case of a true statement, explain or prove

your answer. In the case of a false statement, give

an example to show that the statement is not always

true.

- If *a*, *b*, and *c* are floating-point numbers, then

*fl*(*fl*(*a* + *b*) + *c*) = *fl*(*a* + *fl*(*b* + *c*))

- The computation of *A*(*BC*) requires the same number

of floating-point operations as the computation

of (*AB*)*C*.

- If *A* is a nonsingular matrix and a numerically

stable algorithm is used to compute the solution

of a system *A***x **= **b**, then the relative error in the

computed solution will always be small.

- If *A* is a symmetric matrix and a numerically stable

algorithm is used to compute the eigenvalues of *A*,

then the relative error in the computed eigenvalues

should always be small.

- If *A* is a nonsymmetric matrix and a numerically

stable algorithm is used to compute the eigenvalues

of *A*, then the relative error in the computed

eigenvalues should always be small.

- If both *A*⁻¹ and the *LU* factorization of an *n* × *n* matrix

*A *have already been computed, then it is more

efficient to solve a system *A***x **= **b **by multiplying

*A*−1**b**, rather than solving *LU***x **= **b **by forward and

back substitution.

- If *A* is a symmetric matrix, then ‖*A*‖₁ = ‖*A*‖∞.

- If *A* is an *m* × *n* matrix, then ‖*A*‖₂ = ‖*A*‖*F*.

- If the coefficient matrix *A* in a least squares problem has dimensions *m* × *n* and rank *n*, then

the methods of solution discussed in Section 7.7,

namely, the normal equations, the Gram-Schmidt

and Householder QR factorizations, and the singular

value decomposition, will all compute highly

accurate solutions.

- If two *m* × *n* matrices *A* and *B* are close in the sense that ‖*A* − *B*‖₂ < *e* for some small positive number *e*, then their pseudoinverses will also be close; that is, ‖*A*⁺ − *B*⁺‖₂ < *δ* for some small positive number *δ*.

**CHAPTER TEST B**

- Let *A* and *B* be *n* × *n* matrices and let **x** be a vector in

R*n*. How many scalar additions and multiplications

are required to compute (*AB*)**x **and how many are

necessary to compute *A*(*B***x**)? Which computation is

more efficient?

- Let

*A* = [2 3 6; 4 4 8; 1 3 4], **b** = [3; 0; 4], **c** = [1; 8; 2]

**(a) **Use Gaussian elimination with partial pivoting

to solve *A***x **= **b**.

**(b) **Write the permutation matrix *P *that corresponds

to the pivoting strategy in part (a) and determine

the *LU *factorization of *PA*.

**(c) **Use *P*, *L*, and *U *to solve the system *A***x **= **c**.

- Show that if *Q* is any 4 × 4 orthogonal matrix, then ‖*Q*‖₂ = 1 and ‖*Q*‖*F* = 2.

- Let

*H* = [1 1/2 1/3 1/4; 1/2 1/3 1/4 1/5; 1/3 1/4 1/5 1/6; 1/4 1/5 1/6 1/7],

*H*⁻¹ = [16 −120 240 −140; −120 1200 −2700 1680; 240 −2700 6480 −4200; −140 1680 −4200 2800]

and **b** = (10, −10, 20, 10)ᵀ.

**(a)** Determine the values of ‖*H*‖₁ and ‖*H*⁻¹‖₁.

**(b)** When the system *H***x** = **b** is solved using MATLAB and the computed solution **x̃** is used to compute a residual vector **r** = **b** − *H***x̃**, it turns out that ‖**r**‖₁ = 0.36 × 10⁻¹¹. Use this information to determine a bound on the relative error

‖**x** − **x̃**‖₁ / ‖**x**‖₁

where **x** is the exact solution of the system.

- Let *A* be a 10 × 10 matrix with cond∞(*A*) = 5 × 10⁶. Suppose that the solution of a system *A***x** = **b** is computed in 15-digit decimal arithmetic and the relative residual, ‖**r**‖∞/‖**b**‖∞, turns out to be approximately

twice the machine epsilon. How many digits of accuracy

would you expect to have in your computed

solution? Explain.

- Let **x** = (1, 2, −2)ᵀ.

**(a) **Find a Householder matrix *H *such that *H***x **is a

vector of the form (*r*, 0, 0)*T *.

**(b) **Find a Givens transformation *G *such that *G***x **is

a vector of the form (1, *s*, 0)*T *.

- Let *Q* be an *n* × *n* orthogonal matrix and let *R* be

an *n *× *n *upper triangular matrix. If *A *= *QR *and

*B *= *RQ*, how are the eigenvalues and eigenvectors

of *A *and *B *related? Explain.

- Let

*A* = [1 2; 4 3]

Estimate the largest eigenvalue of *A *and a corresponding

eigenvector by doing five iterations of the

power method. You may start with any nonzero

vector **u**0.
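A NumPy sketch of five power-method iterations (**u**₀ = (1, 1)ᵀ is one allowed choice of starting vector; using the Rayleigh quotient to read off the eigenvalue estimate is an added convention):

```python
import numpy as np

A = np.array([[1., 2.],
              [4., 3.]])                        # eigenvalues are 5 and -1
u = np.array([1., 1.])                          # u0: any nonzero start vector
for _ in range(5):
    u = A @ u
    u = u / np.abs(u).max()                     # rescale each iteration

lam = (A @ u) @ u / (u @ u)                     # Rayleigh quotient estimate
print("eigenvector estimate:", u)
print("eigenvalue estimate:", lam)
```

After five iterations the estimate is already close to the dominant eigenvalue 5, with eigenvector direction approaching (1, 2)ᵀ.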

- Let

*A* = [5 2 4; 5 2 4; 3 6 0; 3 6 0] and **b** = [5; 1; −1; 9]

The singular value decomposition of *A* is given by *A* = *U*Σ*V*ᵀ, where

*U* = [1/2 1/2 1/2 1/2; 1/2 1/2 −1/2 −1/2; 1/2 −1/2 −1/2 1/2; 1/2 −1/2 1/2 −1/2]

Σ = [12 0 0; 0 6 0; 0 0 0; 0 0 0]

*V*ᵀ = [2/3 2/3 1/3; 1/3 −2/3 2/3; 2/3 −1/3 −2/3]

Use the singular value decomposition to find the least squares solution of the system *A***x **= **b **that has the smallest

2-norm.

- Let

*A* = [1 5; 1 5; 1 6; 1 2], **b** = [2; 4; 5; 3]

**(a) **Use Householder matrices to transform *A *into

a 4 × 2 upper triangular matrix *R*.

**(b) **Apply the same Householder transformations

to **b**, and then compute the least squares solution

of the system *A***x **= **b**.
