To record your MATLAB sessions, run the diary command in
MATLAB. For details, type >> help diary at your MATLAB
prompt. Once you have saved the diary file, you can edit it to remove
unnecessary commands (such as help diary, and the output of any help
commands) to reduce the file size.
Please submit the edited diary file via email to our TA, Wei Yu yuwei@math.ucdavis.edu.
[1] Reduced Row Echelon Forms, Nullspaces, Column Spaces, and Solutions of Ax=b:
[1.1] To kick off the project, do Exercise 2.3.1(f) using the MATLAB
function "rref". Also familiarize yourself with "rrefmovie" by running
it. Note that you should form the augmented matrix by putting the
coefficient matrix A and the right-hand side vector b together.
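As a sketch of the workflow (the matrix and vector below are placeholders, not the data of Exercise 2.3.1(f)):

```matlab
A = [1 2 3; 2 4 6; 1 1 1];   % placeholder coefficient matrix
b = [1; 2; 0];               % placeholder right-hand side
Aug = [A b];                 % augmented matrix [A b]
rref(Aug)                    % reduced row echelon form
% rrefmovie(Aug) replays the row operations one step at a time
```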
[1.2] Determine bases of the column space and nullspace of the matrix
in Exercise 2.3.1(f) using "rref".
Double-check whether your answer is correct using "null".
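A minimal sketch of this comparison (the matrix is a placeholder; substitute the one from Exercise 2.3.1(f)):

```matlab
A = [1 2 3; 2 4 6; 1 1 1];   % placeholder matrix
[R, pivots] = rref(A);       % pivots lists the pivot column indices
colBasis = A(:, pivots)      % the pivot columns of A form a column-space basis
N = null(A)                  % orthonormal nullspace basis, for comparison
% null(A, 'r') returns a "rational" basis read off from rref,
% which is easier to compare with your hand computation
```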
[1.3] Compute the rank of the coefficient matrix A in Exercise
2.3.1(f).
[1.4] Do Exercise 3.10.1 using "lu".
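A minimal sketch of using "lu" (the matrix is a placeholder; substitute the one from Exercise 3.10.1):

```matlab
A = [4 3; 6 3];              % placeholder matrix
[L, U, P] = lu(A);           % P*A = L*U with partial pivoting
norm(P*A - L*U)              % should be (near) zero
```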
[2] Least Squares Methods:
[2.1] Do Exercises 4.6.1, 4.6.5, 4.6.7. You can use the normal
equation to solve these problems.
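The normal-equation approach can be sketched as follows (A and b are placeholders, not the data of these exercises):

```matlab
A = [1 1; 1 2; 1 3];         % placeholder design matrix
b = [1; 2; 2];               % placeholder data vector
x = (A'*A) \ (A'*b)          % solve the normal equations A'*A*x = A'*b
% for comparison, MATLAB's backslash solves least squares directly:
% x = A \ b
```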
[3] Inner Products and Norms:
[3.1] Do Exercise 5.1.1. Compute the norms explicitly using
"sum" and elementwise powers of the entries, and compare the results
with the "norm" function. For example, let x be in R^n. Then the
Euclidean norm of x is sqrt(sum(x.^2)), which is an explicit
computation, whereas norm(x) is the convenient one
supplied by MATLAB. For p-norms, please use help norm.
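For instance, with a placeholder vector:

```matlab
x = [3; -4; 12];             % placeholder vector in R^3
sqrt(sum(x.^2))              % explicit Euclidean (2-)norm: 13
norm(x)                      % built-in, should agree
sum(abs(x))                  % explicit 1-norm: 19
norm(x, 1)
max(abs(x))                  % explicit infinity-norm: 12
norm(x, inf)
```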
[3.2] Do Exercise 5.2.1. Again use both explicit computation and the
"norm" function, and verify the results.
[3.3] Do Exercise 5.4.4 and 5.4.5.
As in Project #1, record your MATLAB session with the diary command
(see >> help diary) and edit the diary file to remove unnecessary
commands (such as help diary, and the output of any help commands) to
reduce the file size.
Please submit the edited diary file, named your_name_proj2.txt,
together with all the M-files (*.m files) of the MATLAB functions you
created, as attachments of your email to our TA, Wei Yu yuwei@math.ucdavis.edu.
[1] QR Factorization and Gram-Schmidt Procedures:
[1.1] Implement the QR factorization using the classical Gram-Schmidt
procedure as a MATLAB function clgs, and store it as an M-file, "clgs.m",
in your directory. Exercise 5.5.10 is handy for this purpose. Also,
please read Section 12 of the MATLAB
Primer or pages 4-5 (the section called Creating M-files) of the MATLAB
Tutorial at MIT. Alternatively, you can learn what a MATLAB function
should look like by typing, for example,
>> type rank
This gives you an idea of how to write a function in MATLAB.
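A minimal sketch of what "clgs.m" might look like (a straightforward classical Gram-Schmidt implementation; your version based on Exercise 5.5.10 may differ):

```matlab
function [Q, R] = clgs(A)
% CLGS  QR factorization by the classical Gram-Schmidt procedure.
%   [Q, R] = clgs(A) returns Q with orthonormal columns and upper
%   triangular R with A = Q*R (A assumed to have full column rank).
[m, n] = size(A);
Q = zeros(m, n);
R = zeros(n, n);
for j = 1:n
    v = A(:, j);
    for i = 1:j-1
        R(i, j) = Q(:, i)' * A(:, j);   % project against the ORIGINAL column
        v = v - R(i, j) * Q(:, i);
    end
    R(j, j) = norm(v);
    Q(:, j) = v / R(j, j);
end
end
```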
[1.2] Implement the QR factorization using the modified Gram-Schmidt
procedure as a MATLAB function and store it as an M-file, "mgs.m", in your
directory. Exercise 5.5.10 is handy for this purpose too.
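A corresponding sketch for "mgs.m" (again, your own version may differ):

```matlab
function [Q, R] = mgs(A)
% MGS  QR factorization by the modified Gram-Schmidt procedure.
%   [Q, R] = mgs(A) returns Q with orthonormal columns and upper
%   triangular R with A = Q*R (A assumed to have full column rank).
[m, n] = size(A);
Q = A;                   % work on a copy; columns are orthogonalized in place
R = zeros(n, n);
for j = 1:n
    R(j, j) = norm(Q(:, j));
    Q(:, j) = Q(:, j) / R(j, j);
    for k = j+1:n
        R(j, k) = Q(:, j)' * Q(:, k);   % project against the UPDATED columns
        Q(:, k) = Q(:, k) - R(j, k) * Q(:, j);
    end
end
end
```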
[1.3] Type "format long" first to see more digits in your subsequent
computational results. Then compute the QR factorization of the 3x3
matrix of Exercise 5.5.9 using clgs and mgs, respectively. Examine the
difference between these two methods in the following two ways: 1)
Qualitative comparison: just type Q'*Q to see how close it
is to the 3x3 identity matrix; and 2) Quantitative comparison:
compute the "distance" between Q'*Q and the identity matrix I in the
matrix norm for both clgs and mgs.
This can be done by typing
>> norm(Q'*Q-I)
where I is the 3x3 identity matrix, which can be conveniently
obtained by "eye(3)" in MATLAB rather than typed by hand. Repeat the
same by replacing 10^(-3) by 10^(-7) in this matrix, and see how much
larger the errors become with clgs compared to mgs.
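The whole comparison can be sketched as follows, assuming the clgs and mgs functions from [1.1] and [1.2] are on your path (the matrix here is only a placeholder; use the actual matrix of Exercise 5.5.9):

```matlab
format long
e = 1e-3;                    % repeat the experiment with e = 1e-7
A = [1 1 1; e 0 0; 0 e 0];   % placeholder -- substitute the matrix of Exercise 5.5.9
I = eye(3);
[Qc, Rc] = clgs(A);
[Qm, Rm] = mgs(A);
norm(Qc'*Qc - I)             % loss of orthogonality for classical Gram-Schmidt
norm(Qm'*Qm - I)             % typically much smaller for modified Gram-Schmidt
```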
[2] Complementary Subspaces and Projections:
[2.1] Do Exercise 5.9.1.
[3] Singular Value Decomposition:
[3.1] Load the image called mandrill.mat via
>> load mandrill;
This loads a matrix X containing the face of a mandrill, and map, which
is its colormap. If you cannot load this data in your MATLAB, then
download the data from this link and run the load command again.
Display this matrix on your screen by:
>> image(X); colormap(map)
[3.2] Compute the SVD of this mandrill image and plot the distribution
of its singular values on your screen.
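One way to do this (semilogy rather than plot is a choice here, since the singular values decay over several orders of magnitude):

```matlab
load mandrill                % provides X and its colormap map
s = svd(X);                  % singular values only, in decreasing order
semilogy(s, '.')
xlabel('j'), ylabel('singular value \sigma_j')
```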
[3.3] Let sigma_j, u_j, v_j be the j-th singular value and the
corresponding left and right singular vectors of the mandrill image,
respectively. Define the rank-k approximation of the image X as
X_k := sigma_1 u_1 v_1' + ... + sigma_k u_k v_k',
where v_j' is the transpose of v_j.
Then, for k = 1, 6, 11, 31, compute X_k of the mandrill and display
the results. Fit these four images in one page by using the subplot
function.
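A sketch of computing and displaying the four approximations (truncating the full SVD as defined above):

```matlab
load mandrill                % provides X and map
[U, S, V] = svd(X);
ks = [1 6 11 31];
for i = 1:4
    k = ks(i);
    Xk = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % rank-k approximation
    subplot(2, 2, i)
    image(Xk), colormap(map), axis off
    title(sprintf('k = %d', k))
end
```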
[3.4] For k = 1, 6, 11, 31, display the residuals, i.e., X - X_k.
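The residuals follow the same pattern:

```matlab
load mandrill                % provides X and map
[U, S, V] = svd(X);
ks = [1 6 11 31];
for i = 1:4
    k = ks(i);
    Xk = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';
    subplot(2, 2, i)
    image(X - Xk), colormap(map), axis off    % residual of the rank-k approximation
    title(sprintf('residual, k = %d', k))
end
```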
[4] Comparison of Least Squares Solutions via Normal Equations, CLGS, MGS, SVD:
[4.1] Solve Exercise 4.6.5 by solving the Normal Equation (you already
did this in Project #1 [2.1], but please repeat it here).
[4.2] Solve Exercise 4.6.5 using clgs.
[4.3] Solve Exercise 4.6.5 using mgs.
[4.4] Solve Exercise 4.6.5 using svd.
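The four solution routes can be sketched as follows (y below is a placeholder; substitute the data vector of Exercise 4.6.5, and clgs/mgs refer to your functions from [1.1] and [1.2]):

```matlab
t = (1:8)';                  % from [4.5], the abscissae are t = 1:8
y = randn(8, 1);             % placeholder: use the data of Exercise 4.6.5
A = [ones(8,1) t];           % model y ~ a0 + a1*t, unknowns c = [a0; a1]

c_ne = (A'*A) \ (A'*y);      % [4.1] normal equations
[Qc, Rc] = clgs(A);  c_cgs = Rc \ (Qc'*y);   % [4.2] classical Gram-Schmidt QR
[Qm, Rm] = mgs(A);   c_mgs = Rm \ (Qm'*y);   % [4.3] modified Gram-Schmidt QR
[U, S, V] = svd(A, 0);       % [4.4] thin (economy-size) SVD
c_svd = V * (S \ (U'*y));
```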
[4.5] Compare these four results by computing the error between the
original data vector y and the least squares estimate a0+a1*t, where
t=1:8, in the Euclidean norm, i.e.,
>> norm(y-(a0+a1*t))
which is the same as
>> sqrt(sum((y-(a0+a1*t)).^2))
Which method gave you the smallest error?