· Eigenvalue Decomposition [Lecture 6.3]
o Works for square matrices
o X = UΛU^{-1} is the eigenvalue decomposition, where the columns of U are the eigenvectors of X and Λ is the diagonal matrix of eigenvalues
o When the eigenvector matrix U is orthogonal (which happens when X is symmetric), U^T U = I, and hence X = UΛU^T is the EVD of X
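o A minimal NumPy check of these identities; the symmetric matrix X below is an arbitrary example:

```python
import numpy as np

# Arbitrary symmetric example matrix
X = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, U = np.linalg.eig(X)   # columns of U are the eigenvectors of X
Lam = np.diag(eigvals)          # Λ: diagonal matrix of eigenvalues

# General square case: X = U Λ U^{-1}
print(np.allclose(X, U @ Lam @ np.linalg.inv(U)))  # True

# Since X is symmetric, U is orthogonal: U^T U = I and X = U Λ U^T
print(np.allclose(U.T @ U, np.eye(2)))             # True
print(np.allclose(X, U @ Lam @ U.T))               # True
```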
· PCA [YouTube Mitesh Khapra Unit 6]
o Transforms the data into a new basis where the new axes have high variance and are uncorrelated
o How to do it? (a NumPy sketch follows these steps)
· Normalize the data to have zero mean and unit variance
· Compute X^T X and find its eigenvectors. Stacking the eigenvectors column-wise in a matrix P gives the transformation matrix to the new axes (X̂ = XP)
· Project the points onto only the top-k eigenvectors to get an approximation that minimizes the reconstruction error
· The k largest eigenvalues correspond to the axes with the highest variance in the data, so throwing away the remaining n−k eigenvectors loses little information
· Example: Eigenfaces, where face images (100×100 = 10k dimensions) are stored using far fewer dimensions (50-100)
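o A minimal NumPy sketch of the steps above on random toy data; the data shape and k = 2 are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # toy data: 200 points in 5 dimensions

# 1. Normalize to zero mean and unit variance
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Eigenvectors of X^T X (symmetric, so use eigh), sorted by
#    decreasing eigenvalue and stacked column-wise into P
eigvals, P = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
eigvals, P = eigvals[order], P[:, order]

# 3. Keep only the top-k eigenvectors and project: X_hat = X P_k
k = 2
X_hat = X @ P[:, :k]                 # data in the reduced basis

# Reconstruction from k components; the error shrinks as k grows
X_rec = X_hat @ P[:, :k].T
print("reconstruction error:", np.linalg.norm(X - X_rec))
```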
· SVD - Singular Value Decomposition [Lecture 6.8]
o Works for rectangular matrices, which are harder since they also transform a vector from R^n to R^m
o X = UΣV^T, where X ∈ R^{m×n}, U ∈ R^{m×m}, Σ ∈ R^{m×n}, and V ∈ R^{n×n}
o Left singular matrix: the columns of U are the eigenvectors of XX^T
o Right singular matrix: the columns of V are the eigenvectors of X^T X
o Σ is a diagonal matrix holding the k singular values of X
o Rank-k approximations of a matrix: σ_1 u_1 v_1^T is the rank-1 approximation of X, and σ_1 u_1 v_1^T + … + σ_k u_k v_k^T is the rank-k approximation, where the k largest singular values are taken (see the sketch after this list)
o Singular values are the non-negative square roots of the eigenvalues of X^T X (equivalently, of XX^T)
o When does SVD behave the same as EVD? When X is symmetric positive semi-definite: then X = UΛU^T is itself a valid SVD, with singular values equal to the eigenvalues
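o A minimal NumPy sketch of the decomposition, the eigenvalue relationship, and a rank-k approximation; the 6×4 random matrix and k = 2 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))          # rectangular: maps R^4 to R^6

U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.allclose(X, U @ np.diag(s) @ Vt))   # X = U Σ V^T

# Singular values are the square roots of the eigenvalues of X^T X
eigvals = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
print(np.allclose(s, np.sqrt(eigvals)))      # True

# Rank-k approximation from the k largest singular values
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print("rank-2 approximation error:", np.linalg.norm(X - X_k))
```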
· Pseudo-Inverse
o For rectangular matrices, where an ordinary inverse does not exist, the pseudo-inverse helps
o Using SVD, the pseudo-inverse of a matrix can be found easily: X^+ = V Σ^+ U^T, where Σ^+ inverts the non-zero singular values; see the link below
o https://inst.eecs.berkeley.edu/~ee127/sp21/livebook/def_pseudo_inv.html