Linear Algebra - Distance, Hyperplanes and Halfspaces, Eigenvalues, Eigenvectors (Continued 3)

NPTEL-NOC IITM
16 Aug 2019 · 24:13

Summary

TL;DR: This lecture concludes a series on linear algebra for data science, focusing on the relationships between eigenvectors and the fundamental subspaces. The instructor explains the significance of symmetric matrices, highlighting that they always have real eigenvalues and n linearly independent eigenvectors. These concepts are crucial in data science, particularly for covariance matrices and algorithms like principal component analysis (PCA). The lecture connects eigenvectors to the null space and column space, providing foundational knowledge for further study in regression analysis and machine learning.

Takeaways

  • 🧮 Symmetric matrices are frequently used in data science, especially in algorithms and covariance matrices.
  • 🔢 The eigenvalues of symmetric matrices are always real, and the corresponding eigenvectors are also real.
  • ♻️ For symmetric matrices, we are guaranteed to have n linearly independent eigenvectors, even if some eigenvalues are repeated (a short numerical sketch follows this list).
  • 🔗 Eigenvectors corresponding to zero eigenvalues are found in the null space of the matrix, while those corresponding to non-zero eigenvalues span the column space.
  • 🚫 If a matrix is full rank (none of its eigenvalues are zero), the null space contains only the zero vector.
  • 🧩 The eigenvectors of symmetric matrices that correspond to non-zero eigenvalues form a basis for the column space.
  • 📐 The connection between eigenvectors, null space, and column space is important for data science algorithms like principal component analysis (PCA).
  • 🔍 Eigenvectors of a symmetric matrix that correspond to non-zero eigenvalues are linear combinations of the matrix's columns.
  • 📊 Symmetric matrices of the form AᵀA or AAᵀ are frequently encountered in data science computations and always have non-negative eigenvalues.
  • 📚 The lecture series covers essential linear algebra concepts for data science, laying the foundation for further topics in regression analysis and machine learning.
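
These properties are easy to check numerically. Below is a minimal NumPy sketch (the lecture itself contains no code, and the matrix here is illustrative, not taken from the lecture) confirming the real eigenvalues, the n linearly independent eigenvectors, and the identity-matrix example with a repeated eigenvalue.

```python
import numpy as np

# An illustrative symmetric matrix (equal to its transpose).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A, A.T)

# eigh is specialized for symmetric matrices: it returns real eigenvalues
# and orthonormal (hence linearly independent) eigenvectors.
vals, vecs = np.linalg.eigh(A)
print(vals)                                # all real
print(np.linalg.matrix_rank(vecs))         # 3, i.e. n independent eigenvectors

# Repeated eigenvalues still give n independent eigenvectors for a
# symmetric matrix: the 3x3 identity has eigenvalue 1 repeated thrice,
# with independent eigenvectors (1,0,0), (0,1,0), (0,0,1).
vals_I, vecs_I = np.linalg.eigh(np.eye(3))
print(vals_I)                              # [1. 1. 1.]
print(np.linalg.matrix_rank(vecs_I))       # 3
```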

Q & A

  • What happens when a matrix is symmetric?

    -When a matrix is symmetric, its eigenvalues are always real, and it guarantees that there are n linearly independent eigenvectors, even if eigenvalues are repeated.

  • Why are symmetric matrices important in data science?

    -Symmetric matrices are important in data science because they frequently occur in computations, such as the covariance matrix, and they have useful properties like real eigenvalues and guaranteed linearly independent eigenvectors.

  • What is the significance of eigenvalues being real for symmetric matrices?

    -For symmetric matrices, real eigenvalues imply that the corresponding eigenvectors are also real, making the matrix easier to work with in practical applications like data science and machine learning.

  • How are eigenvectors related to the null space when the eigenvalue is zero?

    -Eigenvectors corresponding to a zero eigenvalue satisfy Av = 0, so they lie in the null space of the matrix; conversely, an eigenvector with a non-zero eigenvalue cannot lie in the null space.

  • What is the connection between eigenvectors and the column space for symmetric matrices?

    -For symmetric matrices, the eigenvectors corresponding to nonzero eigenvalues form a basis for the column space. This means that the column space can be described using these eigenvectors.

  • What role do repeated eigenvalues play in the context of eigenvectors?

    -When eigenvalues are repeated, there may be fewer linearly independent eigenvectors for a general matrix. However, for symmetric matrices, even with repeated eigenvalues, there will still be n linearly independent eigenvectors.

  • How do AᵀA and AAᵀ matrices relate to symmetric matrices in data science?

    -Both AᵀA and AAᵀ are symmetric matrices that frequently occur in data science computations, such as covariance matrices. Their symmetry guarantees real eigenvalues and n linearly independent eigenvectors, and their special structure further guarantees that the eigenvalues are non-negative.

  • What does it mean if a matrix has no eigenvalues equal to zero?

    -If a matrix has no eigenvalues equal to zero, it is full rank, meaning its null space contains only the zero vector; no eigenvector can then satisfy Av = 0.

  • How are eigenvectors computed for a symmetric matrix with repeated eigenvalues?

    -For a symmetric matrix with repeated eigenvalues, the eigenvectors can still be computed to be linearly independent, ensuring that the matrix has the full set of n independent eigenvectors.

  • What is the importance of the relationship between eigenvalues, null space, and column space in linear algebra?

    -The relationship between eigenvalues, null space, and column space is critical in linear algebra because it helps define the structure of a matrix. Eigenvectors corresponding to zero eigenvalues belong to the null space, while eigenvectors corresponding to nonzero eigenvalues define the column space. These concepts are foundational in data science and machine learning algorithms like PCA.
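
As a concrete check of this relationship, here is a small NumPy sketch (illustrative matrix, not from the lecture): for a symmetric matrix with one zero eigenvalue, the eigenvector for the zero eigenvalue satisfies Av = 0, rank plus nullity equals n, and the eigenvectors for the non-zero eigenvalues reproduce the column space.

```python
import numpy as np

# Illustrative symmetric 3x3 matrix with exactly one zero eigenvalue.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
print(vals)                              # approximately [0, 2, 3]

zero = np.isclose(vals, 0.0)

# Eigenvectors for zero eigenvalues lie in the null space: A v = 0.
for v in vecs[:, zero].T:
    assert np.allclose(A @ v, 0.0)

# Rank-nullity: rank + nullity = number of columns (n).
n = A.shape[1]
nullity = int(zero.sum())
assert np.linalg.matrix_rank(A) == n - nullity

# Eigenvectors for the non-zero eigenvalues span the column space:
# appending them to the columns of A does not increase the rank.
nonzero_vecs = vecs[:, ~zero]
assert np.linalg.matrix_rank(np.hstack([A, nonzero_vecs])) == n - nullity
```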

Outlines

00:00

📊 The Role of Eigenvalues and Eigenvectors in Data Science

This section discusses the importance of eigenvalues and eigenvectors in data science, specifically how they relate to linear algebra concepts. The eigenvalue condition det(A - λI) = 0 is explained, emphasizing that it is a polynomial equation whose roots, the eigenvalues, can be real or complex for a general matrix. However, for symmetric matrices, eigenvalues and their corresponding eigenvectors are always real. Symmetric matrices, like the covariance matrix, play a crucial role in data science. The section also notes that a matrix with distinct eigenvalues is guaranteed to have linearly independent eigenvectors, but this is not always true when eigenvalues are repeated, unless the matrix is symmetric.
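
The degree-n polynomial behind det(A - λI) = 0 can be inspected directly. The sketch below (illustrative matrices, not from the lecture) builds the characteristic polynomial with NumPy and compares its roots with the eigensolver's output, showing complex roots for a general matrix and real roots for a symmetric one.

```python
import numpy as np

# A non-symmetric matrix whose characteristic polynomial has complex roots.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

coeffs = np.poly(A)          # coefficients of det(lambda*I - A); here lambda^2 + 1
print(np.roots(coeffs))      # complex eigenvalues +1j and -1j

# A symmetric matrix, by contrast, always has real roots.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.roots(np.poly(S)))  # real roots 3 and 1
print(np.linalg.eigvalsh(S)) # same eigenvalues (possibly in a different order)
```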

05:01

🔄 Symmetric Matrices and their Special Properties

This paragraph delves deeper into the properties of symmetric matrices, which always have real eigenvalues; symmetric matrices of the form AᵀA or AAᵀ, which appear frequently in data science computations, additionally have non-negative eigenvalues. Because these matrices are symmetric, they are guaranteed to have n linearly independent eigenvectors. The relationship between these eigenvectors and the matrix's column and null spaces is introduced, providing a foundation for understanding the role of eigenvalues in determining matrix rank and subspace structure.
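
A quick numerical illustration of this paragraph (an assumed random "data-like" matrix, not from the lecture): both AᵀA and AAᵀ are symmetric, and their eigenvalues are non-negative up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # rectangular data-like matrix

AtA = A.T @ A                     # 3 x 3
AAt = A @ A.T                     # 5 x 5

# Both products are symmetric ...
assert np.allclose(AtA, AtA.T)
assert np.allclose(AAt, AAt.T)

# ... and their eigenvalues are real and non-negative
# (up to tiny floating-point error).
print(np.linalg.eigvalsh(AtA))    # 3 non-negative values
print(np.linalg.eigvalsh(AAt))    # 5 values; the extra two are ~0
```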

10:05

🧮 Eigenvalues, Null Spaces, and Full-Rank Matrices

Here, the focus shifts to the relationship between eigenvectors and the null space of a matrix. If an eigenvalue is zero, its corresponding eigenvector lies in the null space. Conversely, eigenvectors with non-zero eigenvalues cannot belong to the null space. This section explains that a matrix with no zero eigenvalues is full rank, so only the zero vector lies in its null space. The connection between eigenvalues and the full rank of a matrix is a key point in this analysis.
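
This connection suggests a simple full-rank check for symmetric matrices: count the (near-)zero eigenvalues. The helper below is a sketch under that assumption, with illustrative matrices and a hypothetical tolerance.

```python
import numpy as np

def is_full_rank_symmetric(A, tol=1e-10):
    """Return True if the symmetric matrix A has no (near-)zero eigenvalue."""
    vals = np.linalg.eigvalsh(A)
    return not np.any(np.abs(vals) < tol)

# Full rank: no zero eigenvalue, so only the zero vector solves Ax = 0.
A_full = np.array([[2.0, 1.0],
                   [1.0, 2.0]])
print(is_full_rank_symmetric(A_full), np.linalg.matrix_rank(A_full))          # True 2

# Rank deficient: one zero eigenvalue, so a non-trivial x solves Ax = 0.
A_singular = np.array([[1.0, 1.0],
                       [1.0, 1.0]])
print(is_full_rank_symmetric(A_singular), np.linalg.matrix_rank(A_singular))  # False 1
```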

15:07

📐 Column Space and Eigenvectors of Symmetric Matrices

This paragraph elaborates on the relationship between eigenvectors and the column space of a symmetric matrix. If the matrix has r zero eigenvalues, the corresponding eigenvectors span the null space, while the remaining n - r eigenvectors span the column space. Using the rank-nullity theorem, the lecture explains that the rank of the matrix is n - r, meaning there are n - r independent column vectors. The eigenvectors corresponding to non-zero eigenvalues therefore form a basis for the matrix's column space.

20:08

🔗 Linear Combinations of Eigenvectors and Column Space

In this section, the lecture shows that eigenvectors corresponding to non-zero eigenvalues are linear combinations of the matrix's columns. It follows that these eigenvectors form a basis for the column space, and each such eigenvector can be expressed as a weighted combination of the matrix's columns. The paragraph demonstrates how these eigenvectors span the column space and reduce the complexity of the matrix representation by focusing on the independent vectors.
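
The weighted-combination claim can be verified directly from Av = λv: dividing by the non-zero λ gives v = (v₁/λ)a₁ + ... + (vₙ/λ)aₙ. The sketch below (illustrative matrix, not the lecture's) rebuilds an eigenvector from the columns of A this way.

```python
import numpy as np

# Illustrative symmetric matrix (not the lecture's example).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

vals, vecs = np.linalg.eigh(A)

# Pick an eigenvector with a non-zero eigenvalue and rebuild it as a
# weighted combination of the columns of A:
#   A v = lambda v   =>   v = (v_1/lambda) a_1 + ... + (v_n/lambda) a_n
lam = vals[-1]                     # largest (non-zero) eigenvalue
v = vecs[:, -1]
rebuilt = sum((v[i] / lam) * A[:, i] for i in range(A.shape[1]))
assert np.allclose(rebuilt, v)
```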

📝 Example of Eigenvalues and Eigenvectors in Action

An example of a 3x3 symmetric matrix is presented to illustrate the theoretical concepts discussed earlier. The example matrix has eigenvalues of 0, 1, and 2, and the corresponding eigenvectors are calculated. The eigenvector corresponding to the zero eigenvalue lies in the null space, while the others span the column space. The lecture verifies these relationships using matrix multiplication, showing that the zero-eigenvalue eigenvector encodes a relationship between the matrix's variables and that the other eigenvectors span the column space.
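
The exact matrix from the lecture is not reproduced in this summary, so the sketch below uses a stand-in symmetric matrix chosen to have the same eigenvalues 0, 1 and 2; it verifies the same two claims, that the zero-eigenvalue eigenvector is in the null space and that the other two eigenvectors span the column space.

```python
import numpy as np

# Stand-in symmetric matrix with eigenvalues 0, 1 and 2
# (the lecture's own 3x3 matrix is not reproduced in this summary).
A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 2.0]])

vals, vecs = np.linalg.eigh(A)
print(np.round(vals, 6))           # approximately [0, 1, 2]

v1 = vecs[:, 0]                    # eigenvector for lambda = 0
print(np.round(A @ v1, 6))         # approximately [0, 0, 0] -> in the null space

# The other two eigenvectors form a basis for the column space:
# every column of A is a linear combination of them.
B = vecs[:, 1:]                    # eigenvectors for lambda = 1 and lambda = 2
coeffs, *_ = np.linalg.lstsq(B, A, rcond=None)
assert np.allclose(B @ coeffs, A)  # columns of A reproduced exactly
```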

📚 Conclusion: Key Takeaways on Symmetric Matrices

The final section summarizes the lecture series on linear algebra for data science. It revisits the critical points, such as the fact that symmetric matrices always have real eigenvalues and n linearly independent eigenvectors. The connection between eigenvectors and the null space and column space of a matrix is reiterated, particularly in symmetric matrices. The importance of these concepts in algorithms like principal component analysis (PCA) is highlighted, as well as their broader applications in data science. The lecture concludes by mentioning upcoming modules on statistics and machine learning.

Keywords

💡Eigenvalue

An eigenvalue is a scalar that arises in the context of the eigenvalue equation, which is central to linear algebra. In the video, the eigenvalue is part of the equation A*x = λ*x, where A is a matrix, λ is the eigenvalue, and x is the corresponding eigenvector. Eigenvalues can be real or complex, and the script emphasizes their importance in data science, especially in relation to symmetric matrices where eigenvalues are always real.

💡Eigenvector

An eigenvector is a non-zero vector that only changes by a scalar factor when a linear transformation is applied to it. In the context of the video, eigenvectors are paired with eigenvalues and are key in determining the behavior of matrices in linear algebra. The video explains that for symmetric matrices, eigenvectors are always real and linearly independent, and they play a significant role in defining the column space and null space of matrices.

💡Symmetric Matrix

A symmetric matrix is a square matrix that is equal to its transpose, meaning that the elements are mirrored along the diagonal. In the video, symmetric matrices are highlighted for their special properties, such as always having real eigenvalues and guaranteeing linearly independent eigenvectors. The covariance matrix, commonly used in data science, is an example of a symmetric matrix.

💡Null Space

The null space of a matrix is the set of all vectors that, when multiplied by the matrix, result in the zero vector. The video discusses the relationship between the null space and eigenvectors, particularly how eigenvectors corresponding to a zero eigenvalue are part of the null space. This concept is important for understanding the structure of matrices, especially in relation to their rank.

💡Column Space

The column space of a matrix is the set of all possible linear combinations of its column vectors. In the video, the connection between eigenvectors and the column space is explored, showing that for symmetric matrices, the eigenvectors corresponding to nonzero eigenvalues form a basis for the column space. This is a crucial concept in data science for understanding matrix transformations.

💡Rank

The rank of a matrix is the dimension of its column space, representing the number of linearly independent columns. In the video, rank is discussed in the context of full-rank matrices, where the null space only contains the zero vector. The rank-nullity theorem, which links the rank and null space of a matrix, is also mentioned as a key result in linear algebra.

💡Covariance Matrix

A covariance matrix is a symmetric matrix that represents the covariance between different variables in a dataset. In data science, it is often used in algorithms like Principal Component Analysis (PCA). The video references the covariance matrix as an example of a symmetric matrix with real eigenvalues, highlighting its importance in data analysis.
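
A short sketch of this point (with assumed random data, not from the lecture): a sample covariance matrix computed with NumPy is symmetric, and its eigenvalues are real and non-negative.

```python
import numpy as np

# Illustrative dataset: 100 samples of 3 variables.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))

# np.cov treats rows as variables by default, so pass rowvar=False
# for samples-in-rows data.
C = np.cov(X, rowvar=False)

assert np.allclose(C, C.T)        # the covariance matrix is symmetric
print(np.linalg.eigvalsh(C))      # real, non-negative eigenvalues
```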

💡Full Rank Matrix

A full-rank matrix is one where the rank is equal to the number of its columns, implying that all columns are linearly independent. In the video, it is mentioned that when a matrix is full-rank, its null space only contains the zero vector. This property is important in ensuring that systems of linear equations have unique solutions.

💡Principal Component Analysis (PCA)

Principal Component Analysis is a dimensionality reduction technique used in data science that relies on the eigenvectors and eigenvalues of the covariance matrix to identify the directions of maximum variance in the data. The video touches on PCA as an application of the concepts of eigenvalues and eigenvectors, showing how these linear algebra principles are foundational in data science algorithms.
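
To make the connection concrete, here is a minimal PCA sketch built only from the ideas in this lecture (center the data, eigendecompose the symmetric covariance matrix, keep the top-k eigenvectors); the data and the function name are illustrative, and a production implementation would normally use an established library routine.

```python
import numpy as np

def pca_sketch(X, k):
    """Project X (samples x features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center each variable
    C = np.cov(Xc, rowvar=False)               # symmetric covariance matrix
    vals, vecs = np.linalg.eigh(C)             # real eigenvalues, ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # directions of largest variance
    return Xc @ top                            # scores in the reduced space

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
print(pca_sketch(X, 2).shape)                  # (200, 2)
```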

💡Rank-Nullity Theorem

The rank-nullity theorem is a fundamental theorem in linear algebra that states that the rank of a matrix plus the dimension of its null space equals the number of its columns. In the video, this theorem is used to explain the relationship between the rank of a matrix and the number of independent eigenvectors corresponding to nonzero eigenvalues, particularly in symmetric matrices.

Highlights

Introduction to the connection between eigenvectors and fundamental subspaces in linear algebra for data science.

Eigenvalue-eigenvector condition: det(A - λI) = 0, a polynomial of degree n in λ whose roots, the eigenvalues, may be real or complex for a general matrix.

Symmetric matrices have special properties in data science, including real eigenvalues and guaranteed real eigenvectors.

Symmetric matrices are common in data science, e.g., covariance matrices, and ensure n linearly independent eigenvectors.

For symmetric matrices, repeated eigenvalues still guarantee independent eigenvectors, unlike general matrices.

The identity matrix serves as an example of a symmetric matrix with repeated eigenvalues but independent eigenvectors.

Matrices of the form AᵀA or AAᵀ, commonly encountered in data science, are always symmetric and have non-negative eigenvalues.

Eigenvectors corresponding to zero eigenvalues are in the null space of matrix A, establishing a key relationship.

When none of the eigenvalues are zero, the matrix is full rank, indicating no non-trivial solutions to Ax = 0.

For symmetric matrices, eigenvectors corresponding to non-zero eigenvalues span the column space of the matrix.

The rank-nullity theorem links the number of non-zero eigenvalues to the rank and nullity of the matrix.

Eigenvectors corresponding to non-zero eigenvalues are linear combinations of the columns of matrix A, and for symmetric matrices these eigenvectors form a basis for the column space.

In symmetric matrices, the connection between null space, column space, and eigenvectors forms the foundation of data science algorithms.

Practical example: A symmetric 3x3 matrix is used to compute eigenvalues and eigenvectors, verifying theoretical results.

Eigenvectors corresponding to zero eigenvalues can identify relationships between variables in data science problems.
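
A hedged sketch of this last point, using synthetic illustrative data (not from the lecture): when one variable is an exact linear combination of the others, the covariance matrix has a zero eigenvalue, and the corresponding eigenvector encodes the relationship.

```python
import numpy as np

# Synthetic data in which the third variable is an exact linear
# combination of the first two: x3 = x1 + x2.
rng = np.random.default_rng(3)
x1 = rng.standard_normal(50)
x2 = rng.standard_normal(50)
X = np.column_stack([x1, x2, x1 + x2])

C = np.cov(X, rowvar=False)       # symmetric and rank-deficient
vals, vecs = np.linalg.eigh(C)
print(np.round(vals, 8))          # smallest eigenvalue is ~0

# The eigenvector for the ~zero eigenvalue encodes the relationship:
# it is proportional to (1, 1, -1), i.e. x1 + x2 - x3 = 0.
v = vecs[:, 0]
print(np.round(v / v[0], 6))
```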

Transcripts

play00:02

[Music]

play00:14

This is the last lecture in the series of lectures on linear algebra for data science, and as I mentioned in the last class, today I am going to talk to you about the connections between eigenvectors and the fundamental subspaces that we described earlier. We saw in the last lecture that the eigenvalue-eigenvector equation hinges on the condition det(A - λI) = 0 being satisfied, and that this turns out to be a polynomial of degree n in λ. This basically means that even if the matrix A is real, the solutions to the polynomial equation could be either real or complex, so you could have eigenvalues that are complex. For a general matrix, then, the eigenvalues may be either real or complex, and notice that since we write the equation Ax = λx, whenever the eigenvalues become complex, the eigenvectors are also complex vectors. This is true in general. However, if the matrix is symmetric (symmetric matrices are those for which A = Aᵀ), then there are certain nice properties for these matrices which are very useful for us in data science. We also encounter symmetric matrices quite a bit in data science; for example, the covariance matrix turns out to be a symmetric matrix, and there are several other cases where we deal with symmetric matrices. So these properties of symmetric matrices are very useful for us when we look at algorithms in data science.

play02:13

The first property of symmetric matrices that is very useful to us is that if the matrix is symmetric, then the eigenvalues are always real. Irrespective of what the symmetric matrix is, this polynomial will give real solutions, and as I mentioned before, if the eigenvalues turn out to be real, then the eigenvectors are also real. There is another aspect of eigenvalues and eigenvectors that is important. If I have a matrix A with n different eigenvalues λ₁ to λₙ, all of them distinct, then I will definitely have n linearly independent eigenvectors corresponding to them, which could be ν₁, ν₂, all the way up to νₙ. However, certain eigenvalues might be repeated. For example, take a case where the eigenvalue λ₁ is repeated: the original polynomial has the eigenvalue λ₁ repeated twice, and then there is another polynomial of order n - 2 which will give you the n - 2 other solutions. In this case, when λ₁ is repeated like this, it could turn out that this eigenvalue either has two eigenvectors which are independent, or it might have just one eigenvector. So finding n linearly independent eigenvectors is not always guaranteed for a general matrix, and we already know that eigenvectors could be complex for a general matrix. However, when we talk about symmetric matrices, we can say for sure that the eigenvalues will be real and the eigenvectors will be real; further, we are always guaranteed to have n linearly independent eigenvectors for symmetric matrices, no matter how many times the eigenvalues are repeated. One classic example of a symmetric matrix where an eigenvalue is repeated many times is the identity matrix. The 3 by 3 identity matrix has the eigenvalue λ = 1 repeated thrice, but it has three independent eigenvectors: (1, 0, 0), (0, 1, 0) and (0, 0, 1). So this is a case where an eigenvalue is repeated thrice but there are still three independent eigenvectors, which is also an important result that we should keep in mind.

play05:06

As I mentioned in the last slide, symmetric matrices have a very important role in data science. In fact, symmetric matrices of the type AᵀA or AAᵀ are often encountered in data science computations, and notice that both of these matrices are symmetric. For example, if I take the transpose of AᵀA, this will be Aᵀ(Aᵀ)ᵀ, which is AᵀA, so the transpose of the matrix is the same as the matrix; you can verify that AAᵀ is also symmetric through the same idea. So we know matrices of the form AᵀA or AAᵀ are both symmetric, and they are often encountered when we do computations in data science. We know from the previous slide that the eigenvalues of symmetric matrices are real. If the symmetric matrix also takes the form AᵀA or AAᵀ, we can say that while the eigenvalues are real, they are also non-negative; that is, they will be either zero or positive, but none of the eigenvalues will be negative. This is another important idea that we will use in data science when we look at covariance matrices and so on. Also, the fact that AᵀA and AAᵀ are symmetric matrices guarantees that there will be n linearly independent eigenvectors for matrices of this form as well. So, because of the importance of symmetric matrices in data science computations, what we are going to do right now is look at the connection between the eigenvectors and the column space and null space for a symmetric matrix. Some of these results translate to non-symmetric matrices also, but for symmetric matrices all of these results can be used.

play07:03

So we go back to the eigenvalue-eigenvector equation Aν = λν, and the result that we are going to talk about right now is true whether the matrix A is symmetric or not. If Aν = λν, we ask the question: what happens when λ is zero, that is, when one of the eigenvalues becomes zero? When one of the eigenvalues becomes zero, we have the equation Aν = 0, so we can interpret ν as an eigenvector corresponding to eigenvalue zero. We have also seen this equation before, when we talked about the different subspaces for matrices: we saw in one of our initial lectures that null space vectors are of the form Aβ = 0. Notice that this and the equation above have the same form, which basically means that ν, an eigenvector corresponding to the eigenvalue λ = 0, is a null space vector. So we can say that the eigenvectors corresponding to zero eigenvalues are in the null space of the original matrix A. Conversely, if the eigenvalue corresponding to an eigenvector is not zero, then that eigenvector cannot be in the null space of A. These are important results that we need to know; this is how eigenvectors are connected to the null space. If none of the eigenvalues are zero, that basically means the matrix A is full rank, and that means I can never solve Aν = 0 and get a non-trivial ν. So if A is full rank, there is no eigenvector such that Aν = 0, which basically means that there are no non-zero vectors in the null space.

play09:40

Now let's see the connection between eigenvectors and the column space. In this case I am going to show you the result, and this result is valid for symmetric matrices. Let us assume that I have a symmetric matrix A; the symmetric matrix A, we know, will have n real eigenvalues. Let us assume that r of these eigenvalues are zero. This r could also be zero, which would mean there is no zero eigenvalue, and even then all of this discussion is valid, but as the general case let us assume that r eigenvalues are zero. Since we are assuming this matrix is n by n, there will be n real eigenvalues, of which r are zero, so there will be n - r non-zero eigenvalues. From the previous slide we know that the r eigenvectors corresponding to these r zero eigenvalues are all in the null space. Since I have r zero eigenvalues, I will have r eigenvectors corresponding to them, all of them in the null space, which basically means that the dimension of the null space is r, because there are r vectors in the null space. From the rank-nullity theorem we know that rank plus nullity is equal to the number of columns, in this case n. Since there are r eigenvectors in the null space, the nullity is r, so the rank of the matrix has to be equal to n - r. Further, we know that column rank is equal to row rank, and since the rank of the matrix is n - r, the column rank also has to be n - r. This basically means that there are n - r independent vectors among the columns of the matrix.

play12:09

One question that we might ask is the following: what could be a basis set for this column space, or what could be the n - r independent vectors that we can use to describe the column space? There are a few things we can notice based on what we have discussed till now. First, notice that the n - r eigenvectors that we talked about in the last slide, the ones that are not eigenvectors corresponding to λ = 0, cannot be in the null space, because their λ is a number different from zero. So these n - r eigenvectors cannot be in the null space of the matrix A. Let me say again that we are discussing all of this for symmetric matrices. We also know that all of these n - r eigenvectors are independent, because we said that irrespective of what the symmetric matrix is, we will always get n linearly independent eigenvectors; so these n - r eigenvectors are also independent. We also know that each of these independent eigenvectors is going to be a linear combination of the columns of A. To see this, let us look at the equation Aν = λν and expand ν into its components ν₁, ν₂, all the way up to νₙ; notice that these are the n components of one eigenvector ν. From the previous lecture on how to interpret column multiplication, you can think of Aν as ν₁ times the first column of A, plus ν₂ times the second column of A, all the way up to νₙ times the nth column of A, and this equals λν. In this equation, let me be very clear: the νᵢ are scalars, the components of the eigenvector ν; a₁, a₂, up to aₙ are column vectors, the first, second and nth columns of A; and λ is again a scalar, the eigenvalue corresponding to ν. This is true for any of the n - r eigenvectors which are not in the null space of the matrix A. Now take λ to the other side, and you have the equation ν = (ν₁/λ)a₁ + ... + (νₙ/λ)aₙ. Again, ν₁ is a scalar and λ is a scalar, so these are all constants that we are using to multiply these columns. You will now clearly see that each of these n - r eigenvectors is a linear combination of the columns of A. So there are n - r linearly independent eigenvectors like this, each of them a combination of the columns of A, and we also know that the dimension of the column space is n - r; in other words, if you take all of the columns a₁ to aₙ, they can be represented using just n - r linearly independent vectors. When we put all of these facts together (the n - r eigenvectors are linearly independent, they are combinations of columns of A, and the number of independent columns of A can only be n - r), this implies that the eigenvectors corresponding to the non-zero eigenvalues of a symmetric matrix form a basis for the column space. This is the important result that I wanted to show you with all of these ideas, and again, we will see and use these results as we look at some of the data science algorithms later.

play16:21

So let's take a simple example to understand how all of this works. Consider the 3 by 3 matrix shown on the slide. The first thing I want you to notice is that this is a symmetric matrix: if you take the transpose you get the same matrix. We said symmetric matrices will always have real eigenvalues, and when you do the eigenvalue computation for this matrix, you take the determinant of A - λI and set it equal to zero, you get a third-order polynomial, and you calculate the three solutions to this polynomial; these turn out to be 0, 1 and 2. You take each of these solutions, substitute it back, and solve Ax = λx; then you get the three eigenvectors corresponding to these eigenvalues, shown on the slide. Notice from our discussion before that the eigenvector corresponding to λ = 0 is going to be in the null space of this matrix A, and the other two are the remaining eigenvectors. How do I get this 2? It is n minus the nullity: this is a 3 by 3 matrix and the nullity is 1, because there is only one eigenvector corresponding to λ = 0, so I get two other linearly independent vectors. In the last slide, when we were discussing the connections, we claimed that these two eigenvectors will be in the column space; in other words, we are claiming that the three columns of this matrix can simply be written as linear combinations of these two eigenvectors. We are also sure that when we compute A times ν₁, this will go to zero. Let us verify all of this in the next slide.

play18:26

Let's first check A times ν₁. You can quite easily see that when you do this computation, you get (0, 0, 0), which basically shows that this is the eigenvector corresponding to the zero eigenvalue. Interestingly, in our initial lectures we talked about the null space, and we said a null space vector identifies a relationship between variables. Since this eigenvector is in the null space, eigenvectors corresponding to zero eigenvalues identify relationships between the variables, because these eigenvectors are in the null space of the matrix. So it is an interesting connection that we can make: the eigenvectors corresponding to a zero eigenvalue can be used to identify relationships among variables. Now let's do the last thing that we discussed and check whether the other two eigenvectors, corresponding to the other two eigenvalues, span the column space. What I have done here is take each of the columns of the matrix A: column 1 is a₁, column 2 is a₂, and column 3 is a₃. Column 1 is 6 times ν₂, column 2 is 8 times ν₂, and column 3 is 2 times ν₃. So we can say that a₁, a₂ and a₃ are linear combinations of ν₂ and ν₃, and therefore ν₂ and ν₃ form a basis for the column space of the matrix A.

play20:29

To summarize, we have Ax = λx, and we largely focused on symmetric matrices in this lecture. We saw that if we have symmetric matrices, they have real eigenvalues; we also saw that symmetric matrices have n linearly independent eigenvectors; we saw that the eigenvectors corresponding to zero eigenvalues span the null space of the matrix A; and eigenvectors corresponding to non-zero eigenvalues span the column space of A, for the symmetric matrices that we described in this lecture.

play21:06

With this we have described most of the important fundamental ideas from linear algebra that we will use quite a bit in the material that follows. The linear algebra portions will be used in regression analysis, which you will see as part of this course, and many of these ideas are also useful in algorithms that do classification; for example, we talked about halfspaces and so on, and the notion of eigenvalues and eigenvectors is used in almost every data science algorithm. Of particular note is one algorithm called principal component analysis, where these ideas of connections between null space, column space and so on are used quite heavily. So I hope that we have given you a reasonable understanding of some of the important concepts that you need to learn in order to understand the material that we are going to teach in this course. As I mentioned before, linear algebra is a vast topic; there are several further ideas, such as how these results translate to non-symmetric matrices, which of them are applicable and which are not, and how to develop some of the concepts from the previous lectures further, and more on these can be found in many good linear algebra books. However, our aim here has been to call out the most important concepts that we are going to use again and again in this first course on data science for engineers. More advanced topics in linear algebra will be covered when we teach the next course on machine learning, where those concepts will be more useful for the advanced machine learning techniques that we will teach.

play23:17

With this we close the series of lectures on linear algebra. The next set of lectures will be on the use of statistics in data science. I thank you, and I hope to see you back after you go through the module on statistics, which will be taught by my colleague. Thank you.

[Music]


Related Tags

Linear Algebra, Data Science, Eigenvectors, Eigenvalues, Symmetric Matrices, Machine Learning, Null Space, Column Space, Principal Components, Covariance Matrix