I am a computer science research student working on applications of Machine Learning to Computer Vision problems.
Since a lot of linear algebra (eigenvalues, SVD, etc.) comes up when reading Machine Learning/Vision literature, I decided to take a linear algebra course this semester.
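For concreteness, here is a minimal sketch (my own illustration, not from any particular paper) of the kind of linear algebra I mean: PCA via the SVD, which is about as common as it gets in vision papers.

```python
import numpy as np

# Toy data: 100 samples, 5 features (synthetic, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)                # center the data

# SVD of the centered data matrix: Xc = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; S**2 / (n - 1) are the
# eigenvalues of the sample covariance matrix.
explained_variance = S**2 / (Xc.shape[0] - 1)

# Project onto the top-2 principal components
X_proj = Xc @ Vt[:2].T
print(X_proj.shape)   # (100, 2)
```

Applied courses teach you to use this recipe; the theoretical course proves why the decomposition exists and is well-behaved.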
Much to my surprise, the course didn't look at all like Gilbert Strang's Applied Linear Algebra (on OCW), which I had started taking earlier. The course textbook is Linear Algebra by Hoffman and Kunze. We started with concepts from abstract algebra like groups, fields, rings, isomorphisms, quotient groups, etc., and then moved on to study "theoretical" linear algebra over finite fields, where we cover proofs of important theorems/lemmas on the following topics:
Vector spaces, linear span, linear independence, existence of basis. Linear transformations. Solutions of linear equations, row reduced echelon form, complete echelon form, rank. Minimal polynomial of a linear transformation. Jordan canonical form. Determinants. Characteristic polynomial, eigenvalues and eigenvectors. Inner product spaces. Gram-Schmidt orthogonalization. Unitary and Hermitian transformations. Diagonalization of Hermitian transformations.
I want to understand whether there is any significance/application of understanding these proofs in machine learning/computer vision research, or whether I would be better off focusing on applied linear algebra.