Is a matrix multiplied with its transpose something special?

Concepcion Hale 2021-12-15 Answered
In my math lectures, we talked about the Gram determinant, where a matrix is multiplied by its transpose. Is $AA^T$ something special for any matrix $A$?
Expert Answer

reinosodairyshm
Answered 2021-12-16
The main thing is presumably that $AA^T$ is symmetric. Indeed, $(AA^T)^T = (A^T)^T A^T = AA^T$. For symmetric matrices one has the Spectral Theorem, which says that there is a basis of eigenvectors and every eigenvalue is real.
Moreover, if $A$ is invertible, then $AA^T$ is also positive definite, since for every $x \neq 0$
$x^T A A^T x = (A^T x)^T (A^T x) > 0.$
Then we have: a matrix is positive definite if and only if it can be written as $AA^T$ for some invertible matrix $A$.
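As an illustrative check (not part of the original answer), the symmetry and positive-definiteness claims can be verified numerically with NumPy; here a random square matrix stands in for $A$, on the assumption that such a matrix is almost surely invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # random square matrix, almost surely invertible

G = A @ A.T

# (AA^T)^T = AA^T: the product is symmetric
assert np.allclose(G, G.T)

# For invertible A, AA^T is positive definite: every eigenvalue is > 0
eigvals = np.linalg.eigvalsh(G)
assert np.all(eigvals > 0)
```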


Jack Maxson
Answered 2021-12-17

$AA^T$ is positive semi-definite, and in the case in which $A$ is a column matrix, $AA^T$ will be a rank-1 matrix with only one non-zero eigenvalue, which equals $A^T A$; its corresponding eigenvector is $A$ itself. The remaining eigenvectors span the null space of $A^T$, i.e. they satisfy $A^T v = 0$.
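A minimal numerical sketch of this column-matrix case (the specific vector is my example, not from the answer):

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])  # a 3x1 column matrix
G = a @ a.T                          # 3x3 outer product, rank 1

# aa^T has rank 1
assert np.linalg.matrix_rank(G) == 1

# its single non-zero eigenvalue is a^T a = 1 + 4 + 9 = 14,
# with a itself as the corresponding eigenvector
lam = (a.T @ a).item()
assert np.allclose(G @ a, lam * a)
```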
Indeed, independent of the size of $A$, there is a useful relation between the eigenvectors of $AA^T$ and the eigenvectors of $A^T A$, based on the property that $\mathrm{rank}(AA^T) = \mathrm{rank}(A^T A)$. That the ranks are identical implies that the number of non-zero eigenvalues is identical; moreover, we can infer the eigenvectors of $A^T A$ from those of $AA^T$ and vice versa. The eigenvector decomposition of $AA^T$ is given by $AA^T v_i = \lambda_i v_i$. In case $A$ is not a square matrix and $AA^T$ is too large to eigendecompose efficiently (as frequently occurs in covariance-matrix computation), it is easier to compute the eigenvectors of $A^T A$, given by $A^T A u_i = \lambda_i u_i$. Pre-multiplying both sides of this equation by $A$ yields
$AA^T (A u_i) = \lambda_i (A u_i).$
Now the originally sought eigenvectors $v_i$ of $AA^T$ can easily be obtained as $v_i = A u_i$. Note that the resulting eigenvectors are not yet normalized.
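The trick above can be sketched in NumPy; the tall random matrix here is my stand-in for a data matrix whose $AA^T$ would be expensive to decompose directly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5))  # tall: AA^T is 500x500, but A^T A is only 5x5

# Eigendecompose the small matrix: A^T A u_i = lam_i u_i
lams, U = np.linalg.eigh(A.T @ A)

# Map each u_i up: v_i = A u_i is an eigenvector of AA^T with the same eigenvalue
V = A @ U
V /= np.linalg.norm(V, axis=0)     # normalize, as the answer notes is still needed

# Check: AA^T v_i = lam_i v_i for every i
G = A @ A.T
assert np.allclose(G @ V, V * lams)
```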


