# Is a matrix multiplied with its transpose something special?

In my math lectures, we talked about the Gram determinant, where a matrix is multiplied by its transpose.
Is $AA^T$ something special for any matrix $A$?
reinosodairyshm
The main thing is presumably that $AA^T$ is symmetric. Indeed, $(AA^T)^T = (A^T)^T A^T = AA^T$. For symmetric matrices we have the Spectral Theorem, which says that there is an orthonormal basis of eigenvectors and every eigenvalue is real.
Moreover, if $A$ is invertible, then $AA^T$ is also positive definite, since for every $x \neq 0$
$$x^T AA^T x = (A^T x)^T (A^T x) = \|A^T x\|^2 > 0.$$
Then we have: a symmetric matrix is positive definite if and only if all its eigenvalues are positive.
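These two facts can be checked numerically. A minimal NumPy sketch (the matrix size and seed are illustrative, not from the original answer):

```python
import numpy as np

# A generic random square matrix is almost surely invertible.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
G = A @ A.T

# Symmetry: (A A^T)^T = A A^T
assert np.allclose(G, G.T)

# eigvalsh exploits symmetry and returns real eigenvalues; for invertible A
# they are strictly positive, matching positive definiteness.
eigvals = np.linalg.eigvalsh(G)
assert np.all(eigvals > 0)
```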

Jack Maxson

$AA^T$ is positive semi-definite, and in the case in which $A$ is a column matrix, $AA^T$ is a rank-1 matrix with only one non-zero eigenvalue, which equals $A^T A$, and its corresponding eigenvector is $A$ itself. The remaining eigenvectors span the null space of $A^T$, i.e. they satisfy $A^T x = 0$.
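The rank-1 case is easy to see concretely. A small sketch with a hypothetical column vector (the particular entries are just an example):

```python
import numpy as np

# Take A to be a single column (a 3x1 matrix).
a = np.array([[1.0], [2.0], [3.0]])
G = a @ a.T                       # 3x3 outer product, rank 1

assert np.linalg.matrix_rank(G) == 1

# The single non-zero eigenvalue equals a^T a, and its eigenvector is
# parallel to a; the remaining eigenvectors satisfy a^T x = 0.
eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
assert np.isclose(eigvals[-1], np.sum(a**2))
assert np.allclose(np.abs(eigvecs[:, -1]), (a / np.linalg.norm(a)).ravel())
```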
Indeed, independent of the size of $A$, there is a useful relation between the eigenvectors of $AA^T$ and the eigenvectors of $A^T A$, based on the property that $\operatorname{rank}(AA^T) = \operatorname{rank}(A^T A)$. That the ranks are identical implies that the numbers of non-zero eigenvalues are identical. Moreover, we can infer the eigenvectors of $A^T A$ from those of $AA^T$ and vice versa. The eigenvector decomposition of $AA^T$ is given by $AA^T v_i = \lambda_i v_i$. In case $A$ is not a square matrix and $AA^T$ is too large to compute the eigenvectors of efficiently (as frequently occurs in covariance matrix computation), it is easier to compute the eigenvectors of $A^T A$, given by $A^T A u_i = \lambda_i u_i$. Pre-multiplying both sides of this equation by $A$ yields
$$AA^T (A u_i) = \lambda_i (A u_i).$$
Now the originally sought eigenvectors $v_i$ of $AA^T$ are easily obtained as $v_i = A u_i$. Note that the resulting eigenvectors are not yet normalized.
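The trick above can be sketched in a few lines of NumPy. This is an illustrative example (the matrix shape is an assumption chosen so that $A^T A$ is the smaller of the two products):

```python
import numpy as np

# A tall matrix: A A^T is 6x6, while A^T A is only 3x3.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))

# Eigenpairs of the small matrix: A^T A u_i = lambda_i u_i
lam, U = np.linalg.eigh(A.T @ A)

# Pre-multiplying by A gives (A A^T)(A u_i) = lambda_i (A u_i),
# so v_i = A u_i is an (unnormalized) eigenvector of A A^T.
V = A @ U
for i in range(U.shape[1]):
    assert np.allclose(A @ A.T @ V[:, i], lam[i] * V[:, i])

# Normalize the recovered eigenvectors to unit length.
V = V / np.linalg.norm(V, axis=0)
```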