# Why is $\kappa(A^{-1})=\kappa(A)$?

In my syllabus we have an alternative definition of the condition number of a matrix:
$$\kappa(A)=\frac{\max_{\|\vec{y}\|=1}\|A\vec{y}\|}{\min_{\|\vec{y}\|=1}\|A\vec{y}\|}$$
It also says that it follows from this definition that $\kappa(A^{-1})=\kappa(A)$, but gives no explanation. So my question is: why is $\kappa(A^{-1})=\kappa(A)$?
doraemonjrlf
First, accept the convention that if $A^{-1}$ does not exist, then $\kappa(A)=\infty$.
Once you've postulated that $A^{-1}$ exists, you want to show
$$\max_{\|x\|=1}\|A^{-1}x\|=\left(\min_{\|x\|=1}\|Ax\|\right)^{-1}.$$
To see this, take a minimizer $x^{*}$ on the right-hand side. Let $b=\frac{Ax^{*}}{\|Ax^{*}\|}$, a unit vector; then $A^{-1}b=\frac{x^{*}}{\|Ax^{*}\|}$, so $\|A^{-1}b\|=\frac{1}{\|Ax^{*}\|}$, which shows the left side is at least the right side. Conversely, for any unit vector $b$, put $x=\frac{A^{-1}b}{\|A^{-1}b\|}$; then $\|Ax\|=\frac{1}{\|A^{-1}b\|}\geq\min_{\|x\|=1}\|Ax\|$, so $\|A^{-1}b\|\leq\left(\min_{\|x\|=1}\|Ax\|\right)^{-1}$, giving the other direction. The same argument with the roles of $\max$ and $\min$ swapped gives $\min_{\|x\|=1}\|A^{-1}x\|=\left(\max_{\|x\|=1}\|Ax\|\right)^{-1}$, and taking the ratio of the two identities yields $\kappa(A^{-1})=\kappa(A)$.
Intuitively, if $A$ maps $x^{*}$ to some much smaller vector $b$, then $A^{-1}$ maps $b$ back to $x^{*}$, which is much bigger than $b$. Juggling the normalizations obfuscates this intuition a little bit, I think.
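This identity is also easy to sanity-check numerically. A minimal sketch in Python/NumPy, assuming the 2-norm, where $\max_{\|x\|=1}\|Ax\|$ and $\min_{\|x\|=1}\|Ax\|$ are the largest and smallest singular values of $A$:

```python
import numpy as np

# Random (almost surely invertible) test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

def kappa(M):
    # For the 2-norm, max/min of ||Mx|| over unit x are the extreme
    # singular values; np.linalg.svd returns them in descending order.
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] / s[-1]

A_inv = np.linalg.inv(A)

s_A = np.linalg.svd(A, compute_uv=False)
s_Ainv = np.linalg.svd(A_inv, compute_uv=False)

# max ||A^{-1}x|| over unit x equals 1 / (min ||Ax|| over unit x)
assert np.isclose(s_Ainv[0], 1.0 / s_A[-1])

# ... and hence kappa(A^{-1}) == kappa(A)
assert np.isclose(kappa(A_inv), kappa(A))
```

The assertions pass because inverting $A$ turns each singular value $\sigma$ into $1/\sigma$, swapping the largest and smallest.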

ghulamu51
For an invertible matrix and the 2-norm, your numerator and denominator are the largest and smallest singular values of $A$ (for a normal matrix these coincide with the largest and smallest absolute values of its eigenvalues). The singular values of the inverse matrix are the reciprocals of the singular values of $A$, so inverting just swaps the roles of largest and smallest, and the ratio is unchanged.
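Writing $\sigma_{\max}$ and $\sigma_{\min}$ for the largest and smallest singular values (and assuming the 2-norm), this answer amounts to the one-line computation:

```latex
\kappa(A^{-1})
  = \frac{\sigma_{\max}(A^{-1})}{\sigma_{\min}(A^{-1})}
  = \frac{1/\sigma_{\min}(A)}{1/\sigma_{\max}(A)}
  = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}
  = \kappa(A).
```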