
Suppose that there is a positive definite matrix $\mathbf{A}\in \mathbb{R}^{n\times n}$ and a vector $\mathbf{b}\in \mathbb{R}^{n}$. Then the minimization of a quadratic function with a linear term can be done in closed form:
$$\underset{\mathbf{x}\in \mathbb{R}^{n}}{\operatorname{arg\,min}}\left(\frac{1}{2}\mathbf{x}^{\mathsf{T}}\mathbf{A}\mathbf{x}-\mathbf{b}^{\mathsf{T}}\mathbf{x}\right)=\mathbf{A}^{-1}\mathbf{b}$$

I came across this in a machine learning book, but the book didn't provide a proof, and I don't see why this identity holds. I hope someone can help me with it. I find that many machine learning books skip all of the proofs, which makes me uncomfortable.
Melina Richard
One way of proving this is to "complete the square":

$$\frac{1}{2}\mathbf{x}^{\mathsf{T}}\mathbf{A}\mathbf{x}-\mathbf{b}^{\mathsf{T}}\mathbf{x}=\frac{1}{2}\left(\mathbf{x}-\mathbf{A}^{-1}\mathbf{b}\right)^{\mathsf{T}}\mathbf{A}\left(\mathbf{x}-\mathbf{A}^{-1}\mathbf{b}\right)-\frac{1}{2}\mathbf{b}^{\mathsf{T}}\mathbf{A}^{-1}\mathbf{b}.$$
Because $\mathbf{A}$ is positive definite, the first term is never less than $0$, so the whole expression is never less than $-\frac{1}{2}\mathbf{b}^{\mathsf{T}}\mathbf{A}^{-1}\mathbf{b}$, and it attains that value exactly when $\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$.
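A quick numerical sanity check of the closed form (a sketch in NumPy; the random matrix construction and tolerances here are my own choices, not from the book): build a positive definite $\mathbf{A}$, solve $\mathbf{A}\mathbf{x}=\mathbf{b}$ for the claimed minimizer, and confirm that perturbing it never decreases the objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Construct a positive definite A: M M^T is PSD, adding n*I makes it PD.
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)

# Claimed closed-form minimizer A^{-1} b (via a linear solve, not an explicit inverse).
x_star = np.linalg.solve(A, b)

def f(x):
    """The quadratic objective (1/2) x^T A x - b^T x."""
    return 0.5 * x @ A @ x - b @ x

# The gradient A x - b should vanish at the minimizer.
print(np.allclose(A @ x_star, b))

# Perturbing x_star in random directions should never decrease f.
for _ in range(1000):
    d = rng.standard_normal(n)
    assert f(x_star + 1e-3 * d) >= f(x_star)
```

Using `np.linalg.solve` rather than forming `np.linalg.inv(A) @ b` is the standard numerically stable way to evaluate $\mathbf{A}^{-1}\mathbf{b}$.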