How do you find the point on the curve $y=2{x}^{2}$ closest to (2,1)?

Lokubovumn
2022-08-13
Answered

merneh7

Answered 2022-08-14

Every point on the curve has the form: $(x,2{x}^{2})$

The closest point is the one whose distance is minimum.

The distance between $(x,2{x}^{2})$ and (2,1) is $\sqrt{{(x-2)}^{2}+{(2{x}^{2}-1)}^{2}}$

Because the square root is an increasing function, we can minimize the distance by minimizing:

$f\left(x\right)={(x-2)}^{2}+{(2{x}^{2}-1)}^{2}$

$f\left(x\right)=4{x}^{4}-3{x}^{2}-4x+5$

To minimize f, set its derivative equal to zero:

$f\prime \left(x\right)=16{x}^{3}-6x-4$

Now use whatever tools you have available to solve this cubic equation. (Formula, graphing technology, successive approximation, whatever you have.)
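For instance, a few steps of Newton's method home in on the root quickly. This is a sketch in Python; the helper names are mine, not from the answer, and any root finder would do:

```python
# Newton's method for f'(x) = 16x^3 - 6x - 4 = 0.
def fp(x):
    """First derivative of f: where it vanishes, f has a critical point."""
    return 16 * x**3 - 6 * x - 4

def fpp(x):
    """Second derivative of f, used as Newton's slope."""
    return 48 * x**2 - 6

x = 1.0  # starting guess: f'(0) < 0 and f'(1) > 0 bracket the root
for _ in range(20):
    x -= fp(x) / fpp(x)

print(round(x, 3))  # → 0.824
```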

The critical number is approximately 0.824.

Since f'(0) = -4 < 0 and f'(1) = 6 > 0, the derivative changes sign from negative to positive, so f has a local minimum at x ≈ 0.824.

Since f is a polynomial with only one critical number, "local" implies "global."

The minimum distance therefore occurs at $x\approx 0.824$, where $y=2{x}^{2}\approx 1.358$.

The closest point is approximately (0.824, 1.358).
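As a quick numeric check (a Python sketch with my own variable names), plugging x ≈ 0.824 back in recovers the point on the curve and the actual minimum distance:

```python
import math

x = 0.824                        # approximate critical number
y = 2 * x**2                     # corresponding y on the curve
dist = math.hypot(x - 2, y - 1)  # distance to (2, 1)

print(round(y, 3))     # → 1.358
print(round(dist, 3))  # → 1.229
```

Note that 1.358 is the y-coordinate of the closest point, not a distance; the minimum distance itself is about 1.229.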


asked 2022-08-12

Let's say I have $480 to fence in a rectangular garden. The fencing for the north and south sides of the garden costs $10 per foot and the fencing for the east and west sides costs $15 per foot. How can I find the dimensions of the largest possible garden?

asked 2022-08-12

A rectangle is constructed with its base on the x-axis and two of its vertices on the parabola $y=49-{x}^{2}$. What are the dimensions of the rectangle with the maximum area?

asked 2022-07-23

How do you find the area of the largest isosceles triangle having a perimeter of 18 meters?

asked 2022-06-23

I have a very simple linear problem:

$\begin{array}{rl}\underset{x}{min}& \text{}{x}^{2}\\ \text{s.t.}& \text{}{a}_{1}{x}_{1}+{a}_{2}{x}_{2}=b\end{array}$

Suppose I want to write this problem equivalently as in Find the equivalent linear program. Unlike the problem in the link, I have equality. Can I write it equivalently as:

$\begin{array}{rl}\underset{x,\alpha ,\beta}{min}& \text{}{x}^{2}\\ \text{s.t.}& \text{}{a}_{1}{x}_{1}=\alpha b,\text{}{a}_{2}{x}_{2}=\beta b,\text{}\alpha +\beta =1.\end{array}$

The converse is intuitive: Given $\{x,\alpha ,\beta \}$ feasible for the second problem, adding the first and second constraints gives the constraint of the first problem. But the forward part is not clear, especially because I have never seen an equality constraint written like this. Any help would be highly appreciated.


asked 2022-05-09

Could you help me solve this problem, please?

1. Maximize ${x}^{t}y$ subject to ${x}^{t}Qx\le 1$ (where $Q$ is positive definite).

What I tried: using KKT, I get $-\sqrt{{y}^{t}{Q}^{-1}y}$ as the maximum instead of $\sqrt{{y}^{t}{Q}^{-1}y}$ (which I believe is the maximum), and I don't see why. Also, since ${x}^{t}y$ is linear (both convex and concave), I don't know how to conclude.

2. Conclude that $({x}^{t}y{)}^{2}\le ({x}^{t}Qx)({y}^{t}{Q}^{-1}y)$ for all $x,y$ (generalized Cauchy–Schwarz).


asked 2022-07-10

Suppose that there is a positive definite matrix $\mathbf{A}\in {\mathbb{R}}^{n\times n}$, and a vector $\mathbf{b}\in {\mathbb{R}}^{n}$, then minimization of quadratic functions with linear terms can be done in closed form as

$\mathrm{arg}\underset{\mathbf{x}\in {\mathbb{R}}^{n}}{min}(\frac{1}{2}{\mathbf{x}}^{\mathsf{T}}\mathbf{A}\mathbf{x}-{\mathbf{b}}^{\mathsf{T}}\mathbf{x})={\mathbf{A}}^{-1}\mathbf{b}$

I came across this in a machine learning book, but the book didn't provide a proof, and I'd like to understand why it holds. Many machine learning books skip the proofs entirely, which makes me uncomfortable.
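Not a proof, but here is a quick numeric sanity check (a Python sketch; the particular $\mathbf{A}$ and $\mathbf{b}$ are arbitrary examples of mine). The gradient of $\frac{1}{2}{\mathbf{x}}^{\mathsf{T}}\mathbf{A}\mathbf{x}-{\mathbf{b}}^{\mathsf{T}}\mathbf{x}$ is $\mathbf{A}\mathbf{x}-\mathbf{b}$, which vanishes exactly at $\mathbf{x}={\mathbf{A}}^{-1}\mathbf{b}$, and perturbing that point never decreases the objective:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)     # positive definite by construction
b = rng.standard_normal(3)

x_star = np.linalg.solve(A, b)  # the claimed minimizer A^{-1} b

def f(x):
    return 0.5 * x @ A @ x - b @ x

# The gradient A x - b vanishes at x_star ...
assert np.allclose(A @ x_star, b)

# ... and random perturbations never decrease the objective.
for _ in range(100):
    d = rng.standard_normal(3) * 0.1
    assert f(x_star + d) >= f(x_star)

print("check passed")  # → check passed
```

The underlying identity is $f(\mathbf{x}^{\ast}+\mathbf{d})-f(\mathbf{x}^{\ast})=\frac{1}{2}{\mathbf{d}}^{\mathsf{T}}\mathbf{A}\mathbf{d}\ge 0$ for positive definite $\mathbf{A}$, which is what a full proof would formalize.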
