daniel suriya
2022-07-19

asked 2022-08-21

How do you find the point on the graph $y=\sqrt{x}$ that is closest to the point $(4,0)$?
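One way to set this up (a sketch, not an official answer): for a point $(x,\sqrt{x})$ on the curve, the squared distance to $(4,0)$ is $D(x)=(x-4)^2+x$, since $y^2=x$. Minimizing $D$ minimizes the distance itself.

```python
import math

# Sketch: squared distance from (x, sqrt(x)) to (4, 0) is
#   D(x) = (x - 4)**2 + x,
# and D'(x) = 2*(x - 4) + 1 = 0 gives x = 7/2.

def D(x):
    return (x - 4.0) ** 2 + x

x_star = 7.0 / 2.0
point = (x_star, math.sqrt(x_star))   # closest point (3.5, sqrt(3.5))
dist = math.sqrt(D(x_star))           # minimal distance, sqrt(3.75)
```

The candidate $x=7/2$ can be confirmed by checking that $D$ increases on both sides of it.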

asked 2022-08-11

How do you find the dimensions that minimize the amount of cardboard used if a cardboard box without a lid is to have a volume of $8788\ {\mathrm{cm}}^{3}$?
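A sketch of one standard approach, assuming a square base of side $x$ (the symmetric case): the volume constraint $x^2h=8788$ gives $h=8788/x^2$, and the cardboard used (base plus four sides, no lid) is $A(x)=x^2+4\cdot 8788/x$.

```python
# Sketch: A'(x) = 2*x - 4*8788/x**2 = 0  =>  x**3 = 2*8788 = 17576,
# so x = 26 cm and h = 8788/26**2 = 13 cm.

V = 8788.0
x = (2 * V) ** (1 / 3)    # base side, 26 cm
h = V / x ** 2            # height, 13 cm
A = x ** 2 + 4 * V / x    # minimal cardboard area, 2028 cm^2
```

The clean numbers are no accident: $8788=4\cdot 13^3$, so $2V=26^3$.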

asked 2022-09-23

What are the radius, length and volume of the largest cylindrical package that may be sent using a parcel delivery service that will deliver a package only if the length plus the girth (distance around) does not exceed 108 inches?
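A sketch of the usual setup (my parametrization, not from the question): for a cylinder the girth is the circumference $2\pi r$, the constraint binds at the optimum, so $L+2\pi r=108$ and the volume is $V(r)=\pi r^2(108-2\pi r)$.

```python
import math

# Sketch: V(r) = pi*r**2*(108 - 2*pi*r) = 108*pi*r**2 - 2*pi**2*r**3,
# V'(r) = 216*pi*r - 6*pi**2*r**2 = 0  =>  r = 36/pi.

r = 36 / math.pi               # radius, about 11.46 in
L = 108 - 2 * math.pi * r      # length, exactly 36 in
V = math.pi * r ** 2 * L       # volume = 36**3/pi, about 14851 in^3
```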

asked 2022-08-12

A rectangle is constructed with its base on the x-axis and two of its vertices on the parabola $y=49-{x}^{2}$. What are the dimensions of the rectangle with the maximum area?
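A sketch of one approach: place the top corners at $(\pm x,\,49-x^2)$, so the area is $A(x)=2x(49-x^2)$.

```python
import math

# Sketch: A(x) = 2*x*(49 - x**2) = 98*x - 2*x**3,
# A'(x) = 98 - 6*x**2 = 0  =>  x = 7/sqrt(3).

x = 7 / math.sqrt(3)
width = 2 * x            # base width, 14/sqrt(3) ~ 8.08
height = 49 - x ** 2     # height, 98/3 ~ 32.67
area = width * height    # maximal area
```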

asked 2022-06-14

The definition of a linear program is the following:

Find a vector $x$ such that: $\min {c}^{T}x$, subject to $Ax=b$ and $x\ge 0$.

Generally, $b$ is assumed to be a fixed constant. However, is it possible to construct a program where the values of $b$ are part of the optimization? Could I include $b$ in the optimization by changing $Ax=b$ to $Ax-b=0$? If so, would I also be able to place constraints on $b$, such as $\sum b=1$ and $0<b<1$? Finally, would such a program be possible to solve efficiently?

I am trying to solve the linear program for the Wasserstein distance between two discrete distributions. In the standard case, $b$ represents the marginals for each data point. I know the marginals of the target distribution, but the marginals of my source distribution are unknown. I am wondering whether there is an efficient way to optimize the source marginals so that the Wasserstein distance is minimized.
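The augmentation itself does keep the problem linear: rewriting $Ax=b$ as $[A\mid -I]\begin{pmatrix}x\\ b\end{pmatrix}=0$ and adding $\sum b=1$, $b\ge 0$ as ordinary linear rows leaves an LP that any standard solver accepts. A minimal pure-Python sketch of the construction, on a made-up $2\times 2$ transport toy instance (the cost matrix and marginals below are my own example, not from the question):

```python
# Sketch: fold b into the decision vector, so A x = b becomes
# [A | -I] [x; b] = 0, with sum(b) = 1 and b >= 0 as extra linear
# constraints.  All constraints stay linear, so it remains an LP.

def augment(A):
    """Return [A | -I] so that A x = b becomes [A | -I] [x; b] = 0."""
    m = len(A)
    return [row + [-1.0 if j == i else 0.0 for j in range(m)]
            for i, row in enumerate(A)]

# Row-sum constraints of a 2x2 transport plan x = (x00, x01, x10, x11):
# each row sum must equal the unknown source marginal b_i.
A_rows = [[1.0, 1.0, 0.0, 0.0],   # x00 + x01 = b0
          [0.0, 0.0, 1.0, 1.0]]   # x10 + x11 = b1
A_aug = augment(A_rows)           # rows of [A | -I]

# Candidate solution: serve each target point from its cheapest source
# (target marginals are 0.5 each in this toy instance).
c = [1.0, 3.0, 2.0, 1.0]          # costs c00, c01, c10, c11
x = [0.5, 0.0, 0.0, 0.5]          # transport plan
b = [0.5, 0.5]                    # implied source marginals
z = x + b                         # stacked variable vector [x; b]

residuals = [sum(a * v for a, v in zip(row, z)) for row in A_aug]
cost = sum(ci * xi for ci, xi in zip(c, x))
```

One caveat on the strict bounds: LP solvers work with closed constraints, so $0<b<1$ would in practice be stated as $0\le b\le 1$ (and with $\sum b=1$, the upper bound is implied).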

asked 2022-06-01

Find ${x}_{1},{x}_{2}$ such that $\underset{{x}_{1},{x}_{2}}{min}{x}_{1}^{2}+2{x}_{1}{x}_{2}$ where ${x}_{1},{x}_{2}$ are subject to constraint ${x}_{1}^{2}{x}_{2}\ge 10$.

I have converted the constraint into the equality ${x}_{1}^{2}{x}_{2}-10-{s}^{2}=0$ using a slack variable $s$, and setting the gradient of the Lagrangian to zero gives 4 equations in 4 unknowns:

$\begin{array}{rl}{x}_{1}^{2}{x}_{2}-10-{s}^{2}& =0\\ 2{x}_{1}+2{x}_{2}& =\lambda (2{x}_{1}{x}_{2})\\ 2{x}_{1}& =\lambda {x}_{1}^{2}\\ 0& =\lambda (-2s)\end{array}$

But I am unsure of how to proceed from here. Additionally, I am struggling to find the dual problem.
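One standard way to proceed from the last equation (a sketch, not a complete treatment): either $\lambda=0$ or $s=0$. But $\lambda=0$ forces $x_1=0$ in the third equation, which violates the constraint, so $s=0$ and the constraint is active. With $x_1\ne 0$, the third equation gives $\lambda=2/x_1$, and substituting into the second yields $x_1=x_2$, hence $x_1^3=10$. A quick numeric check of this stationary point:

```python
# Sketch: verify x1 = x2 = 10**(1/3), lambda = 2/x1, s = 0 against the
# four stationarity equations above.  (Whether this stationary point is
# actually a minimizer needs a separate second-order argument.)

x1 = 10 ** (1 / 3)
x2 = x1
lam = 2 / x1
s = 0.0

eqs = [
    x1**2 * x2 - 10 - s**2,            # active constraint
    2*x1 + 2*x2 - lam * (2*x1*x2),     # stationarity in x1
    2*x1 - lam * x1**2,                # stationarity in x2
    lam * (-2 * s),                    # stationarity in s
]
objective = x1**2 + 2 * x1 * x2        # 3 * 10**(2/3), about 13.9
```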

asked 2022-06-24

Suppose that ${p}_{1},\dots ,{p}_{n}$ are nonnegative real numbers such that ${p}_{1}+\cdots +{p}_{n}=1$; denote the corresponding set of vectors by ${\mathrm{\Delta}}_{n}$.

I am interested in the following function, $f:{\mathrm{\Delta}}_{n}\to {\mathbb{R}}_{+}$, given by

$f(p)=\sum _{k=1}^{n}\frac{{p}_{k}}{\sum _{j=k}^{n}{p}_{j}}.$

We always have $f(p)\le n$, using the lower bound $\sum _{j\ge k}{p}_{j}\ge {p}_{k}$. However I feel this must be a loose bound on the quantity

$\underset{p\in {\mathrm{\Delta}}_{n}}{sup}f(p),$

since equality in every term would require $\sum _{j>k}{p}_{j}=0$ for all $k$, which cannot happen. Hence, I am wondering: how large can $f(p)$ be when evaluated over the simplex?
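A quick numeric experiment supports the suspicion that $n$ is loose (the test points below are my own choices, not from the question): the uniform distribution gives the harmonic number $H_n$, since its $k$-th term is $1/(n-k+1)$, and a geometrically decaying profile already scores higher.

```python
# Sketch: evaluate f(p) = sum_k p_k / (p_k + ... + p_n) at a few points
# of the simplex to probe how large it can get.

def f(p):
    n = len(p)
    tails = [sum(p[k:]) for k in range(n)]   # tails[k] = p_{k+1} + ... + p_n (0-indexed)
    return sum(p[k] / tails[k] for k in range(n) if tails[k] > 0)

n = 4
uniform = [1.0 / n] * n
H_n = sum(1.0 / k for k in range(1, n + 1))  # f(uniform) = H_n

geometric = [2.0 ** -(k + 1) for k in range(n)]  # (1/2, 1/4, 1/8, 1/16)
geometric[-1] *= 2                               # bump last weight so the sum is 1
```

Here $f$ at the geometric point $(1/2,1/4,1/8,1/8)$ is $2.5$, versus $H_4\approx 2.08$ for uniform, so the supremum exceeds $H_n$; the experiment does not settle what the supremum actually is.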
