Kyla Ayers

## Answered question

2022-06-14

The definition of a linear program is as follows:

Find a vector $x$ that minimizes $c^{T}x$, subject to $Ax=b$ and $x\ge 0$.

Generally, $b$ is assumed to be a fixed constant. However, is it possible to construct a program where the values of $b$ are part of the optimization? Could I include $b$ in the optimization by changing $Ax=b$ to $Ax-b=0$? If so, would I also be able to place constraints on $b$, such as $\sum_i b_i=1$ and $0<b_i<1$? Finally, would such a program be possible to solve efficiently?

I am trying to solve the linear program for the Wasserstein distance between two discrete distributions. In the standard case, $b$ represents the marginals for each data point. I know the marginals of the target distribution, but the marginals of my source distribution are unknown. I am wondering if there is an efficient way to optimize the marginals of my source distribution so that the Wasserstein distance is minimized.
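For concreteness, the setup can be sketched as a single LP in which the source marginals are extra decision variables alongside the transport plan. Everything in this sketch is hypothetical (the point locations, the squared-distance cost, the target marginals), and SciPy's `linprog` is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy discrete optimal-transport instance (all values hypothetical):
# ns source points with unknown marginals a, nt target points with
# known marginals b, squared-distance cost.
ns, nt = 4, 3
src = np.linspace(0.0, 1.0, ns)
tgt = np.array([0.1, 0.4, 0.9])
C = (src[:, None] - tgt[None, :]) ** 2        # ns x nt cost matrix
b = np.array([0.2, 0.5, 0.3])                 # known target marginals

# Variables z = [vec(P); a]: the ns*nt entries of the transport plan P
# (row-major) followed by the ns unknown source marginals a.
# Equality constraints:
#   P 1 - a = 0   (row sums equal the variable source marginals)
#   P^T 1  = b    (column sums equal the fixed target marginals)
nP = ns * nt
row_sum = np.kron(np.eye(ns), np.ones((1, nt)))   # picks out each row of P
col_sum = np.kron(np.ones((1, ns)), np.eye(nt))   # picks out each column of P
A_eq = np.block([
    [row_sum, -np.eye(ns)],
    [col_sum, np.zeros((nt, ns))],
])
b_eq = np.concatenate([np.zeros(ns), b])
cost = np.concatenate([C.ravel(), np.zeros(ns)])  # a itself has zero cost
bounds = [(0, None)] * nP + [(0.0, 1.0)] * ns

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
P = res.x[:nP].reshape(ns, nt)
a = res.x[nP:]
# sum(a) = sum(b) = 1 follows automatically from the marginal constraints.
```

One caveat: with the source marginals left completely free, the optimum simply sends each target mass to its cheapest source point, driving the distance as low as the cost matrix allows, so in practice extra constraints or regularization on $a$ are usually needed for the answer to be meaningful.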

### Answer & Explanation

Arcatuert3u

Beginner · 2022-06-15

If you want $b$ to be variable, yes, you can move it to the left-hand side and change the right-hand side to 0. Linear side constraints on $b$, such as $\sum_i b_i=1$ together with bounds $0 \le b_i \le 1$, keep the problem a linear program, so it can still be solved efficiently (note that the strict inequalities $0 < b_i < 1$ must be relaxed to non-strict bounds, since an LP cannot enforce strict inequality). An optimization modeling language would perform such transformations on your behalf. If you are instead directly using a solver that requires one constraint matrix and constant vectors for the objective and right-hand side, you will need to explicitly augment the matrix and vectors to accommodate the new variables.
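A minimal sketch of that augmentation, with random placeholder data and SciPy's `linprog` standing in for "a solver that takes one constraint matrix":

```python
import numpy as np
from scipy.optimize import linprog

# Original LP data (placeholder values): minimize c^T x s.t. A x = b,
# x >= 0, but with b promoted to a decision variable.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.random((m, n))
c = rng.random(n)

# Augmented variable vector z = [x; b].  Constraints:
#   A x - b = 0      ->  [A | -I] z = 0
#   sum(b) = 1       ->  [0 | 1^T] z = 1
# Bounds: x >= 0 and 0 <= b <= 1 (the strict 0 < b < 1 is relaxed,
# since an LP cannot enforce strict inequalities).
A_eq = np.block([
    [A, -np.eye(m)],
    [np.zeros((1, n)), np.ones((1, m))],
])
b_eq = np.concatenate([np.zeros(m), [1.0]])
c_aug = np.concatenate([c, np.zeros(m)])   # b contributes nothing to the cost
bounds = [(0, None)] * n + [(0.0, 1.0)] * m

res = linprog(c_aug, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_opt, b_opt = res.x[:n], res.x[n:]
```

The recovered pieces satisfy the original relationship $Ax_{\text{opt}} = b_{\text{opt}}$ with $\sum_i b_i = 1$, which is exactly the transformation described above.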
