
Dania Mueller

Answered question

2022-06-05

How was the conditional probability obtained in this equation?
I'm reading about inverse problems and the Bayesian approach, and I'm having trouble understanding how the following equations were obtained.
We consider the following equation initially.
$y = G(u) + \eta$
Here $y$ is a set of measured data, $G$ is a mathematical model, and $u$ are the parameters of the model. The noise present in the observed data is given by $\eta$ (zero-mean noise).
The authors then go on to describe the Bayesian approach, and they say that the likelihood function, that is, the probability of $y$ given $u$, is given by:
$\rho(y \mid u) = \rho(y - G(u))$
How did they derive this relation above?

Answer & Explanation

crociandomh

Beginner · 2022-06-06 · Added 19 answers

Note that since $G$ is known, when $u$ is given, so is $G(u)$. So,
$$
\begin{aligned}
P(y = y_0 \mid u = u_0) &= P(y = y_0 \mid G(u) = G(u_0)) \\
&= P(G(u) + \eta = y_0 \mid G(u) = G(u_0)) \\
&= P(\eta = y_0 - G(u) \mid G(u) = G(u_0)) \\
&= P(\eta = y_0 - G(u_0)).
\end{aligned}
$$
Setting $\eta_0 = y_0 - G(u_0)$, we get that $P(y = y_0 \mid u = u_0) = P(\eta = \eta_0)$, or in other words:
$P(y \mid u) = P(\eta)$
Substitute $\eta = y - G(u)$ from the first equation and you get the authors' claim.
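
For concreteness, here is a minimal numerical sketch of this identity for a scalar observation. The forward model G, the Gaussian noise model, and all parameter values below are illustrative assumptions, not from the original post; the point is only that evaluating the likelihood directly as a density in $y$ centred at $G(u)$ gives the same number as evaluating the noise density at the residual $y - G(u)$.

```python
# Sketch: rho(y | u) equals the noise density evaluated at y - G(u),
# assuming zero-mean Gaussian noise and a made-up forward model G.
import numpy as np
from scipy.stats import norm

def G(u):
    # Hypothetical forward model: any deterministic map from parameters to data.
    return np.sin(u) + 0.5 * u

sigma = 0.1   # standard deviation of the zero-mean Gaussian noise eta
u0 = 1.3      # a fixed parameter value
rng = np.random.default_rng(0)
y0 = G(u0) + rng.normal(0.0, sigma)   # one simulated observation y = G(u) + eta

# Likelihood written directly as a Gaussian density in y, centred at G(u0) ...
lik_direct = norm.pdf(y0, loc=G(u0), scale=sigma)
# ... equals the noise density evaluated at the residual eta_0 = y0 - G(u0).
lik_residual = norm.pdf(y0 - G(u0), loc=0.0, scale=sigma)

print(lik_direct, lik_residual)   # identical up to floating-point error
```

Running this prints the same value twice, which is exactly the statement $\rho(y \mid u) = \rho(y - G(u))$ specialised to Gaussian noise.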
