I have a matrix A with null trace. What is the minimum number of linear measurements that I have to perform in order to determine A?

Cory Patrick 2022-06-25 Answered
I have a matrix $A$ with null trace.
What is the minimum number of linear measurements that I have to perform in order to determine $A$?
By a linear measurement I mean that I know the quantities $\langle A x_i, h_j \rangle$ for given $x_i$ and $h_j$.
Answers (1)

Reagan Madden
Answered 2022-06-26
Using unit vectors for both parts of the measurement you have $e_i^T A e_j = A_{ij}$, so if "one measurement" corresponds to one unique pair $(x_i, h_j)$, then each measurement of this form yields one entry of the matrix. For a generic $m \times n$ matrix you'd therefore need $mn$ measurements. Knowing the trace adds one linear constraint, so you can infer one entry from all the others, i.e. $mn - 1$ measurements. I cannot imagine how a different choice of measurement vectors could do better than this.
On the other hand, where I wrote $(x_i, h_j)$ with independently chosen vectors, your notation suggests something else. Were you thinking of combining a single $x$ vector with multiple $h$ vectors, or vice versa? If so, you will need $m$ vectors $h_j$ and $n$ vectors $x_i$, minus one combination due to the trace. I'll leave it to you to decide what you count as "one measurement" in this case.
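As a quick numeric sanity check (my own sketch, not part of the original answer): for a $3 \times 3$ traceless matrix, stack the $mn - 1$ entry-measurement functionals $e_i^T A e_j$ (dropping one entry) together with the trace constraint, and verify that the resulting linear system on the $mn$ entries has full rank, so $A$ is uniquely determined.

```python
import numpy as np

# Hypothetical setup: each measurement e_i^T A e_j is a linear functional on the
# mn entries of A. We take all entry functionals except one, then append the
# known constraint tr(A) = 0, and check the system determines A uniquely.
m = n = 3
rows = []
for i in range(m):
    for j in range(n):
        if (i, j) == (m - 1, n - 1):
            continue  # drop one entry measurement; the trace will recover it
        E = np.zeros((m, n))
        E[i, j] = 1.0
        rows.append(E.ravel())  # functional A -> A[i, j]
rows.append(np.eye(m).ravel())  # trace constraint: sum of diagonal entries

M = np.array(rows)
print(M.shape)                      # (9, 9): mn - 1 measurements + trace
print(np.linalg.matrix_rank(M))    # 9, so the traceless A is determined
```

The trace row has a component on the dropped entry $(m-1, n-1)$ that no other row touches, which is exactly why $mn - 1$ measurements suffice once the trace is known.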


You might be interested in

asked 2022-06-16
Given that $\sum_{k=1}^{\infty} \mu_0(S_{n,k}) < \mu(A_n) + \frac{\epsilon}{2^n}$, how do we justify the non-strict inequality $\sum_{n=1}^{\infty} \sum_{k=1}^{\infty} \mu_0(S_{n,k}) \le \sum_{n=1}^{\infty} \mu(A_n) + \epsilon$? Is there anything more to this than: if $a < b$, then necessarily $a \le b$?
asked 2022-06-18
I am reading Rudin's RCA and have a question regarding the proof of Jensen's inequality.
Let $\mu$ be a positive measure on a $\sigma$-algebra $\mathfrak{M}$ in a set $\Omega$, so that $\mu(\Omega) = 1$. If $f$ is a real function in $L^1(\mu)$, if $a < f(x) < b$ for all $x \in \Omega$, and if $\varphi$ is convex on $(a, b)$, then
$$\varphi\left(\int_\Omega f \, d\mu\right) \le \int_\Omega (\varphi \circ f) \, d\mu.$$
After a few steps Rudin obtains the following inequality:
$$\varphi(f(x)) - \varphi(t) - \beta(f(x) - t) \ge 0$$
for every $x \in \Omega$. Here, $t = \int_\Omega f \, d\mu \in (a, b)$ and $\beta$ is a real number. If $\varphi \circ f \in L^1(\mu)$, then the inequality can be obtained by integrating both sides. However, I am not sure how to proceed in the case $\varphi \circ f \notin L^1(\mu)$. Rudin states that in this case the integral $\int_\Omega (\varphi \circ f) \, d\mu$ is defined in the extended sense
$$\int_\Omega (\varphi \circ f) \, d\mu = \int_\Omega (\varphi \circ f)^+ \, d\mu - \int_\Omega (\varphi \circ f)^- \, d\mu,$$
where
$$\int_\Omega (\varphi \circ f)^+ \, d\mu = \infty \quad \text{and} \quad \int_\Omega (\varphi \circ f)^- \, d\mu \ \text{is finite.}$$
I am unable to show the existence of $\int_\Omega (\varphi \circ f) \, d\mu$ as defined above.
Any help would be greatly appreciated. Thank you.
asked 2022-06-15
I have a prediction $f(x)$ of some continuous process variable, based on an input variable $x$ (think: location). The prediction is incorrect, with the error being normally distributed with expected value $\mu$ and standard deviation $\sigma$.
Hence, the probability density function of $f(x)$ should be
$$\frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{f(x) - \mu}{\sigma}\right)^2}.$$
Is this correct? (No it is not, see answer below)
Now, I have a measurement $m$ of the process variable, based on an unknown input variable $x_m$. The measurement $m$ is assumed to be correct, but quantized to integral numbers.
Given a set of $x_i$ together with their predictions $f(x_i)$, how can I compute a probability that $x_m$ was in the vicinity of $x_i$?
I apologize if the question makes no sense.
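As a numeric aside (my own sketch with illustrative values of $\mu$ and $\sigma$, not part of the original question): the Gaussian density written above can be sanity-checked by verifying that it integrates to 1 over the real line.

```python
import numpy as np

# Illustrative values, not from the question.
mu, sigma = 1.5, 0.7

def pdf(y):
    # Gaussian density: exp(-((y - mu)/sigma)^2 / 2) / (sigma * sqrt(2*pi))
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Riemann sum over +/- 10 sigma; the truncated tails are negligible.
ys = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
total = float(np.sum(pdf(ys)) * (ys[1] - ys[0]))
print(round(total, 6))  # 1.0
```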
asked 2022-06-24
Problem:
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of random variables on a probability space $(\Omega, \mathcal{F}, P)$ such that $\lim_{i \to \infty} X_i = X$ a.e. Show that if $\sup_i E(X_i^2) < \infty$, then $E(X^2) < \infty$.
My Attempt:
I will try to explain as best I can. First, I have a version of Fatou's Lemma stating that if $\{X_i\}_{i=1}^{\infty}$ is a sequence of non-negative random variables, then $E(\liminf_{i \to \infty} X_i) \le \liminf_{i \to \infty} E(X_i)$.
If we let $Y_i = X_i^2$, then I have a sequence of non-negative random variables to work with. One concern of mine is this: can I assume that $\lim_{i \to \infty} X_i^2 = X^2$? I feel like that's necessary for what I've written below to work.
If I can make that assumption then we have
$$E(X^2) = E\Big(\liminf_{i \to \infty} X_i^2\Big) \ \text{(Is this justified?)} \le \liminf_{i \to \infty} E(X_i^2) \ \text{(application of Fatou's Lemma)} \le \sup_i E(X_i^2) \ \text{(property of real numbers)} < \infty \ \text{(by assumption)}.$$
asked 2022-06-15
Let $E \subseteq \mathbb{R}^n$ and $O_i = \{x \in \mathbb{R}^n : d(x, E) < \frac{1}{i}\}$, $i \in \mathbb{N}$. Show that if $E$ is compact, then $m(E) = \lim_{i \to \infty} m(O_i)$. Does the statement hold if $E$ is closed but not bounded, and if $E$ is open but bounded?
What I got was that since $\bigcap_i O_i = E$ and $O_1 \supseteq O_2 \supseteq \cdots$, we have that
$$m(E) = m\Big(\bigcap_i O_i\Big) = \lim_{i \to \infty} m(O_i)$$
by the continuity of the Lebesgue measure. Where is the fact that $E$ is compact needed here?
asked 2022-06-06
Let $X$ and $Y$ be two iid random variables on $(\Omega, \mathcal{F}, P)$. I want to show that $X - Y$ has a symmetric distribution. I know that $P_X(A) = P_Y(A)$ for every $A \in \mathcal{B}(\mathbb{R})$. How can I show that $P_{X-Y}(A) = P_{Y-X}(A)$ for every $A \in \mathcal{B}(\mathbb{R})$? I know that it should be enough to analyze the family of intervals $(-\infty, x]$, but I don't see how I can relate $P(Y \le x) = P(X \le x)$ to $P(X - Y \le x) = P(Y - X \le x)$.
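A quick Monte Carlo illustration of the claim (my own sketch, not a proof and not part of the original question): sampling iid pairs and comparing the empirical tail frequencies of $X - Y$ on both sides of $0$ shows the symmetry numerically.

```python
import numpy as np

# For iid X and Y, (X, Y) and (Y, X) have the same joint law, so X - Y and
# Y - X = -(X - Y) have the same distribution, i.e. X - Y is symmetric about 0.
rng = np.random.default_rng(1)
x = rng.exponential(size=100_000)  # an arbitrary (asymmetric!) common law
y = rng.exponential(size=100_000)
d = x - y

# P(X - Y <= -t) should match P(X - Y >= t) up to sampling noise.
for t in (0.5, 1.0):
    left = np.mean(d <= -t)
    right = np.mean(d >= t)
    print(abs(left - right) < 0.01)  # True for both t
```

Note the common law here is exponential, which is itself far from symmetric; the symmetry of the difference comes only from the iid assumption.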
asked 2022-05-15
Consider the following stochastic differential equation:
$$dY_t = Z_t \, dW_t$$
and terminal condition $Y_T = b$, for which $E[|b|^2] < \infty$ holds. Furthermore, $b$ is adapted to the filtration generated by the Brownian motion only at terminal time $T$.
$Z_t$ is a predictable square-integrable process, so the right-hand side $\int_0^t Z_s \, dW_s$ is a martingale.
Why is the solution $Y_t$ adapted to the underlying filtration? If I rewrite the equation, I get:
$$Y_t = Y_T - \int_t^T Z_s \, dW_s = b - \int_t^T Z_s \, dW_s.$$
Since $b$ is only measurable w.r.t. the filtration at terminal time $T$, $Y_t$ cannot be adapted.
Where did I make a mistake?
