
Rapsinincke

Answered question

2022-07-09

Here is a question about compressed sensing. Let us denote the $k$-sparse signal by $x \in \mathbb{R}^n$, the measurement/sensing matrix by $A \in \mathbb{R}^{m \times n}$, and the measurement by $y = Ax \in \mathbb{R}^m$.
In the noiseless situation, we know that the signal $x$ can be perfectly reconstructed if $A$ satisfies the restricted isometry property. Denote the reconstructed signal by $\hat{x}$. Now I am more interested in the reconstruction performance (i.e. the mean square error between $x$ and $\hat{x}$) than in uniqueness or perfect reconstruction. It is intuitive that with a larger number of measurements $m$ the reconstruction performance should improve, but I could not find references for this idea. Is there a mathematical proof of this relationship?
Could anyone help me? Thanks in advance!

Answer & Explanation

sniokd

Beginner
2022-07-10
Added 22 answers

I cannot give you the exact answer here, since a lot of mathematical concepts would need to be introduced first, but you can find the answer yourself in this book.
By the way, the error really depends on which method you use to construct $\hat{x}$, for example $\ell_1$ minimization, iterative hard thresholding, or orthogonal matching pursuit. There are many methods, and their error guarantees are different.
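To give a flavor of what such a guarantee looks like (this is only a sketch of the typical form, not a quotation from the book): for $\ell_1$ minimization, i.e. $\hat{x} = \arg\min_z \|z\|_1$ subject to $Az = y$, a classical result states that if the RIP constant satisfies $\delta_{2k}(A) < \sqrt{2} - 1$, then
$$\|\hat{x} - x\|_2 \le \frac{C_0}{\sqrt{k}}\, \sigma_k(x)_1,$$
where $\sigma_k(x)_1$ is the $\ell_1$ error of the best $k$-term approximation of $x$ and $C_0$ depends only on $\delta_{2k}$. For an exactly $k$-sparse $x$ the right-hand side is zero, which is just the perfect-reconstruction statement; the quantitative behaviour becomes interesting for compressible signals and in the noisy case.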
Also, there are constraints on the number of measurements $m$, but the sparsity $k$ plays a more prominent role than $m$ in the error analysis (if I remember correctly).
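The place where $m$ enters is through the RIP itself: for a suitably normalized Gaussian random matrix $A$, the RIP of order $2k$ with constant $\delta_{2k} \le \delta$ holds with high probability once
$$m \ge C\, \delta^{-2}\, k \log(n/k)$$
for an absolute constant $C$. So, roughly speaking, increasing $m$ (for fixed $k$ and $n$) lets you achieve a smaller RIP constant, which in turn shrinks the constants in the error bounds above. This is how the "more measurements, better reconstruction" intuition is usually made precise, rather than through an explicit MSE-versus-$m$ formula.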
In this book they consider the noisy case, i.e. $y = Ax + e$. But that does not matter: you can simply drop the error term and the results still hold.
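Concretely, the noisy-case bound typically has the form (again just a sketch; the constants and the exact RIP condition vary between references): if $\|e\|_2 \le \varepsilon$ and $\hat{x} = \arg\min_z \|z\|_1$ subject to $\|Az - y\|_2 \le \varepsilon$, then
$$\|\hat{x} - x\|_2 \le \frac{C_0}{\sqrt{k}}\, \sigma_k(x)_1 + C_1 \varepsilon,$$
and setting $\varepsilon = 0$ gives back the noiseless statement, which is what I meant by removing the error term.

If you just want to see the MSE-versus-$m$ trend empirically, here is a minimal sketch of a toy experiment (mine, not from the book) using orthogonal matching pursuit from scikit-learn; the sensing matrix is Gaussian and the parameter values are arbitrary. Keep in mind that the exact scaling of the MSE with $m$ also depends on how $A$ and the noise are normalized.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k = 256, 8          # signal length and sparsity (arbitrary choices)
noise_std = 0.01       # per-measurement noise level (arbitrary)

# fixed k-sparse ground-truth signal x
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

for m in (32, 64, 128, 192):
    mses = []
    for _ in range(20):  # average over random sensing matrices and noise draws
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian matrix, columns have unit norm on average
        y = A @ x + noise_std * rng.standard_normal(m)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        x_hat = omp.fit(A, y).coef_  # reconstructed signal
        mses.append(np.mean((x_hat - x) ** 2))
    print(f"m = {m:3d}   empirical MSE = {np.mean(mses):.3e}")

You should see the printed MSE decrease as $m$ grows, which matches the intuition in the question.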
