Thursday, April 6, 2017, 3:30pm
Room 306, Statistics Building 1130

Miles Lopes
University of California, Davis

The theory of Compressed Sensing (CS) asserts that an unknown p-dimensional signal x can be accurately recovered from an underdetermined set of n linear measurements with n < p, provided that x is sufficiently sparse. However, in applications, the degree of sparsity ||x||_0 is typically unknown, and the problem of directly estimating ||x||_0 has been a longstanding gap between theory and practice. A closely related issue is that ||x||_0 is a highly idealized measure of sparsity, and for real signals whose entries are not exactly equal to 0, the value ||x||_0 = p is not a useful description of compressibility. In previous work on these problems, we considered an alternative measure of "soft" sparsity, and designed a procedure to estimate it without relying on sparsity assumptions. The present work offers a new deconvolution-based method for estimating unknown sparsity, which has wider applicability and sharper theoretical guarantees. In particular, we propose an estimator whose relative error converges at a parametric rate, even when p/n diverges. Our main results describe the limiting distribution of the estimator, as well as connections to Basis Pursuit Denoising, the Lasso, deterministic measurement matrices, and inference problems in CS.

(IEEE Transactions on Information Theory, 62, 2016)

http://anson.ucdavis.edu/~melopes/
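As a rough illustration of why ||x||_0 = p is uninformative for real signals, the sketch below contrasts the hard sparsity count with one common "soft" sparsity measure, the numerical sparsity s(x) = ||x||_1^2 / ||x||_2^2. The abstract does not specify which soft measure is used, so treat s(x) here as an illustrative assumption rather than the talk's exact quantity.

```python
import numpy as np

def hard_sparsity(x):
    """||x||_0: the number of nonzero entries (idealized sparsity)."""
    return np.count_nonzero(x)

def soft_sparsity(x):
    """Numerical sparsity s(x) = ||x||_1^2 / ||x||_2^2.

    An illustrative 'soft' measure: it equals ||x||_0 when all nonzero
    entries have equal magnitude, lies in [1, p] for any nonzero x,
    and varies continuously with the entries.
    """
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x, 1) ** 2 / np.linalg.norm(x, 2) ** 2

# A signal with a few large entries and many tiny (but nonzero) ones.
p = 1000
x = np.zeros(p)
x[:10] = 1.0                               # 10 large entries
x[10:] = 1e-6 * np.random.randn(p - 10)    # tiny noise elsewhere

print(hard_sparsity(x))   # p = 1000: uninformative about compressibility
print(soft_sparsity(x))   # approximately 10: reflects effective sparsity
```

On a vector like this, ||x||_0 jumps to p as soon as any entry is nonzero, while s(x) stays near the number of significant entries, which is the kind of behavior a practical sparsity measure needs.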