Many high-dimensional sparse learning problems are formulated as nonconvex
optimization problems. A popular approach to solving these nonconvex problems
is through convex relaxations such as linear and semidefinite programming. In
this paper, we study the statistical limits of convex relaxations.
In particular, we consider two problems: mean estimation for a sparse principal
submatrix and edge probability estimation for the stochastic block model. We
exploit the sum-of-squares relaxation hierarchy to sharply characterize the
limits of a broad class of convex relaxations. Our results show that statistical
optimality must be compromised to achieve computational tractability via convex
relaxations. Compared with existing computational lower bounds for statistical
problems, which apply to general polynomial-time algorithms and rely on
computational hardness hypotheses for problems such as planted clique
detection, our theory focuses on a broad class of convex relaxations and does
not rely on unproven hypotheses.