Truncated Linear Regression in High Dimensions

As in standard linear regression, in truncated linear regression we are given access to observations (A_i, y_i); the difference is that an observation is revealed only when its dependent variable y_i falls in some truncation set S.
–Neural Information Processing Systems
As a corollary, our guarantees imply a computationally efficient and information-theoretically optimal algorithm for compressed sensing with truncation, which may arise from measurement saturation effects. Our result follows from a statistical and computational analysis of the Stochastic Gradient Descent (SGD) algorithm for solving a natural adaptation of the LASSO optimization problem that accommodates truncation. This generalizes the works of both: (1) Daskalakis et al. [9], where no regularization is needed due to the low dimensionality of the data, and (2) Wainwright [27], where the objective function is simple due to the absence of truncation. To handle truncation and high dimensionality simultaneously, we develop new techniques that not only generalize the existing ones but, we believe, are of independent interest.
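The truncation-aware LASSO-plus-SGD approach the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the N(0, 1) Gaussian noise model, the one-sided truncation set S = [0, ∞), the step size, and the soft-thresholding proximal step for the ℓ1 penalty are all illustrative assumptions. Under these assumptions the observed density of a sample with projection w = ⟨A_i, x⟩ is φ(y − w)/Φ(w), so the per-sample negative log-likelihood is 0.5·(y − w)² + log Φ(w) up to a constant, and its w-derivative is −(y − w) + φ(w)/Φ(w).

```python
import numpy as np
from math import erf, exp, pi, sqrt

def phi(t):
    """Standard normal pdf."""
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def Phi(t):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (the LASSO shrinkage step)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def truncated_lasso_sgd(A, y, lam=0.05, lr=0.005, epochs=10, seed=0):
    """SGD on the truncated Gaussian negative log-likelihood, with a
    soft-thresholding (l1-proximal) step after each stochastic update.

    Per-sample NLL for truncation set S = [0, inf) and N(0,1) noise:
        0.5 * (y_i - w)^2 + log Phi(w),  where w = <A_i, x>,
    since the observed density is phi(y_i - w) / Phi(w).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):
            w = float(A[i] @ x)
            # d/dw of the per-sample NLL above; at the true x the
            # inverse-Mills term cancels the truncation bias in expectation
            g_w = -(y[i] - w) + phi(w) / max(Phi(w), 1e-12)
            x = soft_threshold(x - lr * g_w * A[i], lr * lam)
    return x

# Illustrative use: recover a 3-sparse vector from truncated samples.
rng = np.random.default_rng(1)
n, k, m = 30, 3, 1000
x_star = np.zeros(n)
x_star[:k] = 1.0
A_rows, ys = [], []
while len(ys) < m:                  # rejection-sample the truncation
    a = rng.normal(size=n)
    y_val = a @ x_star + rng.normal()
    if y_val >= 0.0:                # only samples landing in S survive
        A_rows.append(a)
        ys.append(y_val)
A, y = np.array(A_rows), np.array(ys)
x_hat = truncated_lasso_sgd(A, y)
```

Note the bias-correction term φ(w)/Φ(w): ignoring it and running plain LASSO on the surviving samples would converge to a shifted estimate, since E[y − w | y ≥ 0] equals exactly this inverse Mills ratio rather than zero.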