Intractable Problem


Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation

Barik, Adarsh, Honorio, Jean

arXiv.org Artificial Intelligence

In this paper, we study the problem of sparse mixed linear regression on an unlabeled dataset that is generated from linear measurements from two different regression parameter vectors. Since the data is unlabeled, our task is not only to find a good approximation of the regression parameter vectors but also to label the dataset correctly. In its original form, this problem is NP-hard. The most popular algorithms for solving this problem (such as Expectation-Maximization) tend to get stuck at local minima. We provide a novel invex relaxation for this intractable problem which leads to a solution with provable theoretical guarantees. This relaxation enables exact recovery of the data labels. Furthermore, we recover a close approximation of the regression parameter vectors which matches the true parameter vectors in support and sign. Our formulation uses a carefully constructed primal-dual witness framework for the invex problem. Furthermore, we show that the sample complexity of our method is only logarithmic in the dimension of the regression parameter vectors.
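To make the problem setup concrete, the sketch below generates unlabeled data from two sparse regression vectors and runs a plain alternating-minimization (hard-EM) baseline, the kind of local-search method the abstract notes can get stuck at local minima. This is an illustrative sketch only, not the paper's invex relaxation; all variable names and the data-generation parameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: unlabeled measurements from two sparse parameter vectors.
n, d = 200, 10
beta1 = np.zeros(d); beta1[:3] = [2.0, -1.5, 1.0]   # sparse, support {0, 1, 2}
beta2 = np.zeros(d); beta2[:3] = [-2.0, 1.5, -1.0]
X = rng.standard_normal((n, d))
z = rng.integers(0, 2, n)                            # hidden component labels
y = np.where(z == 0, X @ beta1, X @ beta2) + 0.1 * rng.standard_normal(n)

# Hard-EM / alternating minimization baseline (NOT the paper's method):
# assign each sample to the component with smaller residual, then refit
# each component by least squares on its assigned samples.
b1, b2 = rng.standard_normal(d), rng.standard_normal(d)
for _ in range(50):
    r1 = (y - X @ b1) ** 2
    r2 = (y - X @ b2) ** 2
    lab = (r2 < r1).astype(int)          # 0 -> component 1, 1 -> component 2
    if (lab == 0).sum() > d:
        b1, *_ = np.linalg.lstsq(X[lab == 0], y[lab == 0], rcond=None)
    if (lab == 1).sum() > d:
        b2, *_ = np.linalg.lstsq(X[lab == 1], y[lab == 1], rcond=None)
```

Each iteration weakly decreases the joint objective, so the procedure converges, but only to a local minimum that depends on the random initialization; the paper's invex relaxation is designed to avoid exactly this failure mode while also recovering support and sign of the true parameters.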


Vivienne Ming: A Force In AI Unlike Any We Have Seen Before

#artificialintelligence

I've had the privilege of meeting, connecting with, and absorbing wisdom from various unique people throughout my career. As part of my 10-Part Series of The 9 Inspirational Women Leaders In AI Shaping The 21st Century, I was honored to have an in-depth conversation with Dr. Vivienne Ming. Dr. Ming explores maximizing human capacity as a theoretical neuroscientist, delusional inventor, and demented author. Over her career, she's founded six startups, been chief scientist at two others, and launched the "mad science incubator," Socos Labs, where she explores seemingly intractable problems--from a lone child's disability to global economic inclusion--for free. As the co-founder and Chief Scientist of Dionysus Health, she applies machine learning to lessen the corrosive health effects of chronic stress in communities. Vivienne Ming, co-founder and chief executive officer of Socos Labs LLC, speaks during the Milken Institute Global Conference in Beverly Hills, California, U.S., on Tuesday, Oct. 19, 2021.


MIT's New Tool for Tackling Hard Computational Problems

#artificialintelligence

Some difficult computational problems, pictured as finding the highest peak in a "landscape" of countless mountain peaks separated by valleys, exhibit the Overlap Gap Property: at a high enough "altitude," any two points will be either close together or far apart, with nothing in between. David Gamarnik has developed the Overlap Gap Property as a new tool for understanding computational problems that appear intractable. The notion that some computational problems in math and computer science can be hard should come as no surprise. There is, in fact, an entire class of problems deemed impossible to solve algorithmically. Just below this class lie slightly "easier" problems that are less well understood -- and may be impossible, too.


Complexification of neural networks NOT helping to predict earthquakes

#artificialintelligence

In the last few years, deep learning has solved seemingly intractable problems, raising hopes of finding approximate solutions to problems currently considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. The artificial neural network (ANN) (shallow or deep) is rapidly rising as one of the most powerful go-to techniques not only in data science [LeCun et al., 2015; Jordan and Mitchell, 2016] but also for solving hard and intractable problems of Physics (e.g., the many-body problem [Carleo and Troyer, 2017], chaotic systems [Pathak et al., 2018], high-dimensional partial differential equations [Han et al., 2018]). This is justified by the superior performance of ANNs in discovering complex patterns in very large datasets, with the advantage of not requiring feature extraction or engineering, as data can be used directly to train the network with potentially great results. It comes as no surprise that machine learning at large -- including ANNs -- has become popular in Statistical Seismology [Kong et al., 2019] and gives fresh hope for earthquake prediction [Rouet-Leduc et al., 2017; DeVries et al., 2018].