Robots with truly humanlike dexterity are far from becoming reality, but progress accelerated by AI has brought us closer to achieving this vision than ever before. In a research paper published in September, a team of scientists at Google detailed their tests with a robotic hand that enabled it to rotate Baoding balls with minimal training data. And at a computer vision conference in June, MIT researchers presented their work on an AI model capable of predicting the tactility of physical things from snippets of visual data alone. Now, OpenAI -- the San Francisco-based AI research firm cofounded by Elon Musk and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman -- says it's on the cusp of cracking something of a grand challenge in robotics and AI: solving a Rubik's cube. Unlike breakthroughs achieved by teams at the University of California, Irvine and elsewhere, which leveraged machines purpose-built to manipulate Rubik's cubes with speed, the approach devised by OpenAI researchers uses a five-fingered humanoid hand guided by an AI model with 13,000 years of cumulative experience -- on the same order of magnitude as the 40,000 years used by OpenAI's Dota-playing bot.
This paper analyzes consumer choices over lunchtime restaurants using data from a sample of several thousand anonymous mobile phone users in the San Francisco Bay Area. The data are used to identify each user's approximate typical morning location, as well as their choices of lunchtime restaurants. We build a model where restaurants have latent characteristics (whose distribution may depend on restaurant observables, such as star ratings, food category, and price range), each user has preferences for these latent characteristics, and these preferences are heterogeneous across users. Similarly, each restaurant has latent characteristics that describe users' willingness to travel to it, and each user has individual-specific preferences for those latent characteristics. Thus, both users' willingness to travel and their base utility for each restaurant vary across user-restaurant pairs. We use a Bayesian approach to estimation. To make the estimation computationally feasible, we rely on variational inference to approximate the posterior distribution, together with stochastic gradient descent as the optimization method. Our model performs better than more standard competing models such as multinomial logit and nested logit models, in part due to the personalization of the estimates. We analyze how, after a restaurant closes, consumers re-allocate their demand between nearby restaurants and more distant restaurants with similar characteristics, and we compare our predictions to actual outcomes. Finally, we show how the model can be used to analyze counterfactual questions, such as what type of restaurant would attract the most consumers in a given location.
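The setup above can be illustrated with a minimal sketch, not the paper's actual code or data: a multinomial-logit choice model in which a user's utility for a restaurant combines latent characteristics with a travel cost, fit by stochastic gradient descent. All sizes and values are hypothetical, the latent factors are treated as known for brevity, and a maximum-likelihood point estimate of the travel-cost parameter stands in for the paper's full Bayesian/variational estimation.

```python
# Illustrative sketch only: latent-characteristics multinomial logit
# with a distance (travel) cost, fit by SGD. Hypothetical sizes/values;
# latent factors are held fixed instead of being estimated.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_rest, k = 50, 20, 3            # users, restaurants, latent dims
theta = rng.normal(size=(n_users, k))     # user preferences (latent)
gamma = rng.normal(size=(n_rest, k))      # restaurant characteristics (latent)
dist = rng.uniform(1.0, 10.0, size=(n_users, n_rest))  # user-restaurant distances

# Simulate one lunchtime choice per user from the true model:
# utility_uj = theta_u . gamma_j - alpha * dist_uj, with alpha = 0.5.
alpha_true = 0.5
util = theta @ gamma.T - alpha_true * dist
p_true = np.exp(util - util.max(axis=1, keepdims=True))
p_true /= p_true.sum(axis=1, keepdims=True)
choices = np.array([rng.choice(n_rest, p=p) for p in p_true])

# SGD on the negative log-likelihood of the multinomial logit,
# estimating only the travel-cost parameter alpha.
alpha = 0.0
for epoch in range(200):
    lr = 0.05 / (1.0 + epoch)             # decaying step size
    for u in rng.permutation(n_users):
        v = theta[u] @ gamma.T - alpha * dist[u]
        p = np.exp(v - v.max())
        p /= p.sum()
        grad = dist[u, choices[u]] - p @ dist[u]   # d(-loglik)/d(alpha)
        alpha -= lr * grad

print(f"estimated travel-cost parameter: {alpha:.2f} (true value {alpha_true})")
```

In the full model, theta and gamma would themselves be estimated jointly (with priors, via variational inference) rather than fixed, and willingness to travel would be user-restaurant specific rather than a single scalar alpha.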
Many of California's local law enforcement agencies have access to facial recognition software for identifying suspects who appear in crime scene footage, documents obtained through public records requests show. Three California counties also have the capability to run facial recognition searches on each other's mug shot databases, and others could join if they choose to opt into a network maintained by a private law enforcement software company. The network is called California Facial Recognition Interconnect, and it's a service offered by DataWorks Plus, a Greenville, South Carolina–based company with law enforcement contracts in Los Angeles, San Bernardino, San Diego, San Francisco, Sacramento, and Santa Barbara. Currently, the three adjacent counties of Los Angeles, Riverside, and San Bernardino are able to run facial recognition against mug shots in each other's databases. That means these police departments have access to about 11.7 million mug shots of people who have previously been arrested, a majority of which come from the Los Angeles system.
Artificial intelligence is all the rage in healthcare as companies look for tech-driven ways to cut costs and promote patient health. Tech giants like Intel, Google, Amazon, Microsoft and Apple have swooped in to help payers and providers keep pace with the fast-moving field. Santa Clara, California-based Intel boasts partnerships across myriad sectors in healthcare. For example, earlier this year, not-for-profit integrated health system Sharp HealthCare, which is based in San Diego, used Intel's predictive analytics capabilities to help its rapid-response team identify high-risk patients before a health crisis occurred. And currently, Intel is working with pharmaceutical company Novartis on deep neural networks to accelerate content screening in drug discovery.
Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai (Drive, for short) sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area--even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars, and to find out how it's using deep learning to master autonomous driving.