Robots with truly humanlike dexterity remain far from reality, but AI-driven progress has brought that vision closer than ever. In a research paper published in September, a team of scientists at Google detailed tests of a robotic hand that learned to rotate Baoding balls with minimal training data. And at a computer vision conference in June, MIT researchers presented an AI model capable of predicting the tactile properties of physical objects from snippets of visual data alone. Now, OpenAI -- the San Francisco-based AI research firm cofounded by Elon Musk and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman -- says it's on the cusp of something of a grand challenge in robotics and AI: getting a humanlike robotic hand to solve a Rubik's Cube. Unlike breakthroughs achieved by teams at the University of California, Irvine and elsewhere, which relied on machines purpose-built to manipulate Rubik's Cubes at speed, the approach devised by OpenAI researchers uses a five-fingered humanoid hand guided by an AI model with 13,000 years of cumulative experience -- on the same order of magnitude as the 40,000 years logged by OpenAI's Dota-playing bot.
This paper analyzes consumer choices over lunchtime restaurants using data from a sample of several thousand anonymous mobile phone users in the San Francisco Bay Area. The data is used to identify users' approximate typical morning location, as well as their choices of lunchtime restaurants. We build a model where restaurants have latent characteristics (whose distribution may depend on restaurant observables, such as star ratings, food category, and price range), each user has preferences for these latent characteristics, and these preferences are heterogeneous across users. Similarly, each restaurant has latent characteristics that describe users' willingness to travel to it, and each user has individual-specific preferences for those latent characteristics. Thus, both users' willingness to travel and their base utility for each restaurant vary across user-restaurant pairs. We use a Bayesian approach to estimation. To make the estimation computationally feasible, we rely on variational inference to approximate the posterior distribution, together with stochastic gradient descent for optimization. Our model performs better than more standard competing models such as multinomial logit and nested logit models, in part due to the personalization of the estimates. We analyze how, after a restaurant closes, consumers re-allocate their demand between nearby restaurants and more distant restaurants with similar characteristics, and we compare our predictions to actual outcomes. Finally, we show how the model can be used to analyze counterfactual questions such as what type of restaurant would attract the most consumers in a given location.
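To make the modeling idea concrete, here is a minimal hypothetical sketch of a latent-characteristic choice model of this general flavor. It is not the paper's code: where the paper uses variational Bayesian inference, this sketch substitutes plain maximum-likelihood gradient ascent on synthetic data, and all dimensions, variable names, and the simple utility specification (restaurant intercept plus user-restaurant latent interaction minus a user-specific travel cost) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (illustrative, not the paper's implementation):
# utility of user i for restaurant j is
#   u_ij = alpha_j + theta_i . beta_j - gamma_i * dist_ij
# where theta_i, beta_j are latent vectors, gamma_i is user i's travel-cost
# sensitivity, and choice probabilities are a softmax over restaurants.
rng = np.random.default_rng(0)
n_users, n_rest, k = 50, 8, 3

# Synthetic "ground truth" used only to simulate observed lunchtime choices.
true_theta = rng.normal(size=(n_users, k))
true_beta = rng.normal(size=(n_rest, k))
true_alpha = rng.normal(size=n_rest)
true_gamma = rng.uniform(0.5, 1.5, size=n_users)
dist = rng.uniform(0, 5, size=(n_users, n_rest))  # morning location to restaurant

def utilities(theta, beta, alpha, gamma):
    return alpha + theta @ beta.T - gamma[:, None] * dist

def softmax(u):
    e = np.exp(u - u.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = softmax(utilities(true_theta, true_beta, true_alpha, true_gamma))
choices = np.array([rng.choice(n_rest, p=p) for p in probs])  # one choice per user

# Estimate parameters by full-batch gradient ascent on the multinomial
# log-likelihood (the paper instead uses variational Bayes + SGD).
theta = rng.normal(scale=0.1, size=(n_users, k))
beta = rng.normal(scale=0.1, size=(n_rest, k))
alpha = np.zeros(n_rest)
gamma = np.ones(n_users)

def log_lik():
    p = softmax(utilities(theta, beta, alpha, gamma))
    return np.log(p[np.arange(n_users), choices]).sum()

lr = 0.01
ll_before = log_lik()
for _ in range(300):
    p = softmax(utilities(theta, beta, alpha, gamma))
    resid = -p
    resid[np.arange(n_users), choices] += 1.0  # d(log-lik)/d(u_ij) = y_ij - p_ij
    alpha += lr * resid.sum(axis=0)
    theta += lr * (resid @ beta)
    beta += lr * (resid.T @ theta)
    gamma += lr * (-(resid * dist).sum(axis=1))  # du_ij/dgamma_i = -dist_ij
ll_after = log_lik()
print(f"log-likelihood: {ll_before:.2f} -> {ll_after:.2f}")
```

The personalization the abstract emphasizes shows up here in the per-user parameters `theta_i` and `gamma_i`: two users facing the same restaurant set can have different base utilities and different willingness to travel, which a plain multinomial logit with shared coefficients cannot capture.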
Artificial intelligence is all the rage in healthcare as companies look for tech-driven ways to cut costs and promote patient health. Tech giants like Intel, Google, Amazon, Microsoft and Apple have swooped in to help payers and providers keep pace with the fast-moving field. Santa Clara, California-based Intel boasts partnerships across myriad sectors in healthcare. For example, earlier this year, not-for-profit integrated health system Sharp HealthCare, which is based in San Diego, used Intel's predictive analytics capabilities to help its rapid-response team identify high-risk patients before a health crisis occurred. And currently, Intel is working with pharmaceutical company Novartis on deep neural networks to accelerate content screening in drug discovery.
San Francisco: A team of researchers has used Artificial Intelligence (AI) to turn two-dimensional (2D) images into stacks of virtual three-dimensional (3D) slices showing activity inside organisms. Using deep learning techniques, the team from the University of California, Los Angeles (UCLA) devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting. In a study published in the journal Nature Methods, the scientists also reported that their framework, called "Deep-Z," was able to fix errors or aberrations in images, such as when a sample is tilted or curved. Further, they demonstrated that the system could take 2D images from one type of microscope and virtually create 3D images of the sample as if they were obtained by another, more advanced microscope. "This is a very powerful new method that is enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to samples," said senior author Aydogan Ozcan, UCLA Chancellor's Professor of electrical and computer engineering.
Many of California's local law enforcement agencies have access to facial recognition software for identifying suspects who appear in crime scene footage, documents obtained through public records requests show. Three California counties also have the capability to run facial recognition searches on each other's mug shot databases, and others could join if they choose to opt into a network maintained by a private law enforcement software company. The network is called California Facial Recognition Interconnect, and it's a service offered by DataWorks Plus, a Greenville, South Carolina–based company with law enforcement contracts in Los Angeles, San Bernardino, San Diego, San Francisco, Sacramento, and Santa Barbara. Currently, the three adjacent counties of Los Angeles, Riverside, and San Bernardino are able to run facial recognition against mug shots in each other's databases. That means these police departments have access to about 11.7 million mug shots of people who have previously been arrested, a majority of which come from the Los Angeles system.