A.I. Software Learns a Simple Task Like a Human

AITopics Original Links

Scientists have invented a machine that imitates the way the human brain learns new information, a step forward for artificial intelligence, researchers reported. The system described in the journal Science is a computer model "that captures humans' unique ability to learn new concepts from a single example," the study said. "Though the model is only capable of learning handwritten characters from alphabets, the approach underlying it could be broadened to have applications for other symbol-based systems, like gestures, dance moves, and the words of spoken and signed languages." Joshua Tenenbaum, a professor at the Massachusetts Institute of Technology (MIT), said he wanted to build a machine that could mimic the mental abilities of young children.

Now Artificial Intelligence Machines Can Learn Like Humans – Cape Coral Science Centre - Albany Daily Star Gazette


One day, robots may take over the world, leaving humanity to wonder when artificial intelligence (AI) became too powerful. That scenario is unlikely in the near term because humans hold a major advantage over machines: the ability to learn. But the gap between humans and robots may slowly narrow, as AI is now becoming capable of learning too. Today's most sophisticated AI systems rely on learning from tens to hundreds of examples, whereas humans can learn from a few or even one. Taking inspiration from the way humans seem to learn, scientists have created AI software capable of picking up new knowledge in a far more efficient and sophisticated way.

One-shot learning by inverting a compositional causal process

Neural Information Processing Systems

People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems. Here we present a Hierarchical Bayesian model based on compositionality and causality that can learn a wide range of natural (although simple) visual concepts, generalizing in human-like ways from just one image. We evaluated performance on a challenging one-shot classification task, where our model achieved a human-level error rate while substantially outperforming two deep learning models. We also used a "visual Turing test" to show that our model produces human-like performance on other conceptual tasks, including generating new examples and parsing. Papers published at the Neural Information Processing Systems Conference.
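The abstract's full generative model is far richer than anything that fits in a snippet, but the one-shot classification task it evaluates has a simple shape worth making concrete. As a minimal, hypothetical sketch (the feature vectors, labels, and nearest-neighbor rule below are illustrative assumptions, not the paper's method), a classifier sees exactly one labeled example per class and must label a query:

```python
import math

def one_shot_classify(support, query):
    """Classify `query` given exactly one labeled example per class.

    `support` maps each class label to a single feature vector; the
    query is assigned to the class whose lone example is nearest in
    Euclidean distance. One-shot models like the paper's Hierarchical
    Bayesian model replace this crude metric with a learned generative
    model, but the task format -- one example per class -- is the same.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(support, key=lambda label: dist(support[label], query))

# One example per class: the defining constraint of one-shot learning.
support = {"A": [0.0, 0.0, 1.0], "B": [1.0, 1.0, 0.0]}
print(one_shot_classify(support, [0.9, 1.1, 0.1]))  # -> B
```

The point of the sketch is the data regime, not the metric: with a single example per class there is nothing to average over, so a model must generalize from how the example was formed rather than from statistics across many examples.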

People infer recursive visual concepts from just a few examples

arXiv.org Machine Learning

Machine learning has made major advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories. People can learn richer concepts from fewer examples, including causal models that explain how members of a category are formed. Here, we explore the limits of this human ability to infer causal "programs" -- latent generating processes with nontrivial algorithmic properties -- from one, two, or three visual examples. People were asked to extrapolate the programs in several ways, for both classifying and generating new examples. As a theory of these inductive abilities, we present a Bayesian program learning model that searches the space of programs for the best explanation of the observations. Although variable, people's judgments are broadly consistent with the model and inconsistent with several alternatives, including a pre-trained deep neural network for object recognition, indicating that people can learn and reason with rich algorithmic abstractions from sparse input data.
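The abstract describes a Bayesian program learning model that "searches the space of programs for the best explanation of the observations." As a toy, hypothetical illustration of that idea (the candidate rules, the simplicity prior, and the all-or-nothing likelihood below are invented for this sketch, not taken from the paper), one can score each candidate generating program by a log posterior, log prior + log likelihood, and pick the simplest program that reproduces the sparse examples:

```python
import math

def log_prior(program):
    # Simplicity prior: longer rule descriptions are exponentially less likely.
    return -len(program["rule"])

def log_likelihood(program, examples):
    # A program explains an example only if its rule generates it exactly.
    generated = {program["generate"](n) for n in range(1, 6)}
    return 0.0 if all(e in generated for e in examples) else -math.inf

def best_program(candidates, examples):
    # Exhaustive search over a tiny program space, scored by log posterior.
    return max(candidates,
               key=lambda p: log_prior(p) + log_likelihood(p, examples))

candidates = [
    {"rule": "repeat 'ab'", "generate": lambda n: "ab" * n},
    {"rule": "repeat 'a' then 'b'", "generate": lambda n: "a" * n + "b" * n},
]
# Two sparse examples are enough to pick out the right generator.
print(best_program(candidates, ["ab", "abab"])["rule"])  # -> repeat 'ab'
```

Note that a single example ("ab") is ambiguous between both rules, and the simplicity prior breaks the tie; a second example resolves it outright. That is the inductive pattern the abstract attributes to people: a few examples plus a preference for compact causal programs.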

Computer Learns to Write Its ABCs

AITopics Original Links

A new computer model can now mimic the human ability to learn new concepts from a single example instead of the hundreds or thousands of examples it takes other machine learning techniques, researchers say. The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language. The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures. Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.