Optimizing Reusable Knowledge for Continual Learning via Metalearning
When learning tasks over time, artificial neural networks suffer from a problem known as Catastrophic Forgetting (CF). This happens when the weights of a network are overwritten during the training of a new task, causing the network to forget old information. To address this issue, we propose MetA Reusable Knowledge, or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of weights shared among tasks. We envision these shared weights as a common Knowledge Base (KB) that is not only used to learn new tasks, but is also enriched with new knowledge as the model learns them.
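The shared-KB idea can be sketched in a few lines: task-specific heads reuse a common set of weights, and the KB itself receives only small, conservative updates per task. This is a minimal illustration, not MARK's actual implementation; the `KnowledgeBase`/`TaskHead` names, dimensions, and update step are assumptions.

```python
import numpy as np

class KnowledgeBase:
    """Weights shared across all tasks; enriched as new tasks arrive."""
    def __init__(self, in_dim, kb_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, kb_dim))

    def features(self, x):
        # Shared representation reused by every task.
        return np.maximum(x @ self.W, 0.0)  # ReLU

class TaskHead:
    """Small task-specific weights trained on top of the shared KB."""
    def __init__(self, kb_dim, n_classes, seed=1):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(scale=0.1, size=(kb_dim, n_classes))

    def logits(self, feats):
        return feats @ self.V

# Reuse: each new task gets its own head over the same KB features.
kb = KnowledgeBase(in_dim=8, kb_dim=16)
head_t1 = TaskHead(kb_dim=16, n_classes=3, seed=1)
head_t2 = TaskHead(kb_dim=16, n_classes=5, seed=2)

x = np.ones((4, 8))
print(head_t1.logits(kb.features(x)).shape)  # (4, 3)
print(head_t2.logits(kb.features(x)).shape)  # (4, 5)

# Enrichment: after a task is learned, the KB takes a small step
# (here a placeholder gradient) rather than being overwritten.
grad_kb = np.zeros_like(kb.W)  # stands in for a meta-gradient from the new task
kb.W += 0.01 * grad_kb
```

The point of the sketch is the separation of roles: overwriting is confined to the small per-task heads, while the shared weights change only slowly.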
AI uses artificial sleep to learn new tasks without forgetting the last
Artificial intelligence can learn and remember how to do multiple tasks by mimicking the way sleep helps us cement what we learned during waking hours. "There is a huge trend now to bring ideas from neuroscience and biology to improve existing machine learning – and sleep is one of them," says Maxim Bazhenov at the University of California, San Diego. Many AIs can only master one set of well-defined tasks – they can't acquire additional knowledge later on without losing everything they had previously learned. "The issue pops up if you want to develop systems which are capable of so-called lifelong learning," says Pavel Sanda at the Czech Academy of Sciences. Lifelong learning is how humans accumulate knowledge to adapt to and solve future challenges.
What Is Google LaMDA & Why Did Someone Believe It's Sentient?
LaMDA has been in the news after a Google engineer claimed it was sentient because its answers allegedly hint that it understands what it is. The engineer also suggested that LaMDA communicates that it has fears, much like a human does. What is LaMDA, and why are some under the impression that it can achieve consciousness? LaMDA is a language model. Fundamentally, it is a mathematical function (or a statistical tool) that predicts the likely next words in a sequence.
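Next-word prediction can be made concrete with a toy bigram model: count which word follows which, then output the most probable continuation. This is a deliberately tiny sketch of the statistical idea, not how LaMDA itself works; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Train a bigram model: P(next | current) from word-pair counts.
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(word):
    """Most probable next word: a (tiny) language model's top-1 guess."""
    following = counts[word]
    total = sum(following.values())
    best, n = following.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('cat', 0.5)
```

Large language models replace these raw counts with a neural network over long contexts, but the output is the same kind of object: a probability distribution over possible next words.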
Robotic elephant trunk can learn new tasks on its own
A robotic elephant trunk that uses artificial intelligence to mimic some aspects of brains could lead to snake-like machines that can roam and adapt to new tasks. Sebastian Otte at the University of Tübingen in Germany and his colleagues created a 3D-printed robot trunk from segments that each include several motors driving gears that tilt up to 40 degrees in two axes. The trunk can bend, but also elongate or shorten. The team created a trunk with 10 segments, but they say the length could be doubled with more powerful motors.
Dropout as an Implicit Gating Mechanism For Continual Learning
Mirzadeh, Seyed-Iman, Farajtabar, Mehrdad, Ghasemzadeh, Hassan
In recent years, neural networks have demonstrated an outstanding ability to achieve complex learning tasks across various domains. However, they suffer from the "catastrophic forgetting" problem when they face a sequence of learning tasks, forgetting old tasks as they learn new ones. This problem is closely related to the "stability-plasticity dilemma": the more plastic the network, the easier it can learn new tasks, but the faster it also forgets previous ones; conversely, a stable network learns new tasks more slowly, but preserves previously learned knowledge more reliably. Several solutions have been proposed to overcome the forgetting problem by making the neural network parameters more stable, and some of them have noted the significance of dropout in continual learning. However, this relationship has not been sufficiently studied yet. In this paper, we investigate it and show that a stable network with dropout learns a gating mechanism such that for different tasks, different paths of the network are active. Our experiments show that the stability achieved by this implicit gating plays a critical role in reaching performance comparable to or better than other continual learning algorithms at overcoming catastrophic forgetting.
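The gating view can be illustrated with realized dropout masks acting as per-task gates that select different subnetwork paths. This is a toy sketch under stated assumptions: in the paper the task-specific paths emerge implicitly from training, they are not sampled and assigned like this.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1, 10))  # a hidden layer's activations

# Dropout keeps each unit with probability p; a realized 0/1 mask
# gates which units participate in the forward pass.
p = 0.5
mask_task_a = (rng.random(10) < p).astype(float)
mask_task_b = (rng.random(10) < p).astype(float)

# Under each mask, a different subnetwork ("path") is active.
active_a = set(np.flatnonzero(mask_task_a).tolist())
active_b = set(np.flatnonzero(mask_task_b).tolist())
print("units shared by both paths:", sorted(active_a & active_b))

# Gated forward pass (inverted dropout rescales kept units by 1/p).
out_a = hidden * mask_task_a / p
out_b = hidden * mask_task_b / p
```

Because the two masks mostly activate different units, an update made along task A's path disturbs few of the units task B relies on, which is the stability mechanism the paper studies.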
OpenAI's AI-powered robot learned how to solve a Rubik's cube one-handed
Artificial intelligence research organization OpenAI has achieved a new milestone in its quest to build general purpose, self-learning robots. The group's robotics division says Dactyl, its humanoid robotic hand first developed last year, has learned to solve a Rubik's cube one-handed. OpenAI sees the feat as a leap forward both for the dexterity of robotic appendages and its own AI software, which allows Dactyl to learn new tasks using virtual simulations before it is presented with a real, physical challenge to overcome. In a demonstration video showcasing Dactyl's new talent, we can see the robotic hand fumble its way toward a complete cube solve with clumsy yet accurate maneuvers. It takes many minutes, but Dactyl is eventually able to solve the puzzle.
Researchers create framework to help artificial intelligence systems be less forgetful (WRAL TechWire)
"We have proposed a new framework for continual learning, which decouples network structure learning and model parameter learning," says Yingbo Zhou, co-lead author of the paper and a research scientist at Salesforce Research. "We call it the Learn to Grow framework. In experimental testing, we've found that it outperforms previous approaches to continual learning." To understand the Learn to Grow framework, think of deep neural networks as a pipe filled with multiple layers. Raw data goes into the top of the pipe, and task outputs come out the bottom.
6 Areas of AI and Machine Learning to Watch Closely
It's amazing how much progress the field of AI has achieved over the last 10 years, ranging from self-driving cars to speech recognition and synthesis. Against this backdrop, AI has become a topic of conversation in more and more companies and households, which have come to see AI as a technology that isn't another 20 years away, but something that is impacting their lives today. Indeed, the popular press reports on AI almost every day, and technology giants, one by one, articulate their significant long-term AI strategies. While several investors and incumbents are eager to understand how to capture value in this new world, the majority are still scratching their heads to figure out what this all means. Meanwhile, governments are grappling with the implications of automation in society (see Obama's farewell address).