Artificial General Intelligence


Less Like Us: An Alternate Theory of Artificial General Intelligence

#artificialintelligence

The question of whether an artificial general intelligence will be developed in the future--and, if so, when it might arrive--is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology. Some futurists point to Moore's Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a "general intelligence." But evolution has produced minds like the human mind at least once.


Unpredictability of AI

arXiv.org Artificial Intelligence

With the increase in the capabilities of artificial intelligence over the last decade, a significant number of researchers have realized the importance of creating intelligent systems that are not only capable but also safe and secure [1-6]. Unfortunately, the field of AI Safety is very young, and researchers are still working to identify its main challenges and limitations. Impossibility results are well known in many fields of inquiry [7-13], and some have now been identified in AI Safety [14-16]. In this paper, we concentrate on the poorly understood concept of the unpredictability of intelligent systems [17], which limits our ability to understand the impact of the intelligent systems we are developing and poses a challenge for software verification and intelligent system control, as well as for AI Safety in general. In theoretical computer science, and in software development in general, many impossibility results are well established; some are strongly related to the subject of this paper. For example, Rice's Theorem states that no computationally effective method can decide whether a program will exhibit a particular nontrivial behavior, such as producing a specific output [18].
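
To make the flavor of Rice's Theorem concrete, here is a minimal Python sketch of the standard reduction from the halting problem. The function names and wrapper construction are our own illustration, not from the paper: any total decider for a nontrivial behavioral property, such as "this program eventually prints 42," would let us decide halting, which Turing proved impossible.

```python
# Minimal sketch of the reduction behind Rice's Theorem: if a total
# decider for any nontrivial behavior (here, "ever prints 42") existed,
# the halting problem would be decidable. The decider below is therefore
# hypothetical and deliberately unimplemented.

def decides_prints_42(program_source: str) -> bool:
    """Hypothetical: returns True iff the given program ever prints 42."""
    raise NotImplementedError("Rice's Theorem: no such total decider exists.")

def halts(program_source: str) -> bool:
    """If decides_prints_42 existed, we could decide halting -- contradiction."""
    # Wrapper program: silence the candidate program's own output, then
    # print 42. The wrapper prints 42 if and only if the candidate halts.
    wrapper = (
        "import io, contextlib\n"
        "with contextlib.redirect_stdout(io.StringIO()):\n"
        f"    exec({program_source!r})\n"
        "print(42)\n"
    )
    return decides_prints_42(wrapper)
```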


What Is Artificial Intelligence (AI)?

#artificialintelligence

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.


AI vs. Machine Learning vs. Deep Learning

#artificialintelligence

Since before the dawn of the computer age, scientists have been captivated by the idea of creating machines that could behave like humans. But only in the last decade has technology enabled some forms of artificial intelligence (AI) to become a reality. Interest in putting AI to work has skyrocketed, with a burgeoning array of AI use cases. Many surveys have found that upwards of 90 percent of enterprises are either already using AI in their operations today or plan to do so in the near future. Eager to capitalize on this trend, software vendors – both established AI companies and AI startups – have rushed to bring AI capabilities to market.


Tomorrow's 'general' AI revolution will grow from today's technology

#artificialintelligence

During his closing remarks at the I/O 2019 keynote last week, Jeff Dean, Google AI's lead, noted that the company is looking at "AI that can work across disciplines," suggesting the Silicon Valley giant may soon pursue artificial general intelligence, a technology that eventually could match or exceed human intellect. In today's pop culture, machines with artificial general intelligence (AGI) are typically portrayed as walking, talking human analogs replete with personalities -- from the Terminator's murderous intent to Vision's noble heroism. In reality, self-aware robots are a long way off. Nathan Michael, associate research professor and the director of the Resilient Intelligent Systems Lab at Carnegie Mellon University, argues that generalized AI systems will grow out from today's single-purpose "narrow" AIs. "General AI is representative of this concept of bringing together many different kinds of specialized AI," he explained.


All that glitters is not quantum AI

ZDNet

Why hasn't the field of artificial intelligence created the equivalent of human intelligence? Is it because the problem, "artificial general intelligence," isn't well understood, or is it because we just need much faster computers, specifically quantum computers? The latter view is the source of a vibrant field of research, "Quantum Machine Learning," or QML. But a bit of skepticism is warranted. "We need to look through a skeptical eye at the idea that quantum makes things faster and therefore can make machine learning advances," says Jennifer Fernick, the head of engineering at NCC Group, a cyber-security firm based in Manchester, U.K. Fernick was a keynote speaker a week ago at the O'Reilly A.I. conference in New York.


Towards a framework for the evolution of artificial general intelligence

arXiv.org Artificial Intelligence

In this work, a novel framework for the emergence of general intelligence is proposed, where agents evolve through environmental rewards and learn throughout their lifetime without supervision, i.e., self-supervised learning through embodiment. The chosen control mechanism for agents is a biologically plausible neuron model based on spiking neural networks. Network topologies become more complex through evolution, i.e., the topology is not fixed, while the synaptic weights of the networks cannot be inherited, i.e., newborn brains are not trained and have no innate knowledge of the environment. What is subject to the evolutionary process is the network topology, the type of neurons, and the type of learning. This process ensures that controllers that are passed through the generations have the intrinsic ability to learn and adapt during their lifetime in mutable environments. We envision that the described approach may lead to the emergence of the simplest form of artificial general intelligence.
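
A minimal sketch of the evolutionary loop the abstract describes may help. It is our own simplification, not the authors' code: ordinary rate neurons and a toy reward-modulated Hebbian rule stand in for the spiking neuron model, and environment_reward is a placeholder for the paper's environments. The key property it preserves is that topology and plasticity parameters are inherited, while synaptic weights are reinitialized at birth and shaped only by lifetime learning.

```python
# Sketch: evolution selects topology and learning rule; weights are never
# inherited and must be learned within each agent's lifetime.
import numpy as np

rng = np.random.default_rng(0)

class Genome:
    """Heritable traits: topology and plasticity -- never the weights."""
    def __init__(self, hidden: int, learning_rate: float):
        self.hidden = hidden                 # evolved topology (layer size)
        self.learning_rate = learning_rate   # evolved plasticity parameter

    def mutate(self) -> "Genome":
        return Genome(
            hidden=max(1, self.hidden + int(rng.integers(-2, 3))),
            learning_rate=abs(self.learning_rate + rng.normal(0, 0.01)),
        )

def environment_reward(x, h) -> float:
    # Placeholder environment: reward alignment of one unit with a cue.
    return float(h[0] * np.sign(x[0]))

def lifetime(genome: Genome, n_inputs=4, n_steps=200) -> float:
    # "Newborn brain": weights are drawn fresh, not inherited.
    w = rng.normal(0, 0.1, size=(genome.hidden, n_inputs))
    total = 0.0
    for _ in range(n_steps):
        x = rng.normal(size=n_inputs)        # stand-in observation
        h = np.tanh(w @ x)                   # network activity
        r = environment_reward(x, h)
        # Toy reward-modulated Hebbian update: learning, not inheritance.
        w += genome.learning_rate * r * np.outer(h, x)
        total += r
    return total

# Evolution: selection acts on genomes; each generation learns from scratch.
population = [Genome(hidden=4, learning_rate=0.01) for _ in range(16)]
for generation in range(10):
    scored = sorted(population, key=lifetime, reverse=True)
    parents = scored[: len(scored) // 2]
    population = parents + [p.mutate() for p in parents]
```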


DNN Architecture for High Performance Prediction on Natural Videos Loses Submodule's Ability to Learn Discrete-World Dataset

arXiv.org Machine Learning

Is cognition a collection of loosely connected functions tuned to different tasks, or can there be a general learning algorithm? If such a hypothetical general algorithm did exist, tuned to our world, could it adapt seamlessly to a world with different laws of nature? We consider the theory that predictive coding is such a general rule, and falsify it for one specific neural architecture known for high-performance predictions on natural videos and replication of human visual illusions: PredNet. Our results show that PredNet's high performance generalizes without retraining on a completely different natural video dataset. Yet PredNet cannot be trained to reach even mediocre accuracy on an artificial video dataset created with the rules of the Game of Life (GoL). We also find that a submodule of PredNet, a Convolutional Neural Network trained alone, reaches perfect accuracy on the GoL while being mediocre for natural videos, showing that PredNet's architecture itself is responsible for both the high performance on natural videos and the loss of performance on the GoL. Just as humans cannot predict the dynamics of the GoL, our results suggest that there might be a trade-off between high performance on natural sensory inputs and the ability to learn inputs governed by different sets of rules.
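
The Game of Life portion of this experiment is straightforward to reproduce in outline, since next-frame targets follow deterministically from the rules. Below is a minimal numpy sketch (our own illustration, not the paper's code) that generates GoL clips as (frame, next-frame) training pairs:

```python
# Sketch (not the paper's code): build a Game-of-Life video dataset of
# (frame_t, frame_t+1) pairs for a next-frame prediction model.
import numpy as np

def gol_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life update on a toroidal grid of 0/1 cells."""
    # Count the 8 neighbors by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

def make_clip(size=64, length=20, density=0.3, seed=0) -> np.ndarray:
    """Return a (length, size, size) binary video clip of GoL dynamics."""
    rng = np.random.default_rng(seed)
    frames = [(rng.random((size, size)) < density).astype(np.uint8)]
    for _ in range(length - 1):
        frames.append(gol_step(frames[-1]))
    return np.stack(frames)

clip = make_clip()
x, y = clip[:-1], clip[1:]    # inputs and next-frame targets
print(x.shape, y.shape)       # (19, 64, 64) (19, 64, 64)
```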


What Would It Mean for AI to Become Conscious?

#artificialintelligence

As artificial intelligence systems take on more tasks and solve more problems, it's hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that "By 2029, computers will have emotional intelligence and be convincing as people." We don't know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they're conscious, does that mean they actually are?