artificial general intelligence


How Close We Are to Fully Self-Sufficient Artificial Intelligence

#artificialintelligence

If you have followed the world of pop culture or tech for some time now, then you know that advances in artificial intelligence are heating up. In reality, AI has been the talk of mainstream pop culture and sci-fi since the first Terminator movie came out in 1984. These movies present an example of something called "artificial general intelligence." So how close are we to that? No, not how close we are to the Terminators taking over, but how close we are to having an AI capable of navigating nearly any problem it's presented with.


What Is Narrow AI & How It Is Different From Artificial General Intelligence

#artificialintelligence

When Alan Turing first imagined machines that could think like humans, he was probably thinking of machines that could one day make the lives of human beings easier. Fast forward 70 years, and AI can perform tasks that have undoubtedly made life more comfortable. Conversational AI, flying drones, bots, language translation, facial recognition, etc., are some of the most promising AI applications we have today. But these fall under narrow AI rather than artificial general intelligence, which is something different. As the definition goes, narrow AI is a specific type of artificial intelligence in which technology outperforms humans in a narrowly defined task.


Can Artificial Intelligence understand emotions? - Think Big

#artificialintelligence

When John McCarthy and Marvin Minsky founded the field of artificial intelligence in 1956, they were amazed at how a machine could solve incredibly difficult puzzles more quickly than humans. However, it turns out that teaching artificial intelligence to win a chess match is actually quite easy. What presents real challenges is teaching a machine what emotions are and how to replicate them. "We have now accepted after 60 years of AI that the things we originally thought were easy are actually very hard, and what we thought was hard, like playing chess, is very easy." Social and emotional intelligence come almost automatically to humans; we react on instinct. Whilst some of us are more perceptive than others, we can easily interpret the emotions and feelings of those around us.


How far are we from artificial general intelligence?

#artificialintelligence

Since the earliest days of artificial intelligence -- and computing more generally -- theorists have assumed that intelligent machines would think in much the same way as humans. After all, we know of no greater cognitive power than the human brain. In many ways, it makes sense to try to replicate the brain if the goal is to create a high level of cognitive processing. However, there is a debate today over the best way of reaching true general AI. In particular, recent years' advances in deep learning -- which is itself inspired by the human brain, though it diverges from it in some important ways -- have shown developers that there may be other paths.


Everything you need to know about narrow AI

#artificialintelligence

In 1956, a group of scientists led by John McCarthy, a young assistant professor of mathematics, gathered at Dartmouth College, NH, for an ambitious six-week project: creating computers that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The project kickstarted the field that has become known as artificial intelligence (AI). At the time, the scientists thought that a "2-month, 10-man study of artificial intelligence" would solve the biggest part of the AI equation. "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer," the first AI proposal read. Yet we still don't have machines that can think and solve problems like a human child, let alone an adult.


How far are we from artificial general intelligence?

#artificialintelligence

Gary Marcus, founder and CEO of Robust.ai, a company based in Palo Alto, Calif., that is trying to build a cognitive platform for a range of bots, argues that AGI will have to work more like a human mind. Speaking at the MIT Technology Review's virtual EmTech Digital conference, he said today's deep learning algorithms lack the ability to contextualize and generalize information, which are some of the biggest advantages of human-like thinking. Marcus said he doesn't think machines need to replicate the human brain neuron for neuron. But there are some aspects of human thought, like using symbolic representations of information to extrapolate knowledge to a broader set of problems, that would help achieve more general intelligence. "[Deep learning] doesn't work for reasoning or language understanding, which we desperately need right now," Marcus said.
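
Marcus's point about extrapolation can be made concrete with a toy contrast. The sketch below is purely illustrative (it is not code from the talk or from Robust.ai): a symbolic rule is defined for every input by construction, while a learner that has merely memorized examples has no way to answer outside what it has seen.

# Toy contrast (hypothetical illustration, not Robust.ai code):
# a symbolic rule extrapolates by construction; a memorizer does not.

def symbolic_double(x: int) -> int:
    # The rule "multiply by 2" is defined for every integer,
    # so it generalizes to inputs it has never seen.
    return 2 * x

# A "narrow" learner that has only memorized its training examples.
training_data = {1: 2, 2: 4, 3: 6}

def memorized_double(x: int) -> int:
    if x not in training_data:
        raise ValueError(f"{x} is outside the training distribution")
    return training_data[x]

print(symbolic_double(1000))  # 2000: the rule extrapolates
print(memorized_double(2))    # 4: seen during "training"
try:
    print(memorized_double(1000))
except ValueError as err:
    print("memorizer fails:", err)  # no mechanism to generalize

Real deep learning systems interpolate far better than a lookup table, but the underlying worry Marcus raises is the same: without something like an explicit rule, behavior outside the training distribution is unreliable.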


What is artificial narrow intelligence (ANI)?

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In 1956, a group of scientists led by John McCarthy, a young assistant professor of mathematics, gathered at Dartmouth College, NH, for an ambitious six-week project: creating computers that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The project kickstarted the field that has become known as artificial intelligence (AI). At the time, the scientists thought that a "2-month, 10-man study of artificial intelligence" would solve the biggest part of the AI equation. "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer," the first AI proposal read.


Can AI Find Bugs in Your Code? - The New Stack

#artificialintelligence

You'd be forgiven for believing that AI debugging already exists, because so many companies claim to have artificial intelligence powering their monitoring or observability products. In this, those monitoring companies are no different from the thousands of technology vendors making somewhat dubious claims about AI. All that said, there genuinely has been an explosion of something that feels a bit more like AI; underlying technologies like deep learning are driving innovation in computer vision, object identification, natural language processing, voice recognition and artificial photo generation. These capabilities power healthcare solutions that can spot cancers or bleeds in patient scans, as well as autonomous vehicles, virtual assistants and chatbots. But can they be used to find and fix problems in code?
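
To make the question concrete, the sketch below shows the simplest, rule-based end of automated bug-finding (a hypothetical illustration, not any vendor's product): a static check, written with Python's standard-library ast module, that flags one well-known bug pattern. The deep-learning tools the article asks about aim to learn such patterns from large codebases rather than having humans hand-write them as rules.

import ast

# Code under inspection: a function with the classic
# mutable-default-argument bug.
SOURCE = '''
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        for default in node.args.defaults:
            # A list/dict/set literal as a default value is shared
            # across calls -- a frequent source of subtle bugs.
            if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                print(f"{node.name}: mutable default argument "
                      f"on line {default.lineno}")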


Talking Digital Future: Artificial Intelligence - Cointelegraph

#artificialintelligence

I chose artificial intelligence as my next topic, as it is one of the best-known technologies and the one people imagine when they talk about the future. But the right question would be: What is artificial intelligence? Artificial intelligence is not something that just happened in 2015 and 2016. It has been around as an idea for a hundred years, but as a science, we started seeing developments from the 1950s. So this is quite an old tech topic already, but because of the kinds of technology we have access to today -- specifically, processing performance and storage -- we're starting to see significant leaps in AI development. When I started the course entitled "Foundations of the Fourth Industrial Revolution (Industry 4.0)," I got deeper into the topic of artificial intelligence. One of the differences between the third industrial revolution -- defined by the microchip and digitization -- and the fourth is the scope and velocity of breakthroughs in medicine and biology, as well as the widespread use of artificial intelligence across our society. Thus, AI is not only a product of Industry 4.0 but also part of the impetus behind why the fourth industrial revolution is happening now and will continue. I think there are two ways to understand AI: the first is to try giving a quick definition of what it is, and the second is to also think about what it is not.

