How far are we from artificial general intelligence?

#artificialintelligence

Gary Marcus, founder and CEO of Robust.ai, a company based in Palo Alto, Calif., that is trying to build a cognitive platform for a range of bots, argues that AGI will have to work more like the human mind. Speaking at the MIT Technology Review's virtual EmTech Digital conference, he said today's deep learning algorithms lack the ability to contextualize and generalize information, two of the biggest advantages of human-like thinking. Marcus said he doesn't think machines need to replicate the human brain neuron for neuron. But some aspects of human thought, such as using symbolic representations of information to extrapolate knowledge to a broader set of problems, would help achieve more general intelligence. "[Deep learning] doesn't work for reasoning or language understanding, which we desperately need right now," Marcus said.
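
The distinction Marcus draws, between pattern matching over seen data and symbolic rules that extrapolate beyond it, can be illustrated with a toy sketch. The example below is ours, not from the talk; the training pairs and function names are invented for illustration.

    # Toy contrast between memorization and a symbolic rule.
    # Hypothetical training data: the identity function, seen on a few values.
    training_pairs = {2: 2, 4: 4, 6: 6}

    def lookup_model(x):
        """Pure memorization: no answer for inputs outside the training set."""
        return training_pairs.get(x)  # None for unseen inputs

    def symbolic_model(x):
        """A symbolic rule induced from the same data: f(x) = x, for any x."""
        return x

    for x in (4, 7, 1000):
        print(x, lookup_model(x), symbolic_model(x))
    # 4    -> 4, 4          both answer correctly on a seen value
    # 7    -> None, 7       only the rule extrapolates to unseen inputs
    # 1000 -> None, 1000

The lookup degrades to nothing off the training data, while the rule, because it is stated over a variable rather than over specific values, covers the whole domain; that is the kind of generalization Marcus argues deep learning alone lacks.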



An example of what's holding back general intelligence research • /r/artificial

#artificialintelligence

It's interesting, but ultimately, to really 'understand' language in a deep way, these systems will need representations based on lower-level (possibly virtual) sensory inputs. That is one of the main enablers of truly general intelligence, because it is based on a common set of inputs over time, i.e. the senses. The domain is sensory input and motor output, and this is a truly general domain. It is also a domain that is connected to the way concepts map to the real physical world. So when advanced agent NN systems are put through their paces in virtual 3D worlds by training on simple words, phrases, commands, etc. involving 'real-world' demonstrations of the concepts, then we will see some next-level understanding.
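
A minimal sketch of the grounding idea in the comment above: associate each word with the sensory inputs that co-occur with its demonstrations, then name a new percept from the senses alone. Everything here, the words, the three-channel 'sensory' vectors, and the nearest-mean matching, is an invented toy, not a claim about any actual system.

    import statistics

    # Hypothetical demonstrations: word -> simulated low-level sensory vectors.
    demonstrations = {
        "red":  [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)],
        "blue": [(0.1, 0.1, 0.9), (0.2, 0.1, 0.8)],
    }

    # Ground each word as the mean of the sensory inputs demonstrated with it.
    grounded = {word: tuple(statistics.fmean(channel) for channel in zip(*obs))
                for word, obs in demonstrations.items()}

    def name_percept(percept):
        """Label a new sensory input with the nearest grounded word."""
        def dist(word):
            return sum((a - b) ** 2 for a, b in zip(grounded[word], percept))
        return min(grounded, key=dist)

    print(name_percept((0.85, 0.15, 0.1)))  # -> "red"

The point of the sketch is only the direction of the mapping: the concept is defined by sensory evidence rather than by other words, which is what the comment means by a 'truly general domain'.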



What will it take to build a conscious A.I. brain?

#artificialintelligence

Joscha Bach: If you look at our current technological systems, they are obviously nowhere near where our minds are. And one of the biggest questions for me is: what's the difference between where we are now and where we need to be if we want to build minds, if we want to build systems that are generally intelligent and self-motivated and maybe self-aware? And, of course, the answer to this is 'we don't know,' because if we knew, we'd have already done it. But there are basically several perspectives on this. One is to see our minds as general learning systems that are able to model arbitrary things, including themselves; and if that is what they are, they probably need a very distinct set of motivations and needs, things that they want to do.