Greg Brockman, cofounder of the nonprofit AI research organization OpenAI, had an interest in artificial intelligence from a young age but didn't come to it right away. Brockman studied computer science at Harvard before transferring to MIT, where he dropped out to help launch the online payments platform Stripe. As a founding engineer, Brockman helped scale the business from four people to 250. But he had his heart set on another field: artificial general intelligence, or systems that can perform any intellectual task a human can. Brockman left Stripe to pursue a career in AI, building his knowledge base from the ground up.
A recent Bloomberg article dives into the achievements of Jürgen Schmidhuber. In 1997, Schmidhuber co-developed long short-term memory, or LSTM, a technique often cited as a stepping stone toward artificial general intelligence (AGI). He states: "You can write it down in five lines of code. It can learn to put the important stuff in memory and ignore the unimportant stuff. LSTM can excel at many really important things in today's world, most famously speech recognition and language translation but also image captioning, where you see an image and then you write out words which explain what you see."
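Schmidhuber's "five lines of code" is rhetorical shorthand, but the core LSTM cell update really is compact: gates learn what to write into memory, what to forget, and what to expose. A minimal sketch of one cell step in NumPy (the stacked-gate layout, variable names, and sizes here are illustrative choices, not his original formulation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM cell step: gates decide what the memory c keeps
    and what the hidden state h exposes."""
    z = W @ np.concatenate([h, x]) + b  # all four gates computed at once
    H = h.shape[0]
    f = sigmoid(z[0:H])        # forget gate: ignore the unimportant stuff
    i = sigmoid(z[H:2*H])      # input gate: admit the important stuff
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate memory content
    c_new = f * c + i * g      # updated cell memory
    h_new = o * np.tanh(c_new) # new hidden state
    return h_new, c_new

# usage: hidden size 3, input size 2, run over a short random sequence
rng = np.random.default_rng(0)
H, X = 3, 2
W = rng.standard_normal((4 * H, H + X)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, X)):
    h, c = lstm_step(x, h, c, W, b)
```

The multiplicative forget gate is what lets the cell carry information across long sequences without the vanishing gradients that plagued earlier recurrent networks.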
The field of artificial intelligence has spawned a vast range of subfields and terms: machine learning, neural networks, deep learning and cognitive computing, to name but a few. Here, however, we will turn our attention to the specific term 'artificial general intelligence', thanks to the Portland-based AI company Kimera Systems' momentous claim to have launched the world's first example, called Nigel. The AGI Society defines artificial general intelligence as "an emerging field aiming at the building of 'thinking machines'; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)". AGI would, in theory, be able to perform any intellectual feat a human can. You can now perhaps see why a claim to have launched the world's first ever AGI might be a tad ambitious, to say the least.
It's interesting, but ultimately, to really 'understand' language in a deep way, these systems will need representations grounded in lower-level (possibly virtual) sensory inputs. That is one of the main enablers of truly general intelligence, because it is built on a common set of inputs over time, i.e. the senses. The domain of sensory input and motor output is a truly general one, and it is also a domain in which concepts map directly onto the real physical world. So when advanced neural-network agents are put through their paces in virtual 3D worlds, training on simple words, phrases and commands alongside 'real-world' demonstrations of the underlying concepts, we will see some next-level understanding.
Terence Mills, CEO of AI.io and Moonshot, is an AI pioneer and digital technology specialist. As our society's technological progress marches forward, we've become ever more fascinated with the concept of artificial general intelligence (AGI). From IBM's Jeopardy-playing computer, Watson, to television programs like Westworld, we've collectively begun exploring and philosophizing about the potential of AGI. Of course, most discussions about AGI in popular culture focus on the future rather than the current realities of the field. Below, we'll discuss those current realities and the breakthroughs we're on the cusp of in 2018.