OpenAI cofounder Greg Brockman on the transformative potential of artificial general intelligence


Greg Brockman, cofounder of nonprofit AI research organization OpenAI, had an interest in artificial intelligence from a young age, but didn't come to it right away. Brockman studied at Harvard before transferring to MIT, where he dropped out to join online payments platform Stripe. As a founding engineer, Brockman helped scale the business from four people to 250. But he had his heart set on another field: artificial general intelligence, or systems that can perform any intellectual task that a human can. Brockman left Stripe to pursue a career in AI, building a knowledge base from the ground up.

What is Artificial General Intelligence? And has Kimera Systems made a breakthrough?


The field of artificial intelligence has spawned a vast range of subfields and terms: machine learning, neural networks, deep learning and cognitive computing, to name but a few. Here, however, we will turn our attention to the specific term 'artificial general intelligence', prompted by the Portland-based AI company Kimera Systems' momentous claim to have launched the world's first ever example, called Nigel. The AGI Society defines artificial general intelligence as "an emerging field aiming at the building of 'thinking machines'; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)". AGI would, in theory, be able to perform any intellectual feat a human can. You can now perhaps see why a claim to have launched the world's first ever AGI might be a tad ambitious, to say the least.

Jürgen Schmidhuber, Father of Artificial General Intelligence #Robots #AI #AGI


A recent Bloomberg article dives into the achievements of Jürgen Schmidhuber. In 1997, Schmidhuber came up with long short-term memory, or LSTM, a stepping stone toward Artificial General Intelligence (AGI). He states: "You can write it down in five lines of code. It can learn to put the important stuff in memory and ignore the unimportant stuff. LSTM can excel at many really important things in today's world, most famously speech recognition and language translation but also image captioning, where you see an image and then you write out words which explain what you see."
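To make the "put the important stuff in memory and ignore the unimportant stuff" idea concrete, here is a minimal sketch of a single LSTM time step in NumPy. This is an illustrative reconstruction of the standard LSTM equations, not code from the article; the function and variable names (`lstm_step`, `W`, `U`, `b`) are placeholders of my choosing.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    W (4H x D), U (4H x H) and b (4H,) hold the stacked parameters for
    the input (i), forget (f) and output (o) gates plus the candidate (g).
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # pre-activations for all four parts
    i = sigmoid(z[0:H])               # input gate: what to write to memory
    f = sigmoid(z[H:2*H])             # forget gate: what to keep in memory
    o = sigmoid(z[2*H:3*H])           # output gate: what to expose
    g = np.tanh(z[3*H:4*H])           # candidate memory content
    c = f * c_prev + i * g            # keep the important, drop the rest
    h = o * np.tanh(c)                # new hidden state / output
    return h, c
```

The gating structure is the core of the idea: the forget gate `f` decides how much of the old cell state to retain, while the input gate `i` decides how much new content to write, letting the network learn what is worth remembering over long sequences.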

What will it take to build a conscious A.I. brain?


Joscha Bach: If you look at our current technological systems, they are obviously nowhere near where our minds are. And one of the biggest questions for me is: what's the difference between where we are now and where we need to be if we want to build minds, if we want to build systems that are generally intelligent and self-motivated and maybe self-aware? And, of course, the answer to this is 'we don't know', because if we knew we'd have already done it. But there are basically several perspectives on this. One is that our minds are general learning systems that are able to model arbitrary things, including themselves, and if that is the case, they probably need a very distinct set of motivations and needs; things that they want to do.

Which would be your general purpose machine learning library of choice (python)? • /r/MachineLearning


If you knew you would have to perform machine learning, both for classification and clustering, and didn't want to lose too much time mastering libraries, which library would you pick? I'm asking because I need to perform some machine learning for research, but the research isn't focused on the machine learning itself, so I don't want to "waste" too much time on it. I already have some background in the theory behind machine learning and have used it in R before.
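One common answer to questions like this is scikit-learn, which covers both classification and clustering behind a uniform `fit`/`predict` API. The library choice and the dataset below are my assumptions for illustration, not part of the original post; this is a minimal sketch of both tasks on a toy dataset.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Toy dataset: 150 iris flowers, 4 features, 3 classes.
X, y = load_iris(return_X_y=True)

# Classification: train/test split, fit, score.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # mean accuracy on the held-out split

# Clustering: same data, unsupervised.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_  # cluster assignment per sample
```

The appeal for research that isn't *about* machine learning is exactly this uniformity: swapping the classifier or clusterer means changing one constructor, not relearning an interface.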