Greg Brockman, cofounder of the nonprofit AI research organization OpenAI, was interested in artificial intelligence from a young age but didn't pursue it right away. Brockman studied at Harvard before transferring to MIT, where he dropped out to launch the online payments platform Stripe. As a founding engineer, Brockman helped scale the business from four people to 250. But he had his heart set on another field: artificial general intelligence, or systems that can perform any intellectual task a human can. Brockman left Stripe to pursue a career in AI, building a knowledge base from the ground up.
A recent Bloomberg article dives into the achievements of Jürgen Schmidhuber. In 1997, Schmidhuber co-developed long short-term memory, or LSTM, a recurrent neural-network architecture now regarded as a building block on the road to artificial general intelligence (AGI). He states: "You can write it down in five lines of code. It can learn to put the important stuff in memory and ignore the unimportant stuff. LSTM can excel at many really important things in today's world, most famously speech recognition and language translation, but also image captioning, where you see an image and then you write out words which explain what you see."
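Schmidhuber's "five lines of code" remark refers to the core update of an LSTM cell: gates that decide what to forget, what to store, and what to emit. The sketch below is an illustrative NumPy implementation of a single LSTM step, not code from the article; the function name `lstm_cell` and the packed weight layout are assumptions made for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One LSTM step. W packs all four gate weights, shape (4*n, len(x)+n)."""
    n = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b  # pre-activations for all gates
    f = sigmoid(z[:n])        # forget gate: what to drop from memory
    i = sigmoid(z[n:2*n])     # input gate: what new info to store
    o = sigmoid(z[2*n:3*n])   # output gate: what to expose
    g = np.tanh(z[3*n:])      # candidate memory content
    c = f * c_prev + i * g    # updated cell state -- the "memory"
    h = o * np.tanh(c)        # new hidden state
    return h, c
```

The last five statements are the whole recurrence: the forget and input gates learn which parts of the cell state to keep or overwrite, which is exactly the "important stuff in memory, ignore the unimportant stuff" behavior described in the quote.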
The field of artificial intelligence has spawned a vast range of subfields and terms: machine learning, neural networks, deep learning and cognitive computing, to name but a few. However, here we will turn our attention to the specific term 'artificial general intelligence', thanks to the Portland-based AI company Kimera Systems' momentous claim to have launched the world's first ever example, called Nigel. The AGI Society defines artificial general intelligence as "an emerging field aiming at the building of 'thinking machines'; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)". AGI would, in theory, be able to perform any intellectual feat a human can. You can now perhaps see why a claim to have launched the world's first ever AGI might be a tad ambitious, to say the least.
If you knew you would have to perform machine learning, both for classification and clustering, and didn't want to lose too much time mastering libraries, which library would you pick? I'm asking because I need to perform some machine learning for research, but the research isn't focused on the machine learning itself, so I don't want to "waste" too much time on that. I already have some background in the theory behind machine learning and have used it in R before.