Frontier AI: How far are we from artificial "general" intelligence, really?

Some call it "strong" AI, others "real" AI, "true" AI or artificial "general" intelligence (AGI)... whatever the term (and the important nuances among them), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human, possibly even at a superhuman level of intelligence, with unpredictable, uncontrollable consequences.

This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has been flaring anew with particular intensity, with an increasingly vocal stream of media coverage and conversations warning us that AGI (of the nefarious kind) is coming, and much sooner than we'd think. The latest example: the new documentary Do You Trust This Computer?, which streamed for free last weekend courtesy of Elon Musk and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence as a "new life form" on planet Earth that is about to "wrap its tentacles" around us.

There is also an accelerating flow of stories pointing to ever scarier aspects of AI: reports of alternate-reality creation (the fake celebrity face generator and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (the latest: robots cooperating to open a door) and reports of Google's AI getting "highly aggressive".

However, as an investor who spends a lot of time in the "trenches" of AI, I have been experiencing a fair amount of cognitive dissonance on this topic.