Finding Career Opportunities in AI

@machinelearnbot

Summary: Are there large, sustainable career opportunities in AI, and if so, where? Do they lie in the current technologies of deep learning and reinforcement learning, or should you focus your career on the next wave of AI? If you're a data scientist thinking about expanding your career options into AI, you've got a forest-and-trees problem. There's a lot going on in deep learning and reinforcement learning, but do these areas hold the best future job prospects, or do we need to be looking a little further forward? To answer that question we'll have to get out of the weeds of current development and take a higher-level perspective on where this is all headed.


The Data Science Behind AI

#artificialintelligence

Summary: For those of you traditional data scientists who are interested in AI but still haven't given it a deep dive, here's a high-level overview of the data science technologies that combine into what the popular press calls artificial intelligence (AI). We and others have written quite a bit about the various types of data science that make up AI. Still, I hear many folks asking about AI as if it were a single entity. AI is a collection of data science technologies that, at this point in their development, are not particularly well integrated or even easy to use. In each of these areas, however, we've made a lot of progress, and that's caught the attention of the popular press.


Frontier AI: How far are we from artificial "general" intelligence, really?

#artificialintelligence

Some call it "strong" AI, others "real" AI, "true" AI, or artificial "general" intelligence (AGI)... whatever the term (and important nuances), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human -- possibly even at a superhuman intelligence level, with unpredictable, uncontrollable consequences. This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has been flaring anew with particular intensity, with an increasingly vocal stream of media and conversations warning us that AGI (of the nefarious kind) is coming, and much sooner than we'd think. Latest example: the new documentary Do you trust this computer?, which streamed last weekend for free courtesy of Elon Musk and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence, a "new life form" on planet earth that is about to "wrap its tentacles" around us. There is also an accelerating flow of stories pointing to ever scarier aspects of AI, with reports of alternate reality creation (fake celebrity face generators and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (latest one: robots cooperating to open a door), and reports about Google's AI getting "highly aggressive". However, as an investor who spends a lot of time in the "trenches" of AI, I have been experiencing a fair amount of cognitive dissonance on this topic.

