AI today and tomorrow is mostly about curve fitting, not intelligence

#artificialintelligence

As debates around AI's value continue, the risk of an AI winter is real. We need to take stock of what is real and what is imagined, so that the next press release describing some amazing breakthrough is properly contextualized. Unquestionably, the latest spike of interest in AI technology using machine learning and neuron-inspired deep learning is behind incredible advances in many software categories. Achievements such as language translation, image and scene recognition, and conversational UIs that were once the stuff of sci-fi dreams are now a reality. Yet even as software using AI-labeled techniques continues to yield tremendous improvements across most software categories, both academics and skeptical observers have noted that such algorithms fall far short of what can reasonably be considered intelligent.
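
To make the "curve fitting" point concrete, here is a minimal, hypothetical Python sketch (not from the article; the data and polynomial degree are my own choices): the algorithm does nothing more than adjust coefficients to minimize error on observed data points.

```python
# A minimal sketch of what "curve fitting" means here: given example inputs
# and outputs, the algorithm only tunes parameters to reduce error on the
# data, with no understanding of the underlying task.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * np.sin(x) + rng.normal(scale=0.5, size=x.size)  # noisy observations

# Fit a degree-7 polynomial: pure curve fitting, no "intelligence" involved.
coeffs = np.polyfit(x, y, deg=7)
y_hat = np.polyval(coeffs, x)

print("mean squared error:", np.mean((y - y_hat) ** 2))
```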


All The Hype Is About AI, But The Real Action Is In IA

#artificialintelligence

The Artificial Intelligence ("AI") vs Intelligence Augmentation ("IA") debate has been around for over half a century. IA or Intelligence Augmentation classically refers to the effective use of information technology in augmenting human capabilities, and the idea has been around since the 1950s. AI is increasingly being used today to broadly describe machines that can mimic human functions such as learning and problem solving, but was originally founded on the premise that human intelligence can be precisely described, and machines made to simulate it. The term Artificial General Intelligence (AGI) is often used to represent only the latter, stricter definition. There is unprecedented hype today around AI, its incredible recent growth trajectory, myriad potential applications, and its potential emergent threats to society. The broader definition of AI creates confusion, especially for those that may not be closely following the technology.


The Silent Rockstar of BigData: Machine Learning

#artificialintelligence

Too much data and too few people: it is no surprise that machine learning algorithms can work through data at a pace their data scientist counterparts cannot match. Properly trained models can absorb much of the data preparation and analysis demand in the analytics world. Another appealing property of machine learning is reuse: once the code is prepped and the model is trained, it can be applied many times and in many places, as sketched below. The trick is not to overreach at first, but to apply it to overhead tasks and then make it progressively more sophisticated, so that it ends up doing the heavy lifting and easing the resource demand. In this way, machine learning alone can reduce the big data resource crunch and make resource allocation more relevant and appropriate.
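
As an illustration, here is a minimal, hypothetical Python sketch (the library choices, synthetic data, and file name are my own, not the article's) of the train-once, reuse-many-times idea: a model is fitted a single time for an overhead task, persisted, and then loaded and applied repeatedly elsewhere.

```python
# Hypothetical sketch: fit a model once, persist it, and reuse it on new
# batches of data without retraining.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
import joblib

# One-time cost: prepare data and train the model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Persist it so other pipelines and teams can reuse it without retraining.
joblib.dump(model, "overhead_task_model.joblib")

# Elsewhere: load and apply to new batches, over and over, at machine speed.
reused = joblib.load("overhead_task_model.joblib")
new_batch, _ = make_classification(n_samples=20, n_features=10, random_state=1)
print(reused.predict(new_batch))
```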


What most people don't understand about AI and the state of machine learning

#artificialintelligence

Though the term was officially coined in the 1950s, Artificial Intelligence (AI) is a concept that dates back to ancient Egyptian automatons and early Greek myths of robots. Notable attempts to define AI include the Turing test and the 1956 Dartmouth conference, and passionate AI advocates continue working to explain the concept to the world in a way that is distinguishable and digestible.