If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The scope of Artificial Intelligence is much broader, encompassing technologies such as virtual agents, natural language processing, machine learning platforms and many others. At GE, the main focus is on making machines smarter, leveraging machine learning to create "digital twins" – digital replicas, or data-based representations, of industrial machines. Adoption remains uneven, however: Salesforce's Connected Small Business Report notes that only 21% of small businesses currently use business intelligence and analytics. Prominent voices such as Stephen Hawking and Elon Musk sit on the sceptical side of the debate over AI, while Microsoft, Apple, Google and many others are already eagerly taking advantage of the technology.
On the May 8th edition of Closing Bell on CNBC, venture capitalist Chamath Palihapitiya, founder and CEO of Social Capital, created quite a stir in enterprise artificial intelligence (AI) circles, when he took on Watson, Big Blue's AI platform. "Human intelligence outperforms machine-learning applications in complex decision making routinely required during the course of care, because machines do not yet possess mature capabilities for perceiving, reasoning, or explaining," explained Ernest Sohn, a chief data scientist in Booz Allen's Data Solutions and Machine Intelligence group; Joachim Roski, a principal at Booz Allen Hamilton; Steven Escaravage, vice president in Booz Allen's Strategic Innovation Group; and Kevin Maloy, MD, assistant professor of emergency medicine at Georgetown University School of Medicine. "A health care organization that relies on a single EHR [Electronic Health Record] vendor's analytic solutions, as well as its own legacy analytics infrastructure created before the era of big data, may see limited progress," they continued. "While many machine-learning solutions are not yet mature and sophisticated enough to support complex clinical decisions, machine learning can be effectively deployed today to reduce more routine, time-consuming, and resource-intensive tasks, allowing freed-up personnel to be redeployed to support higher-end work."
By the mid-1950s, the world realized that computers were going to play a major role in future technology. Military, business and educational entities began investing heavily in computers, and rapidly advancing hardware meant that the potential for computing seemed endless. Artificial intelligence, perhaps more than any other aspect of computing, captured the public's imagination, and predictions of a future ruled by computation and robots were common in news stories and throughout science fiction literature and cinema. To understand why early experts were so optimistic about artificial intelligence, it's important to understand Moore's Law. Computers developed rapidly through the 1950s and early 1960s, and Gordon Moore, a co-founder of computing giants Fairchild Semiconductor and Intel, predicted that the number of transistors on an integrated circuit would double every year, leading to exponential growth in processing power.
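To see why a yearly doubling fueled such optimism, consider a quick back-of-the-envelope calculation (the starting figure below is illustrative; it is the widely cited transistor count of the Intel 4004, not a number from this article):

```python
# Illustrative sketch of Moore's original 1965 prediction:
# a yearly doubling of transistor counts compounds exponentially.
start = 2_300                   # rough transistor count of the Intel 4004 (1971)
years = 10
projected = start * 2 ** years  # ten doublings = a 1024x increase
print(projected)                # 2355200 -- roughly a thousandfold in a decade
```

Ten doublings multiply the count by 1,024, which is why even a modest chip was projected to grow a thousandfold within a decade.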
I first read Ray Kurzweil's book, The Age of Spiritual Machines, in 2006, a few years after I dropped out of Bible school and stopped believing in God. I was living alone in Chicago's southern industrial sector and working nights as a cocktail waitress. Beyond the people I worked with, I spoke to almost no one. I clocked out at three each morning, went to after-hours bars, and came home on the first train of the morning, my head pressed against the window so as to avoid the spectre of my reflection appearing and disappearing in the blackened glass. At Bible school, I had studied a branch of theology that divided all of history into successive stages by which God revealed his truth.
What distinguishes Elon Musk's reputation as an entrepreneur is that any venture he takes on comes from a bold and inspiring vision for the future of our species. Not long ago, Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. Given Musk's track record of accomplishing the seemingly impossible, the world is bound to pay extra attention when he says he wants to connect our brains to computers. Neuralink is registered as a medical company in California. With further details yet to be announced, it will attempt to create a "neural lace," which is a brain-machine interface that can be implanted directly into our brains to monitor and enhance them.
James Bedsol interviewed Ray Kurzweil, one of the world's leading minds on artificial intelligence, technology and futurism, in his Google office in Mountain View, CA, on February 15, 2017. Kurzweil is the author of five national best-selling books, including "The Singularity is Near" and "How to Create a Mind." An American author, computer scientist, inventor and futurist, he has worked, aside from futurology, in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments.
The end of the world as we know it is near. And that's a good thing, according to many of the futurists who are predicting the imminent arrival of what's been called the technological singularity. The technological singularity is the idea that technological progress, particularly in artificial intelligence, will reach a tipping point at which machines become exponentially smarter than humans. It has been a hot topic of late. Well-known futurist and Google engineer Ray Kurzweil (co-founder and chancellor of Singularity University) reiterated his bold prediction at Austin's South by Southwest (SXSW) festival this month that machines will match human intelligence by 2029 (he has previously said the Singularity itself will occur by 2045).
A few weeks ago, for the first time ever, a computer beat the world champion of Go, one of the most complex games known to man. This was another watershed moment in the progress of artificial intelligence. To give you an idea how complex Go is, there are about 2.082 × 10^170 possible board configurations – roughly a 2 followed by 170 zeroes. Chances are your brain cannot even conceive of a number that large (but a computer can).
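As a rough sketch of where a figure of that magnitude comes from (an illustration, not the article's own derivation): each of the 361 points on a 19×19 board can be empty, black, or white, giving 3^361 raw configurations, of which only a small fraction are legal positions – which lands in the neighborhood of the 2.082 × 10^170 figure. A computer handles the exact integer effortlessly:

```python
# Rough upper bound on Go board configurations: 3 states per point,
# 361 points on a 19x19 board. Python integers are arbitrary-precision,
# so the exact value is easy to compute.
raw = 3 ** 361
print(len(str(raw)))  # 173 -- i.e. roughly 1.74e172 raw configurations
```

Only a minority of those raw configurations are legal under Go's rules, which is why the commonly quoted count of legal positions is about two orders of magnitude smaller.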
When Ray Kurzweil met with Google CEO Larry Page last July, he wasn't looking for a job. A respected inventor who's become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. "I could try to give you some access to it," Page told Kurzweil.
There are only a few industries in which automation isn't threatening some job roles. That's a pretty scary thought, right? Well, don't panic just yet. "While automation will eliminate very few occupations entirely in the next decade, it will affect portions of almost all jobs to a greater or lesser degree, depending on the type of work they entail," according to McKinsey Quarterly. Roles that require empathy, like therapists and psychologists, as well as jobs that are highly reliant on social and negotiation skills, like managerial positions, are less threatened by automation, according to "The Future of Employment: How Susceptible Are Jobs to Computerisation?"