The Intelligence Enigma: Balancing the Power Between Humans and Machines

#artificialintelligence

Empowering the human is a piece of the puzzle often missing from the fast-paced tech world, yet it remains one of the most important drivers of success and true disruption. Think about the people behind the companies creating or using the most innovative technologies: even the biggest businesses rely on human creativity and emotional intelligence as much as they rely on technological development to survive, let alone thrive, in the digital age. These digital advancements are typically discussed in the context of technology and the sheer computational power of the machine. But what many business leaders fail to understand is that machines can't solve problems alone. Machines are the enabler, but without situational context and logic, these technologies can never serve as a replacement for humans.


Cognitive collaboration

#artificialintelligence

Although artificial intelligence (AI) has experienced a number of "springs" and "winters" in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts. The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! champions. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars.[1] All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. Regarding the latter topic, Elon Musk has described AI as "our biggest existential threat." Stephen Hawking warned that "The development of full artificial intelligence could spell the end of the human race." In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a kind of technological "singularity" at which point the general cognitive abilities of computers exceed those of humans.[2] Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to "outthink" us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.


Learning to trust artificial intelligence systems: accountability, com…

#artificialintelligence

Cognitive systems generate not just answers to numerical problems, but hypotheses, reasoned arguments and recommendations about more complex -- and meaningful -- bodies of data. What's more, they can make sense of the 80 percent of the world's data that computer scientists call "unstructured." This enables them to keep pace with the volume, complexity and unpredictability of information and systems in the modern world. None of this involves either sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand -- and act upon -- the complex systems of our society. This augmented intelligence is the necessary next step in our ability to harness technology in the pursuit of knowledge, to further our expertise and to improve the human condition. That is why it represents not just a new technology, but the dawn of a new era of technology, business and society: the Cognitive Era. The success of cognitive computing will not be measured by Turing tests or a computer's ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved.

It's not surprising that the public's imagination has been ignited by Artificial Intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse and frustrated by its slow development. But like so many advanced technologies that were conceived before their time, Artificial Intelligence has come to be widely misunderstood -- co-opted by Hollywood, mischaracterized by the media, portrayed as everything from savior to scourge of humanity. Those of us engaged in serious information science and in its application in the real world of business and society understand the enormous potential of intelligent systems. The future of such technology -- which we believe will be cognitive, not "artificial" -- has very different characteristics from those generally attributed to AI, spawning different kinds of technological, scientific and societal challenges and opportunities, with different requirements for governance, policy and management.

Cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans naturally. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. They are made possible by advances in a number of scientific fields over the past half-century, and are different in important ways from the information systems that preceded them. Here at IBM, we have been working on the foundations of cognitive computing technology for decades, combining more than a dozen disciplines of advanced computer science with 100 years of business expertise.


Untold History of AI: The DARPA Dreamer Who Aimed for Cyborg Intelligence

IEEE Spectrum Robotics

The history of AI is often told as the story of machines getting smarter over time. What's lost in that narrative is the human element: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI -- how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. At 10:30 pm on 29 October 1969, a graduate student at UCLA sent a two-letter message from an SDS Sigma 7 computer to another machine a few hundred miles away at the Stanford Research Institute in Menlo Park.