Cognitive collaboration

#artificialintelligence

Although artificial intelligence (AI) has experienced a number of "springs" and "winters" in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts. The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! champions. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars.1 All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. On the latter topic, Elon Musk has described AI as "our biggest existential threat," and Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race." In his widely discussed book Superintelligence, the philosopher Nick Bostrom explores the possibility of a kind of technological "singularity" at which point the general cognitive abilities of computers exceed those of humans.2 Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to "outthink" us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.



Minds and machines: The art of forecasting in the age of artificial intelligence

#artificialintelligence

Two of today's major business and intellectual trends offer complementary insights about the challenge of making forecasts in a complex and rapidly changing world. Forty years of behavioral science research into the psychology of probabilistic reasoning has revealed the surprising extent to which people routinely base judgments and forecasts on systematically biased mental heuristics rather than careful assessments of evidence. These findings have fundamental implications for decision making, ranging from the quotidian (scouting baseball players and underwriting insurance contracts) to the strategic (estimating the time, expense, and likely success of a project or business initiative) to the existential (estimating security and terrorism risks). The bottom line: Unaided judgment is an unreliable guide to action. Consider psychologist Philip Tetlock's celebrated multiyear study concluding that even top journalists, historians, and political experts do little better than random chance at forecasting such political events as revolutions and regime changes.1 The second trend is the increasing ubiquity of data-driven decision making and artificial intelligence applications. Once again, an important lesson comes from behavioral science: A body of research dating back to the 1950s has established that even simple predictive models outperform human experts at making predictions and forecasts. This implies that judiciously constructed predictive models can augment human intelligence by helping humans avoid common cognitive traps.
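To make "simple predictive model" concrete, here is a minimal Python sketch (not from the article) of an equal-weight linear scoring model in the spirit of the clinical-versus-statistical-prediction literature: each case is ranked by the unweighted sum of a few standardized cues. The cue names and data below are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of an equal-weight ("improper") linear model:
# score each case as the sum of its standardized cues, then rank by score
# instead of relying on unaided impressions.
import statistics


def standardize(values):
    """Convert raw cue values to z-scores so different cues are comparable."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0  # guard against a zero spread
    return [(v - mean) / sd for v in values]


def equal_weight_score(cases, cue_names):
    """Score each case as the unweighted sum of its standardized cues."""
    columns = {c: standardize([case[c] for case in cases]) for c in cue_names}
    return [sum(columns[c][i] for c in cue_names) for i in range(len(cases))]


# Hypothetical candidate-screening cues; higher totals indicate stronger forecasts.
candidates = [
    {"past_performance": 0.72, "work_sample": 6.5, "structured_interview": 3.8},
    {"past_performance": 0.55, "work_sample": 8.1, "structured_interview": 4.2},
    {"past_performance": 0.91, "work_sample": 5.0, "structured_interview": 3.1},
]
cues = ["past_performance", "work_sample", "structured_interview"]
print(equal_weight_score(candidates, cues))
```

Even a crude model like this imposes consistency: the same cues are weighed the same way for every case, which is precisely the discipline that unaided expert judgment tends to lack.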


Realizing the full potential of AI in the workplace

#artificialintelligence

Artificial intelligence (AI) is one of the signature issues of our time, but also one of the most easily misinterpreted. The prominent computer scientist Andrew Ng's slogan "AI is the new electricity"2 signals that AI is likely to be an economic blockbuster--a general-purpose technology3 with the potential to reshape business and societal landscapes alike. As Ng puts it: "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years."4 Such provocative statements naturally prompt the question: How will AI technologies change the role of humans in the workplaces of the future? An implicit assumption shaping many discussions of this topic might be called the "substitution" view: namely, that AI and other technologies will perform a continually expanding set of tasks better and more cheaply than humans, while humans will remain employed to perform those tasks at which machines ...

