This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, whereas judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which instead focuses on designs that eliminate the opportunity for human error and bias. As a result, almost no one at the workshop had met before, and for most participants the discussion was novel and lively.
This article is part of an MIT SMR initiative exploring how technology is reshaping the practice of management. Four lessons from IoT early adopters: To paraphrase the late Roy Scheider in one of the greatest of all summer movies, you're gonna need a bigger router. Machina Research predicts that by 2025 the Internet of Things will be a $3 trillion market of 27 billion devices generating more than 2 zettabytes of data. Two zettabytes is something like twice the total global IP traffic we'll generate this year, according to Cisco. The IoT data deluge is, by the way, the first of four lessons drawn from early IoT adopters by contributing writer Howard Baldwin for his article in Computerworld.
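To get a feel for the scale, a back-of-the-envelope calculation helps. The sketch below uses only the two projected figures quoted above (27 billion devices, 2 zettabytes); the per-device result is an illustrative derivation, not a number from Machina Research.

```python
# Back-of-the-envelope arithmetic on the projections cited above.
# These inputs are Machina Research's 2025 projections; the per-device
# figure is our own illustrative derivation.
devices = 27e9          # projected connected devices by 2025
total_bytes = 2e21      # 2 zettabytes = 2 * 10^21 bytes

per_device_gb = total_bytes / devices / 1e9  # bytes -> gigabytes
print(f"~{per_device_gb:.0f} GB generated per device per year")  # ~74 GB
```

Even spread across 27 billion devices, that projection works out to tens of gigabytes per device per year, which is why network capacity is the first lesson.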
To understand how advances in artificial intelligence are likely to change the workplace -- and the work of managers -- you need to know where AI delivers the most value. Major technology companies such as Apple, Google, and Amazon are prominently featuring artificial intelligence (AI) in their product launches and acquiring AI-based startups. The flurry of interest in AI is triggering a variety of reactions -- everything from excitement about how the capabilities will augment human labor to trepidation about how they will eliminate jobs. In our view, the best way to assess the impact of radical technological change is to ask a fundamental question: How does the technology reduce costs? Only then can we really figure out how things might change. To appreciate how useful this framing can be, let's review the rise of computer technology through the same lens.
What will it be like when machines make and execute decisions without any human intervention? Why would we build such systems, and what are their implications for the future of human judgment and free will? Hundreds, if not thousands, of science fiction stories tell us it's a bad idea to build automated systems without "human-in-the-loop" (HITL) processes for keeping them in check. In real life, the need for human intervention before executing an automated process is most obvious when it has serious, irreversible consequences, such as killing a person with a drone. In high-stakes situations like drone strikes, humans make the difficult judgment call before the weapon's deadly automation kicks in.
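The gating pattern described above can be sketched in a few lines: the automated system proposes actions freely, but an irreversible action only executes after a human signs off. This is a minimal illustration; the names (`Action`, `run_with_hitl`) are hypothetical, not from any real framework.

```python
# Minimal sketch of a human-in-the-loop (HITL) gate, under assumed names:
# reversible actions run autonomously, irreversible ones wait for approval.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    irreversible: bool

def execute(action: Action) -> str:
    # Stand-in for whatever the automated system actually does.
    return f"executed: {action.description}"

def run_with_hitl(action: Action, human_approves: Callable[[Action], bool]) -> str:
    # The human judgment call happens here, before automation kicks in.
    if action.irreversible and not human_approves(action):
        return f"blocked pending human review: {action.description}"
    return execute(action)

# Usage: the lambda stands in for a human reviewer's decision.
print(run_with_hitl(Action("send status report", False), lambda a: False))
print(run_with_hitl(Action("fire weapon", True), lambda a: False))
```

The design choice worth noting is that the gate is keyed to irreversibility: routine automation stays fast, and only the consequential path pays the latency cost of a human decision.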
Since the 1950s, researchers have documented many types of predictions in which algorithms outperform humans. Algorithms beat doctors and pathologists in predicting the survival of cancer patients, the occurrence of heart attacks, and the severity of diseases. Algorithms predict recidivism of parolees better than parole boards. And they predict whether a business will go bankrupt better than loan officers. According to anecdotes in a classic book on the accuracy of algorithms, many of the earliest findings were met with skepticism.