What will it be like when machines make and execute decisions without any human intervention? Why would we build such systems, and what are their implications for the future of human judgment and free will? Hundreds, if not thousands, of science fiction stories tell us it's a bad idea to build automated systems without "human-in-the-loop" (HITL) processes to keep them in check. In real life, the need for human intervention before an automated process executes is most obvious when that process has serious, irreversible consequences: killing a person with a drone, for example. In high-stakes situations like drone strikes, humans make the difficult judgment call before the weapon's deadly automation kicks in.
Political debates often suffer from vague-verbiage predictions that make it difficult to assess accuracy and improve policy. A tournament sponsored by the U.S. intelligence community revealed ways in which forecasters can better use probability estimates to make predictions, even for seemingly "unique" events, and showed that tournaments are a useful tool for generating knowledge. Drawing on the literature about the effects of accountability, the authors suggest that tournaments may hold even greater potential as tools for depolarizing political debates and resolving policy disputes.
Three women who sued their former taekwondo instructor for sexually abusing them while they were minors each have been awarded $20 million in damages by a California court. The $60-million default judgment against Marc Scott Gitelman was awarded last week by a Los Angeles Superior Court judge after Gitelman failed to respond to an amended complaint stemming from a 2015 civil lawsuit filed by the plaintiffs. The lawsuit alleged that Gitelman molested the women from 2007 until his arrest on sexual assault charges in August 2014. According to the lawsuit, on multiple occasions Gitelman invited the young athletes to his hotel room to watch videos of their previous taekwondo matches before he sexually abused them. Gitelman was sentenced in October 2015 to more than four years in state prison after a Pomona jury convicted him of multiple felony counts, including oral copulation of a minor, unlawful sexual intercourse and lewd acts upon a child.
Court is now in session, and author Robert J. Sawyer makes the case for leveraging AI to improve ethics and fairness in civil society. With 23 novels under his belt, as well as scores of short stories, scripts, treatments and more, Hugo and Nebula Award-winning author Robert J. Sawyer is not shy about exploring the technological and cultural landscape of our future. Among the many works in his remarkable and widely respected career, he authored the WWW trilogy (Wake, Watch and Wonder), in which a blind teenage girl uses advanced medical technology to augment her vision, only to discover a super-AI consciousness called Webmind that uses the Internet to grow. Over the course of the series, Sawyer investigates the possible consequences such a super-AI could unleash upon society, and how humans might respond. For his perspective on how humanity might relate to future artificial intelligences and what shape those interactions may take, we asked Sawyer about the dynamics of judgment and control; he also shared his overall sentiment on AI development.