This book-length article combines several peer-reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted here to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions, and how to avoid agents that corrupt their reward generators (sometimes called "perverse instantiation") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called "basic AI drives" or "instrumental goals") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, of being predicted by other agents, and of inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called "motivated value selection"). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.
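The utility-maximizing framework described above can be illustrated with a minimal sketch. This is not the article's formal definition; the world model, the utility function, and all states and actions here are hypothetical stand-ins. The point it shows is the core loop: a model-based agent scores each candidate action by the utility of the outcome its internal model predicts, then chooses the action with the highest score.

```python
# Minimal sketch of a model-based, utility-maximizing agent.
# The agent consults an internal world model to predict each action's
# outcome, then picks the action whose predicted outcome scores highest
# under its utility function. All values here are illustrative only.

def predicted_outcome(state, action):
    # Toy deterministic world model: an action shifts the state.
    return state + action

def utility(outcome):
    # Toy model-based utility function: prefer outcomes near a target.
    # (Evaluating the modeled outcome, rather than a raw reward signal,
    # is what makes the utility "model-based.")
    target = 10
    return -abs(outcome - target)

def choose_action(state, actions):
    # Evaluate each candidate action through the model; maximize utility.
    return max(actions, key=lambda a: utility(predicted_outcome(state, a)))

best = choose_action(state=7, actions=[-1, 0, 1, 2, 3])
print(best)  # prints 3: it moves the state from 7 to the target, 10
```

Because the utility function judges the modeled outcome rather than a sensory reward signal, an agent of this shape has no incentive to tamper with its own perceptions, which is the intuition behind using model-based utility functions to avoid self-delusion.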
Artificial intelligence researchers at IBM have introduced a major upgrade to the famed Watson computer, allowing it to understand idioms and colloquialisms for the first time. IBM says the update makes it the first commercial AI system capable of identifying, understanding and analysing some of the most challenging aspects of the English language. Phrases like "hardly helpful" and "hot under the collar" are tricky for algorithms to spot, meaning AI is unable to debate complex topics or have nuanced conversations with humans. "Language is a tool for expressing thought and opinion, as much as it is a tool for information," said Rob Thomas, a general manager at IBM Data and AI. "This is why we believe that advancing our ability to capture, analyse, and understand more from language with NLP will help transform how businesses utilise their intellectual capital that is codified in data."
Google's AlphaGo beats Lee Sedol at the game of Go

In 2016, major automakers like Tesla and Ford announced timelines for releasing fully autonomous vehicles. DeepMind's AlphaGo, Google's AI system, beat the world champion Lee Sedol at one of the most complex board games in history. And other major advancements in AI had big implications in healthcare, with some systems proving more effective at detecting cancer than human doctors. Want to learn what other cool things AI did in 2016? Here are TechRepublic's top picks.
With every passing day, we're reminded that the future is here. Yeah, that's sort of a redundant thing to say. What I really mean is that innovations and disruptions are popping up every day, and they're materializing at a rate never seen before. I mean, think about the fact that the modern computer, which was created in either 1942 or 1946, depending on who you ask, used to cost a fortune and fill up an entire room. It took almost 50 years before that computer shrank down to an affordable desktop machine in 1995.
Parents and experts are increasingly concerned about the damage being done to children by spending too much time looking at screens. The latest warning comes from the Royal College of Paediatrics and Child Health, which suggested that excessive use of screens could bring a whole host of negative outcomes for young people. That includes everything from bad sleep to the potential for cyber bullying, though the organisation warned that the damage might be overestimated. Helpfully, the technology industry is increasingly aware of the same problems and is trying to solve them with products. As concern has grown about the damage their products do, developers have added features that stop other features from being used – monitoring how long people spend on their phones, and kicking them off when it gets to be too much.