Weighing the Trade-Offs of Explainable AI
In 1997, IBM's supercomputer Deep Blue made a move against chess champion Garry Kasparov that left him stunned. The computer's choice to sacrifice one of its pieces seemed so inexplicable to Kasparov that he took it as a sign of the machine's superior intelligence. Shaken, he went on to resign his series against the computer, even though he had the upper hand. Fifteen years later, however, one of Deep Blue's designers revealed that the fateful move wasn't a sign of advanced machine intelligence: it was the result of a bug. Today, no human can beat a computer at chess, but the story still underscores just how easy it is to blindly trust AI when you don't know what's going on inside it.
Why explainable AI is indispensable to Zillow's business
Zillow, an online marketplace that facilitates the buying, selling, renting, financing, and remodeling of homes, employs many AI technologies for tasks like estimating home prices. But the output of AI systems like these can be opaque, creating a "black box" problem in which neither practitioners nor customers can properly audit the systems. Without transparency, serious problems like algorithmic bias can persist undetected, and trust in the models becomes impossible. Explainable AI (XAI) is therefore crucial to the creation and deployment of AI systems on ethical grounds, but pragmatically it's also key to the success of AI-powered products and services from companies like Zillow. David Fagnan, director of applied science on the Zillow Offers team, discussed with VentureBeat how and why XAI is indispensable for the company.
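To make the black-box problem concrete, here is a minimal sketch of one common XAI technique, permutation importance, applied to a hypothetical opaque home-price model. Everything here (the model, the features, the data) is an illustrative invention, not Zillow's actual system: the idea is simply that by shuffling one input at a time and watching how much the model's error grows, you can surface which inputs actually drive its predictions without opening the box.

```python
import random

random.seed(0)

def black_box_price(sqft, bedrooms, year_built):
    # Stand-in for an opaque pricing model we can only query, not inspect.
    return 150 * sqft + 5000 * bedrooms + 100 * (year_built - 1900)

# Toy dataset of (sqft, bedrooms, year_built) tuples with "true" prices.
homes = [(random.randint(800, 3500), random.randint(1, 5),
          random.randint(1920, 2020)) for _ in range(200)]
prices = [black_box_price(*h) for h in homes]

def mean_abs_error(preds, truths):
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(truths)

baseline = mean_abs_error([black_box_price(*h) for h in homes], prices)

# Permutation importance: shuffle one feature column at a time and measure
# how much the model's error grows; a bigger increase means the model
# leans on that feature more heavily.
importance = {}
for i, name in enumerate(["sqft", "bedrooms", "year_built"]):
    column = [h[i] for h in homes]
    random.shuffle(column)
    perturbed = [h[:i] + (v,) + h[i + 1:] for h, v in zip(homes, column)]
    importance[name] = mean_abs_error(
        [black_box_price(*h) for h in perturbed], prices) - baseline

print(importance)
```

In this sketch, square footage dominates the importance scores, which matches the model's internal weights, but the point of the technique is that it works even when those weights are hidden.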