Google's What-If Tool And The Future Of Explainable AI
Art exhibition "Waterfall of Meaning" by Google PAIR, displayed at the Barbican Curve Gallery.

The rise of deep learning has been defined by a shift away from transparent, human-written code towards sealed black boxes whose creators have little understanding of how, or even why, they produce the results they do. Concerns over bias, brittleness and flawed representations have driven growing interest in "explainable AI": frameworks that help interrogate a model's internal workings, shed light on precisely what it has learned about the world, and help its developers nudge it towards a fairer and more faithful internal representation. As companies like Google roll out a growing stable of explainable AI tools, such as its What-If Tool, a more transparent and understandable deep learning future may help address the limitations that have slowed the field's deployment.

Since the dawn of the computing revolution, the programming that guided those mechanical thinking machines has been supplied by humans in the form of transparent and visible instruction sets.
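For readers curious what using the What-If Tool actually looks like, it runs inside a Jupyter or Colab notebook and inspects a model through little more than a set of examples and a prediction function. The following is a minimal sketch, assuming the witwidget package is installed; the feature names, toy records and placeholder predict_fn are illustrative assumptions, not part of any real model.

```python
# Minimal sketch of launching the What-If Tool in a notebook (pip install witwidget).
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    """Pack one hypothetical record into a tf.Example proto."""
    return tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

# A handful of toy examples standing in for a real evaluation set.
examples = [
    make_example(29.0, 48000.0, 0),
    make_example(41.0, 72000.0, 1),
    make_example(35.0, 61000.0, 1),
]

def predict_fn(examples_to_score):
    # Placeholder scorer returning [negative, positive] probabilities per example;
    # in practice this would call the model under inspection.
    return [[0.4, 0.6] for _ in examples_to_score]

config = (WitConfigBuilder(examples)
          .set_custom_predict_fn(predict_fn)
          .set_label_vocab(["denied", "approved"]))
WitWidget(config, height=600)  # renders the interactive widget inline
```

From that widget a developer can edit individual examples, re-run them through the model, and compare outcomes across slices of the data, which is the kind of interrogation the article describes.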