Collaborating Authors

The Pragmatic Turn in Explainable Artificial Intelligence (XAI) Artificial Intelligence

In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve objectual understanding of a machine learning model, but are also a necessary condition for achieving post-hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post-hoc interpretability that seems to predominate in the most recent literature.

Query Understanding, Divided into Three Parts – Daniel Tunkelang – Medium
Like Rome, query understanding can't be built in one day. Implementing holistic understanding, reductionist understanding, and resolution is a lot of work, and as a search team you can always find room to improve all of these. But if you're not already looking at query understanding in this framework -- or if you're not looking at query understanding at all -- I urge you to consider it. It won't reduce the challenges, but it will help you tackle them in stages.

Rescuing Machine Learning with Symbolic AI for Language Understanding
From the average technology consumer to some of the most sophisticated organizations, it is striking how many people equate machine learning with artificial intelligence, or consider it the best AI has to offer. This perception persists largely because of the general public's fascination with deep learning and neural networks, which many regard as the most cutting-edge deployments of modern AI. The reality, however, is much more complex. There are certainly use cases in which machine learning is very capable. For example, it works well for computer vision applications such as image recognition and object detection.

What Does It Mean for AI to Understand?
Remember IBM's Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, "Watson understands natural language with all its ambiguity and complexity." However, as we saw when Watson subsequently failed spectacularly in its quest to "revolutionize medicine with artificial intelligence," a veneer of linguistic facility is not the same as actually comprehending human language. Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction, or anything else humans might write.