Design Thinking Humanizes Data Science

#artificialintelligence

The article "Cognitive Hub: The Future of Work" and the supporting infographic (see Figure 1) provides an interesting perspective on some "technology combinations" that could transform the workplace of the future, all enabled by Artificial Intelligence (AI): The infographic above is very cool and depicts a very interesting proposition. However, my concern with the proposition is that while these technology combinations could be quite powerful, the Internet of Things, Human-Machine Interfaces, Cyber physical systems and Artificial Intelligence are only enabling technologies, that is, they only give someone or something the means to do something. You still need someone or something to actually do something; to decide what to do, when to do it, where to do it, with whom to do it, how to do it, the required items to do it, etc. There is a H-U-G-E difference between enabling and doing. For example, I can enable you with an individualized diet and fitness plan that will improve your life, but the subsequent improvement in your life won't happen if you are not doing it.


Orthogonal Structure Search for Efficient Causal Discovery from Observational Data

arXiv.org Machine Learning

Identifying the causal structure among explanatory variables is of high practical importance in many disciplines. Recent work exploits stability of regression coefficients or invariance properties of models across different experimental conditions for reconstructing the full causal graph. These approaches generally do not scale well with the number of explanatory variables and are difficult to extend to nonlinear relationships. Contrary to existing work, we propose an approach which works even for observational data alone, while still offering theoretical guarantees. A more formal discussion is provided in Section 2. However, most state-of-the-art methods suffer from scalability problems, since they scan all potential subsets of variables and test whether the conditional distribution of Y given a subset of variables is invariant across all environments (Peters et al., 2016). This search is hence exponential in the number of covariates; the methods, while maintaining appealing theoretical guarantees, are thus already computationally hard for graphs of ten variables, and become infeasible for larger graphs, unless one resorts to heuristic procedures.
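To make the scalability point concrete, the sketch below enumerates every predictor subset and applies a simple residual-invariance test across environments, in the spirit of Peters et al. (2016). It is an illustrative reconstruction of the exhaustive search being criticized, not the method the paper proposes; the function name, the linear regression model, and the Kruskal-Wallis test are my own stand-ins for whatever model class and invariance test one prefers.

```python
# Illustrative exhaustive invariant-set search (assumptions: linear models and a
# Kruskal-Wallis residual test stand in for the invariance test of Peters et al., 2016).
from itertools import combinations

import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression


def invariant_subsets(X, y, env, alpha=0.05):
    """Return predictor subsets whose residuals look invariant across environments.

    X   : (n, p) array of candidate explanatory variables
    y   : (n,) target variable
    env : (n,) integer environment labels
    """
    n, p = X.shape
    accepted = []
    for k in range(p + 1):                      # all subset sizes: 2^p subsets in total
        for S in combinations(range(p), k):
            if S:
                model = LinearRegression().fit(X[:, list(S)], y)
                resid = y - model.predict(X[:, list(S)])
            else:
                resid = y - y.mean()            # empty set: residuals around the mean
            groups = [resid[env == e] for e in np.unique(env)]
            _, pval = stats.kruskal(*groups)    # same residual distribution everywhere?
            if pval > alpha:                    # invariance not rejected
                accepted.append(S)
    return accepted
```

Because the loops visit all 2^p subsets, even twenty candidate variables already mean over a million regressions, which is exactly the exponential blow-up the excerpt describes.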


Automating Path Analysis for Building Causal Models from Data: First Results and Open Problems

AAAI Conferences

This paper describes a statistical discovery procedure for finding causal structure in correlational data, called path analysis [Asher, 83; Li, 75], and an algorithm that builds path-analytic models automatically, given data. This work has the same goals as research in function finding and other discovery techniques, that is, to find rules, laws, and mechanisms that underlie nonexperimental data [Falkenhainer & Michalski 86; Langley et al., 87; Schaffer, 90; Zytkow et al., 90]. Whereas function-finding algorithms produce functional abstractions of (presumably) causal mechanisms, our algorithm produces explicitly causal models. Our work is most similar to that of Glymour et al. [87], who built the TETRAD system. Those algorithms will be discussed in a later section.
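As a rough illustration of what a path-analytic model looks like (classical path analysis in the sense of [Asher, 83; Li, 75], not the paper's automated model-search algorithm), the snippet below fits standardized regressions along an assumed ordering X -> M -> Y and decomposes the X-Y correlation into a direct and an indirect path. The three-variable structure and the coefficient values are hypothetical.

```python
# Classical path analysis on a hypothetical X -> M -> Y structure (assumed
# ordering); path coefficients are standardized regression coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)              # X -> M
y = 0.5 * m + 0.2 * x + rng.normal(size=n)    # M -> Y plus a direct X -> Y path


def z(v):                                     # standardize to zero mean, unit variance
    return (v - v.mean()) / v.std()


xs, ms, ys = z(x), z(m), z(y)

p_xm = np.polyfit(xs, ms, 1)[0]               # path X -> M (simple regression slope)
coef, *_ = np.linalg.lstsq(np.column_stack([xs, ms]), ys, rcond=None)
p_xy_direct, p_my = coef                      # paths X -> Y and M -> Y (multiple regression)

# Wright's decomposition: corr(X, Y) = direct path + indirect path through M
print("direct   X -> Y     :", round(p_xy_direct, 3))
print("indirect X -> M -> Y:", round(p_xm * p_my, 3))
print("corr(X, Y)          :", round(np.corrcoef(xs, ys)[0, 1], 3))
```

The printed correlation matches the sum of the direct and indirect paths (up to sampling noise), which is the kind of explicitly causal decomposition the excerpt contrasts with purely functional abstractions.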


The Seven Tools of Causal Inference, with Reflections on Machine Learning

Communications of the ACM

The dramatic success in machine learning has led to an explosion of artificial intelligence (AI) applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations have, however, met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted that current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for. Intensive theoretical and experimental efforts toward "transfer learning," "domain adaptation," and "lifelong learning" [4] are reflective of this obstacle. Another obstacle is "explainability," or that "machine learning models remain mostly black boxes" [26], unable to explain the reasons behind their predictions or recommendations, thus eroding users' trust and impeding diagnosis and repair; see Hutson [8] and Marcus [11]. A third obstacle concerns the lack of understanding of cause-effect connections.


Causation: The Why Beneath The What

@machinelearnbot

Kevin Gray: If we think about it, most of our daily conversations invoke causation, at least informally. We often say things like "I dropped by this store instead of my usual place because I needed to go to the laundry and it was on the way" or "I always buy chocolate ice cream because that's what my kids like." First, to get started, can you give us nontechnical definitions of causation and causal analysis? Tyler VanderWeele: Well, it turns out that there are a number of different contexts in which words like "cause" and "because" are used. Aristotle, in his Physics and again in his Metaphysics, distinguished between what he viewed as four different types of causes: material causes, formal causes, efficient causes, and final causes.