The article "Cognitive Hub: The Future of Work" and the supporting infographic (see Figure 1) provide an interesting perspective on "technology combinations" that could transform the workplace of the future, all enabled by Artificial Intelligence (AI). The infographic depicts an intriguing proposition. However, my concern is that while these technology combinations could be quite powerful, the Internet of Things, human-machine interfaces, cyber-physical systems, and Artificial Intelligence are only enabling technologies; that is, they only give someone or something the means to do something. You still need someone or something to actually do it: to decide what to do, when to do it, where to do it, with whom to do it, how to do it, which items are required, and so on. There is a H-U-G-E difference between enabling and doing. For example, I can enable you with an individualized diet and fitness plan that will improve your life, but the improvement won't happen if you don't follow the plan.
…explanatory variables is of high practical importance in many disciplines. Recent work exploits stability of regression coefficients or invariance properties of models across different experimental conditions for reconstructing the full causal graph. These approaches generally do not scale well with the number of explanatory variables and are difficult to extend to nonlinear relationships. Contrary to existing work, we propose an approach which even works for observational data alone, while still offering theoretical guarantees.

A more formal discussion is provided in Section 2. However, most state-of-the-art methods suffer from scalability problems since they scan all potential subsets of variables and test whether the conditional distribution of Y given a subset of variables is invariant across all environments (Peters et al., 2016). This search is hence exponential in the number of covariates; the methods, while maintaining appealing theoretical guarantees, are thus already computationally hard for graphs of ten variables, and get infeasible for larger graphs, unless one resorts to heuristic procedures.
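To see why the search is exponential, the subset-scanning procedure described above can be sketched as follows. This is a hypothetical illustration in the spirit of invariant causal prediction (Peters et al., 2016), not the paper's actual method: the function name, the least-squares fit, and the crude residual-mean comparison are all placeholders for the proper statistical invariance tests a real implementation would use.

```python
from itertools import combinations

import numpy as np

def invariant_subsets(X, y, env, alpha=1.0):
    """Scan all 2^p subsets of predictors and keep those whose residuals
    look invariant across environments (toy criterion, for illustration)."""
    n, p = X.shape
    accepted = []
    for k in range(p + 1):
        for S in combinations(range(p), k):
            # Pool all environments and regress y on the candidate subset
            # (a column of ones stands in for the empty subset).
            Xs = X[:, S] if S else np.ones((n, 1))
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            # Toy invariance check: residual means agree across environments.
            means = [resid[env == e].mean() for e in np.unique(env)]
            if max(means) - min(means) < alpha:
                accepted.append(S)
    return accepted
```

The nested loop over `combinations(range(p), k)` for every subset size `k` is exactly the exponential search the text criticizes: with ten variables it already visits 1,024 subsets, each requiring its own regression fit and invariance test.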
Those algorithms will be discussed in a later section. This paper describes a statistical discovery procedure for finding causal structure in correlational data, called path analysis [Asher, 83; Li, 75], and an algorithm that builds path-analytic models automatically, given data. This work has the same goals as research in function finding and other discovery techniques, that is, to find rules, laws, and mechanisms that underlie nonexperimental data [Falkenhainer & Michalski, 86; Langley et al., 87; Schaffer, 90; Zytkow et al., 90]. Whereas function-finding algorithms produce functional abstractions of (presumably) causal mechanisms, our algorithm produces explicitly causal models. Our work is most similar to that of Glymour et al., who built the TETRAD system.
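For readers unfamiliar with path analysis, a minimal sketch may help. Path coefficients are standardized regression coefficients along a hypothesized causal graph; the chain X → Y → Z below and all variable names are assumptions chosen for illustration, not the models discussed in the paper.

```python
import numpy as np

def path_coefficient(cause, effect):
    """Standardized regression coefficient of effect on a single cause
    (with one parent per variable, this equals their correlation)."""
    c = (cause - cause.mean()) / cause.std()
    e = (effect - effect.mean()) / effect.std()
    return float(np.mean(c * e))

# Simulated data from an assumed chain model X -> Y -> Z.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.8 * x + rng.normal(scale=0.6, size=500)
z = 0.5 * y + rng.normal(scale=0.9, size=500)

p_xy = path_coefficient(x, y)   # path X -> Y
p_yz = path_coefficient(y, z)   # path Y -> Z

# In a chain, the model-implied correlation of X and Z is the product of
# the path coefficients along the connecting path (Wright's tracing rule).
implied_r_xz = p_xy * p_yz
```

Comparing `implied_r_xz` to the observed correlation of X and Z is the basic move of path analysis: a causal model is explicit about which paths exist, so it makes testable predictions about the correlations, which a purely functional abstraction does not.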
The dramatic success in machine learning has led to an explosion of artificial intelligence (AI) applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations have, however, met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted that current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for. Intensive theoretical and experimental efforts toward "transfer learning," "domain adaptation," and "lifelong learning" [4] are reflective of this obstacle. Another obstacle is "explainability," or that "machine learning models remain mostly black boxes" [26], unable to explain the reasons behind their predictions or recommendations, thus eroding users' trust and impeding diagnosis and repair; see Hutson [8] and Marcus [11]. A third obstacle concerns the lack of understanding of cause-effect connections.
"Correlation does not imply causation": we all know this mantra from statistics. And we think that we fully understand it. Human (and non-human) brains, being machines built to find patterns, quickly conclude that my coffee mug broke because it fell to the floor. One event (the falling) occurred just before the other (the mug breaking), and without the first event we would never see the second. So there is not only a correlation between mugs falling and mugs breaking; there is also a causal relation (with lots of physics going on).