Ultravioletto's Neural Mirror shows audiences an AI reflection of themselves


Visitors to a former church in the Italian city of Spoleto will encounter a mirror that uses artificial intelligence and facial recognition to build an otherworldly image of themselves. Italian design studio Ultravioletto created the Neural Mirror installation to give audiences a chance to contemplate these controversial technologies in an artistic setting. The ghostly, rainbow-coloured reflections viewers see of themselves are in fact AI-generated point clouds. At first glance, however, audiences experience the installation as a normal mirror, as it contains a mirrored film layered over OLED displays that reflects their image back at them. It is only after the facial recognition software has scanned and processed their presence -- decoding the subject's likely sex, age, race and emotional state -- that the viewer sees the AI's interpretation of them on the screen, obscuring the mirror.

From jobs to superjobs


The use of artificial intelligence (AI), cognitive technologies, and robotics to automate and augment work is on the rise, prompting the redesign of jobs in a growing number of domains. The jobs of today are more machine-powered and data-driven than in the past, and they also require more human skills in problem-solving, communication, interpretation, and design. As machines take over repeatable tasks and the work people do becomes less routine, many jobs will rapidly evolve into what we call "superjobs" -- the newest job category that changes the landscape of how organizations think about work. During the last few years, many have been alarmed by studies predicting that AI and robotics will do away with jobs. In 2019, this topic remains very much a concern among our Global Human Capital Trends survey respondents.

A.I. Ethics Boards Should Be Based on Human Rights


Who should be on the ethics board of a tech company that's in the business of artificial intelligence (A.I.)? Given the attention to the devastating failure of Google's proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it's crucial to get to the bottom of this question. Google, for one, admitted it's "going back to the drawing board." Tech companies are realizing that artificial intelligence changes power dynamics and that, as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That's why they're publishing vision documents like "Principles for A.I." when they haven't done anything comparable for previous technologies.

If Machines Want To Make Art, Will Humans Understand It? - Liwaiwai


Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes? Empathy, we argue, is the force behind our capacity to understand works of art.

Training Machine Learning Models Using Noisy Data - Butterfly Network


Dr. Zaius: I think you're crazy.

The concept of a second opinion in medicine is so common that most people take it for granted, especially given a severe diagnosis. Disagreement between two doctors may be due to different levels of expertise, different levels of access to patient information, or simply human error. Like all humans, even the world's best doctors make mistakes. At Butterfly, we're building machine learning tools that will act as a second pair of eyes for a doctor and even automate parts of their workflow that are laborious or error-prone.
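The post does not spell out Butterfly's training pipeline, but one common way to learn from disagreeing experts is to aggregate their noisy labels before training. The sketch below is a minimal, hypothetical illustration: start from majority vote, then iteratively reweight each annotator by agreement with the consensus (a simplified, one-parameter flavor of the classic Dawid-Skene model).

```python
import numpy as np

def aggregate_noisy_labels(votes, n_iter=10):
    """Estimate true binary labels from multiple noisy annotators.

    votes: (n_items, n_annotators) array of 0/1 labels.
    Starts from per-item majority vote, then repeatedly:
      1. scores each annotator's reliability as their agreement
         rate with the current consensus,
      2. recomputes the consensus as a log-odds-weighted vote,
         so unreliable annotators count against their own label.
    """
    consensus = (votes.mean(axis=1) > 0.5).astype(float)
    reliability = np.full(votes.shape[1], 0.5)
    for _ in range(n_iter):
        # fraction of items where each annotator matches the consensus
        reliability = (votes == consensus[:, None]).mean(axis=0)
        w = np.clip(reliability, 1e-3, 1 - 1e-3)
        logit = np.log(w / (1 - w))          # weight per annotator
        score = ((2 * votes - 1) * logit).sum(axis=1)
        consensus = (score > 0).astype(float)
    return consensus, reliability
```

An annotator who systematically disagrees with the crowd ends up with reliability below 0.5 and a negative weight, so their votes are effectively inverted rather than discarded.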

Learning Directed Graphical Models from Gaussian Data (Machine Learning)

In this paper, we introduce two new directed graphical models from Gaussian data: the Gaussian graphical interaction model (GGIM) and the Gaussian graphical conditional expectation model (GGCEM). The development of these models comes from considering stationary Gaussian processes on graphs, and leveraging the equations relating the resulting steady-state covariance matrix to the Laplacian matrix representing the interaction graph. Through the presentation of conceptually straightforward theory, we develop the new models and provide interpretations of the edges in each graphical model in terms of statistical measures. We show that when restricted to undirected graphs, the Laplacian matrix representing a GGIM is equivalent to the standard inverse covariance matrix that encodes conditional dependence relationships. We demonstrate that the problem of learning sparse GGIMs and GGCEMs for a given observation set can be framed as a LASSO problem. By comparison with the problem of inverse covariance estimation, we prove a bound on the difference between the covariance matrix corresponding to a sparse GGIM and the covariance matrix corresponding to the $l_1$-norm penalized maximum log-likelihood estimate. In all, the new models present a novel perspective on directed relationships between variables and significantly expand on the state of the art in Gaussian graphical modeling.
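The paper's specific GGIM/GGCEM estimators are not reproduced here, but the general idea of framing sparse Gaussian graph learning as a LASSO problem can be illustrated with the standard neighborhood-selection approach (Meinshausen-Bühlmann style): regress each variable on all the others with an $l_1$ penalty and draw an edge wherever a coefficient survives. The ISTA solver below is a minimal sketch, not the paper's algorithm.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xw||^2 + lam * ||w||_1 via ISTA
    (proximal gradient with soft-thresholding)."""
    n, p = X.shape
    w = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

def neighborhood_graph(X, lam=0.1):
    """Undirected edge (i, j) whenever variable j gets a nonzero
    LASSO coefficient when predicting variable i."""
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        rest = [j for j in range(p) if j != i]
        w = lasso_ista(X[:, rest], X[:, i], lam)
        for j, wj in zip(rest, w):
            if wj != 0.0:
                adj[i, j] = adj[j, i] = True
    return adj
```

On data generated from a chain X1 -> X2 -> X3, the penalty zeroes out the X1-X3 coefficient (the pair is conditionally independent given X2), so only the chain edges survive.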

Exact and Consistent Interpretation of Piecewise Linear Models Hidden behind APIs: A Closed Form Solution (Machine Learning)

More and more AI services are provided through APIs in the cloud, where predictive models are hidden behind APIs. To build trust with users and reduce potential application risk, it is important to interpret how such predictive models hidden behind APIs make their decisions. The biggest challenge of interpreting such predictions is that no access to model parameters or training data is available. Existing works interpret the predictions of a model hidden behind an API by heuristically probing the response of the API with perturbed input instances. However, these methods do not provide any guarantee on the exactness and consistency of their interpretations. In this paper, we propose an elegant closed form solution named \texttt{OpenAPI} to compute exact and consistent interpretations for the family of Piecewise Linear Models (PLM), which includes many popular classification models. The major idea is to first construct a set of overdetermined linear equation systems with a small set of perturbed instances and the predictions made by the model on those instances. Then, we solve the equation systems to identify the decision features that are responsible for the prediction on an input instance. Our extensive experiments clearly demonstrate the exactness and consistency of our method.
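The paper's exact construction is more careful than this, but the core idea can be sketched in a few lines: near any input, a piecewise linear model coincides with one linear function, so probing the black box with small perturbations and solving an overdetermined least-squares system recovers the local weights and bias exactly (up to floating point). The `black_box` model below is a hypothetical stand-in, not one from the paper.

```python
import numpy as np

def local_linear_interpretation(predict, x0, eps=1e-2, n_probes=12, seed=0):
    """Recover the local linear form w.x + b of a piecewise linear
    black box around x0, by building an overdetermined system from
    perturbed probes and solving it with least squares."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    X = x0 + eps * rng.standard_normal((n_probes, n))   # perturbed instances
    y = np.array([predict(x) for x in X])               # black-box predictions
    A = np.hstack([X, np.ones((n_probes, 1))])          # columns: features + bias
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:n], coef[n]                            # (w, b)

# Hypothetical black box: a one-unit ReLU model, linear near x0 = (2, 1)
def black_box(x):
    return max(0.0, 2.0 * x[0] - x[1]) + 3.0

w, b = local_linear_interpretation(black_box, np.array([2.0, 1.0]))
```

The recovered `w` identifies the decision features: here the first feature pushes the prediction up with weight 2 and the second down with weight 1, matching the active linear piece.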

A Bayesian Solution to the M-Bias Problem (Machine Learning)

When using regression-type models to infer causal effects, it is common practice to include, or ``adjust for'', extra covariates, since without this adjustment erroneous causal effects can be inferred. Given this phenomenon it is common practice to include as many covariates as possible; however, such advice comes unstuck in the presence of M-bias. M-bias is a problem in causal inference where the correct estimation of treatment effects requires that certain variables are not adjusted for, i.e. are simply omitted from the model. This issue caused a storm of controversy in 2009 when Rubin, Pearl and others disagreed about whether it could be problematic to include additional variables in models when inferring causal effects. This paper makes two contributions to this issue. First, we provide a Bayesian solution to the M-bias problem. The solution replicates Pearl's solution but, consistent with Rubin's advice, we condition on all variables. Second, the fact that we are able to offer a solution to this problem in Bayesian terms shows that it is indeed possible to represent causal relationships within the Bayesian paradigm, albeit in an extended space. We make several remarks on the similarities and differences between causal graphical models, which implement the do-calculus, and probabilistic graphical models, which enable Bayesian statistics. We hope this work will stimulate more research on unifying Pearl's causal calculus using causal graphical models with traditional Bayesian statistics and probabilistic graphical models.
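The paper's Bayesian construction is not reproduced here, but M-bias itself is easy to see numerically. The sketch below simulates the classic M-structure (U1 -> A, U1 -> M <- U2, U2 -> Y, A -> Y) with unit-variance Gaussian noise and a true effect of 1: regressing Y on A alone is unbiased because no backdoor path is open, while additionally adjusting for the collider M opens the path A <- U1 -> M <- U2 -> Y and biases the estimate (here, to 0.8 in expectation).

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200_000, 1.0            # sample size, true causal effect of A on Y

# Classic M-structure: U1 and U2 are unobserved, M is a collider.
u1 = rng.standard_normal(n)
u2 = rng.standard_normal(n)
a = u1 + rng.standard_normal(n)          # U1 -> A
m = u1 + u2 + rng.standard_normal(n)     # U1 -> M <- U2
y = tau * a + u2 + rng.standard_normal(n)  # A -> Y, U2 -> Y

def ols(y, *covs):
    """Least-squares coefficients for y on the covariates plus intercept."""
    X = np.column_stack(covs + (np.ones(len(y)),))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

unadjusted = ols(y, a)[0]       # Y ~ A: unbiased, approx tau
adjusted = ols(y, a, m)[0]      # Y ~ A + M: collider bias, approx tau - 0.2
```

With these coefficients the expected adjusted estimate works out to tau minus 1/5, so "more covariates" is strictly worse here.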

A General Interpretation of Deep Learning by Affine Transform and Region Dividing without Mutual Interference (Machine Learning)

This paper mainly deals with the "black-box" problem of deep learning composed of ReLUs with n-dimensional input space, along with some discussion of sigmoid-unit deep learning. We prove that a region of input space can be transmitted to succeeding layers one by one in the sense of affine transforms; adding a new layer can help to realize subregion dividing without influencing an excluded region, which is a key distinctive feature of deep learning. A constructive proof is then given to demonstrate that multi-category data points can be classified by deep learning. Furthermore, we prove that deep learning can approximate an arbitrary continuous function on a closed set of n-dimensional space with arbitrary precision. Finally, we generalize some of the conclusions for ReLU deep learning to the case of sigmoid-unit deep learning.
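The "affine transform" view of ReLU networks can be checked numerically: once the activation pattern is fixed, the whole network collapses to a single affine map on that region. The toy net below (hand-picked weights, not from the paper) fixes the pattern at a point x0 and composes the masked layers into one (W, b).

```python
import numpy as np

# Hand-picked two-layer ReLU net: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.5, -2.0, 0.1])
W2 = np.array([[1.0, -1.0, 2.0]])
b2 = np.array([0.3])

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x0 = np.array([1.0, 1.0])
mask = (W1 @ x0 + b1 > 0).astype(float)   # activation pattern at x0

# Compose into a single affine map, valid on x0's activation region:
# inactive units contribute nothing, so their rows are zeroed out.
W_eff = W2 @ (mask[:, None] * W1)
b_eff = W2 @ (mask * b1) + b2
```

Within the region where the pattern stays the same, `f` and the composed affine map agree exactly; crossing a region boundary flips a mask entry and switches to a different affine piece.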

A Computational-Hermeneutic Approach for Conceptual Explicitation (Artificial Intelligence)

We present a computer-supported approach for the logical analysis and conceptual explicitation of argumentative discourse. Computational hermeneutics harnesses recent progress in automated reasoning for higher-order logics and aims at formalizing natural-language argumentative discourse using flexible combinations of expressive non-classical logics. In doing so, it allows us to render explicit the tacit conceptualizations implicit in argumentative discursive practices. Our approach operates on networks of structured arguments and is iterative and two-layered. At one layer we search for logically correct formalizations for each of the individual arguments. At the next layer we select among those correct formalizations the ones which honor the argument's dialectic role, i.e. attacking or supporting other arguments as intended. We operate at these two layers in parallel and continuously rate sentences' formalizations by using, primarily, inferential adequacy criteria. An interpretive, logical theory will thus gradually evolve. This theory is composed of meaning postulates serving as explications for concepts playing a role in the analyzed arguments. Such a recursive, iterative approach to interpretation does justice to the inherent circularity of understanding: the whole is understood compositionally on the basis of its parts, while each part is understood only in the context of the whole (hermeneutic circle). We summarily discuss previous work on exemplary applications of human-in-the-loop computational hermeneutics in metaphysical discourse. We also discuss some of the main challenges involved in fully automating our approach. By sketching some design ideas and reviewing relevant technologies, we argue for the technological feasibility of a highly automated computational hermeneutics.
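The actual approach uses automated reasoners for higher-order logics; the toy sketch below only illustrates the second layer in a propositional setting, with entirely made-up arguments. Candidate formalizations of an argument are kept when their dialectic role matches the intended network, here taking "B attacks A" to mean the two formalizations are jointly unsatisfiable.

```python
from itertools import product

ATOMS = ("p", "q")

def models(formula):
    """All truth assignments (as dicts over ATOMS) satisfying a
    formula given as a function over such a dict."""
    return [dict(zip(ATOMS, vals))
            for vals in product([False, True], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))]

def attacks(f, g):
    """f attacks g when no assignment satisfies both (joint unsat)."""
    return not any(g(s) for s in models(f))

# Argument A is already formalized; argument B has two candidate
# readings produced by the search layer (both logically well-formed).
arg_a = lambda s: s["p"] and s["q"]
candidates_b = {
    "reads B as q":     lambda s: s["q"],
    "reads B as not p": lambda s: not s["p"],
}

# Selection layer: keep candidates whose role matches the argument
# network, in which B is intended to attack A.
selected = [name for name, form in candidates_b.items()
            if attacks(form, arg_a)]
```

Only the reading that actually contradicts A survives; iterating this selection over a whole argument network, with a real prover replacing the truth-table check, is the shape of the approach described above.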