If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
An MIT report estimates truly autonomous vehicles might not hit the streets for a decade. And when they do, it's difficult to say whether they will fully accommodate all riders, including those with disabilities. Driverless car technology promises to remove barriers to personal transportation, but few self-driving operators have made headway on solutions for customers with mobility, vision, and hearing impairments, including seniors and those with chronic health conditions. Some companies are further along than others. Alphabet's Waymo is engaged with collaborators -- including the Foundation for Senior Living in Phoenix and the Foundation for Blind Children -- in an effort to ensure its vehicles remain accessible.
Google's AI can now identify food in the supermarket, in a move designed to help the visually impaired. It is part of Google's Lookout app, which aims to help those with low or no vision identify things around them. A new update has added the ability for a computer voice to say aloud what food it thinks a person is holding based on its visual appearance. One UK blindness charity welcomed the move, saying it could help boost blind people's independence. Google says the feature will "be able to distinguish between a can of corn and a can of green beans".
The Fourth of July is a special US holiday for a few important reasons. First, it is a celebration of the birth of our nation filled with symbolism, patriotism, picnics, guilt-free potato salad binging, and lots of noisy fireworks. Second, it signifies the unofficial start of summer, when schools are closed, families take leisurely vacations, and flip flops overtake work shoes as the footwear of choice. Last and most importantly, the July 4th holiday symbolizes a spirit of independence that is deeply embedded in the DNA of our country. It's a day when many of us, consciously or unconsciously, reflect upon the importance that freedom and self-reliance play in our lives.
Spirtes, Glymour and Scheines formulated a Conjecture that a direct dependence test and a head-to-head meeting test would suffice to construct the directed acyclic graph decompositions of a joint probability distribution (Bayesian network) for which Pearl's d-separation applies. This Conjecture was later shown to be a direct consequence of a result of Pearl and Verma. This paper proves the Conjecture in a new way, by exploiting the concept of p-d-separation (partial dependency separation). While Pearl's d-separation works with Bayesian networks, p-d-separation is intended to apply to causal networks, that is, partially oriented networks in which orientations are given only to those edges that express statistically confirmed causal influence, whereas undirected edges express the existence of direct influence without the possibility of determining the direction of causation. As a consequence of the particular way of proving the validity of the Conjecture, an algorithm for constructing all the directed acyclic graphs (dags) carrying the available independence information is also presented. The notion of a partially oriented graph (pog) is introduced, and within this graph the notion of p-d-separation is defined. It is demonstrated that p-d-separation within the pog is equivalent to d-separation in all derived dags.
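The d-separation criterion that the Conjecture builds on can be checked mechanically. Below is a minimal sketch in plain Python using the standard moralization criterion (X and Y are d-separated by Z iff they are separated in the moralized ancestral graph of X ∪ Y ∪ Z); the graph encoding and function names are illustrative, not taken from the paper:

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All ancestors of `nodes`, including the nodes themselves.
    `dag` maps each node to the set of its parents."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, xs, ys, zs):
    """True iff every x in xs is d-separated from every y in ys given zs,
    via the moralization criterion on the ancestral subgraph."""
    sub = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Build the moral graph: parent-child edges plus "married" co-parents.
    adj = {v: set() for v in sub}
    for v in sub:
        for p in dag[v]:
            adj[v].add(p); adj[p].add(v)
        for p, q in combinations(dag[v], 2):
            adj[p].add(q); adj[q].add(p)
    # Undirected separation after removing the conditioning set zs.
    blocked = set(zs)
    frontier = [x for x in xs if x not in blocked]
    reached = set(frontier)
    while frontier:
        for n in adj[frontier.pop()]:
            if n not in reached and n not in blocked:
                reached.add(n)
                frontier.append(n)
    return not (reached & set(ys))

# Classic collider example: A -> C <- B
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
print(d_separated(dag, {"A"}, {"B"}, set()))   # marginally independent
print(d_separated(dag, {"A"}, {"B"}, {"C"}))   # conditioning on the collider opens the path
```

In the collider example, A and B are d-separated given the empty set, but conditioning on C connects them, which is exactly the "head-to-head meeting" pattern the tests above refer to.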
In this study, we propose a machine-learning-based approach to identifying the modal parameters from output-only data for structural health monitoring (SHM) that makes full use of the independence of modal responses and the principles of machine learning. By taking advantage of the independence of each mode, we apply unsupervised learning so that training the deep neural network becomes a process of modal separation. A self-coding deep neural network is designed to identify the structural modal parameters from the vibration data of structures. The mixture signals, that is, the structural response data, are used as the input to the neural network. We then use a complex cost function to constrain the training process so that the output of the third layer yields the desired modal responses and the weights of the last two layers correspond to the mode shapes. Training the deep neural network is essentially a nonlinear objective-function optimization problem. A novel loss function is proposed to enforce the independence feature, accounting for both uncorrelatedness and non-Gaussianity, so that the designed neural network recovers the structural modal parameters. A numerical example of a simple structure and an example of actual SHM data from a cable-stayed bridge are presented to illustrate the modal parameter identification ability of the proposed approach. The results show the approach's good capability in blindly extracting modal information from system responses.
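As an illustration of the kind of loss described above, here is a hedged numpy sketch of an independence penalty that combines uncorrelatedness (off-diagonal correlation) with a kurtosis-based non-Gaussianity reward; the exact cost function in the paper may differ:

```python
import numpy as np

def independence_loss(y):
    """Illustrative penalty on candidate modal responses y (n_modes x n_samples):
    sum of squared off-diagonal correlations (uncorrelatedness) minus the mean
    absolute excess kurtosis (rewarding non-Gaussianity)."""
    y = y - y.mean(axis=1, keepdims=True)
    y = y / y.std(axis=1, keepdims=True)
    corr = (y @ y.T) / y.shape[1]
    off_diag = corr - np.diag(np.diag(corr))
    uncorrelation = np.sum(off_diag ** 2)
    # Excess kurtosis per row; larger |kurtosis| = farther from Gaussian.
    kurt = np.mean(y ** 4, axis=1) - 3.0
    non_gaussianity = -np.mean(np.abs(kurt))
    return uncorrelation + non_gaussianity

# Two harmonic "modes" and a linear mixture of them (a toy structural response).
t = np.linspace(0, 10, 2000)
modes = np.vstack([np.sin(2 * np.pi * 1.5 * t), np.sin(2 * np.pi * 3.7 * t)])
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ modes
print(independence_loss(modes) < independence_loss(mixed))  # separated modes score lower
```

Minimizing such a loss over the network output pushes it toward separated, independent modal responses, which is the mechanism the abstract describes.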
Deep generative models reproduce complex empirical data but cannot extrapolate to novel environments. An intuitive idea for promoting extrapolation capabilities is to give the architecture the modular structure of a causal graphical model, in which one can intervene on each module independently of the others in the graph. We develop a framework to formalize this intuition, using the principle of Independent Causal Mechanisms, and show how over-parameterization of generative neural networks can hinder extrapolation capabilities. Our experiments on the generation of human faces show that successive layers of a generator architecture implement independent mechanisms to some extent, allowing meaningful extrapolations. Finally, we illustrate that independence of mechanisms may be enforced during training to improve extrapolation.
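The idea of intervening on one module while leaving the others fixed can be shown with a deliberately tiny toy generator; all names here are hypothetical and stand in for the layers of a real generative network:

```python
# Toy "generator" factored into independent mechanisms, in the spirit of
# Independent Causal Mechanisms: each module sets one aspect of the output.

def lighting(scene, brightness=1.0):
    return {**scene, "brightness": brightness}

def pose(scene, angle=0):
    return {**scene, "angle": angle}

def generate(modules, scene=None):
    scene = scene or {}
    for m in modules:
        scene = m(scene)
    return scene

base = [lighting, pose]
print(generate(base))  # {'brightness': 1.0, 'angle': 0}

# Intervention: swap only the lighting mechanism; pose is untouched.
intervened = [lambda s: lighting(s, brightness=2.5), pose]
print(generate(intervened))  # {'brightness': 2.5, 'angle': 0}
```

Because the modules do not share parameters, the intervention changes exactly one factor of the output, which is the property the abstract's framework formalizes for neural generators.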
Several important families of computational and statistical results in machine learning and randomized algorithms rely on uniform bounds on quadratic forms of random vectors or matrices. Such results include the Johnson-Lindenstrauss (J-L) Lemma, the Restricted Isometry Property (RIP), randomized sketching algorithms, and approximate linear algebra. The existing results critically depend on statistical independence, e.g., independent entries for random vectors and independent rows for random matrices, which prevents their use in dependent or adaptive modeling settings. In this paper, we show that such independence is in fact not needed for such results, which continue to hold under fairly general dependence structures. In particular, we present uniform bounds on random quadratic forms of stochastic processes which are conditionally independent and sub-Gaussian given another (latent) process.
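The classical fully independent setting is easy to demonstrate. The following numpy sketch checks the Johnson-Lindenstrauss behavior of a projection with i.i.d. Gaussian entries; the dimensions are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 50, 10_000, 1_000               # points, ambient dim, sketch dim
X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)  # i.i.d. N(0, 1/k) entries
Y = X @ P                                 # random projection to k dims

def pairwise_sq_dists(A):
    sq = np.sum(A ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

mask = ~np.eye(n, dtype=bool)
ratios = pairwise_sq_dists(Y)[mask] / pairwise_sq_dists(X)[mask]
print(ratios.min() > 0.7 and ratios.max() < 1.3)  # distances preserved within 30%
```

Independence of the entries of `P` is what classical proofs use to get sub-Gaussian concentration of each squared distance; the paper's point is that conditional independence given a latent process suffices for bounds of this type.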
We aim to separate the generative factors of data into two latent vectors in a variational autoencoder. One vector captures class factors relevant to target classification tasks, while the other vector captures style factors covering the remaining information. To learn the discrete class features, we introduce supervision using a small amount of labeled data, which simply yet effectively reduces the effort required for the hyperparameter tuning performed in existing unsupervised methods. Furthermore, we introduce a learning objective to encourage statistical independence between the vectors. We show that (i) this vector independence term arises naturally when the evidence lower bound is decomposed with multiple latent vectors, and (ii) encouraging such independence along with reducing the total correlation within the vectors enhances disentanglement performance. Experiments conducted on several image datasets demonstrate that the disentanglement achieved via our method can improve classification performance and generation controllability.
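One simple way to encourage statistical independence between two latent vectors, shown here as an illustrative stand-in for the paper's objective rather than its exact term, is to penalize their cross-covariance on each mini-batch:

```python
import numpy as np

def cross_covariance_penalty(c, s):
    """Illustrative independence term: squared Frobenius norm of the
    cross-covariance between class codes c and style codes s
    (batch x dim arrays). Near zero when the vectors are uncorrelated."""
    c = c - c.mean(axis=0)
    s = s - s.mean(axis=0)
    cov = c.T @ s / (len(c) - 1)
    return np.sum(cov ** 2)

rng = np.random.default_rng(1)
style = rng.normal(size=(512, 4))
indep_class = rng.normal(size=(512, 3))                       # independent of style
dep_class = style[:, :3] + 0.1 * rng.normal(size=(512, 3))    # leaks style information
print(cross_covariance_penalty(indep_class, style)
      < cross_covariance_penalty(dep_class, style))
```

Adding such a penalty to the ELBO discourages class information from leaking into the style vector, which is the disentanglement effect the abstract reports.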
The European Parliament's proposal to create a new legal status for artificial intelligence (AI) and robots brought into focus the idea of electronic legal personhood. This discussion, however, is hugely controversial. While some scholars argue that the proposed status could contribute to the coherence of the legal system, others say that it is neither beneficial nor desirable. Against this background, we conducted a survey (N=3315) to understand online users' perceptions of the legal personhood of AI and robots. We observed how the participants assigned responsibility, awareness, and punishment to AI, robots, humans, and various entities that could be held liable under existing doctrines. We also asked whether the participants thought that punishing electronic agents fulfills the same legal and social functions as human punishment. The results suggest that even though people do not assign any mental state to electronic agents and are not willing to grant AI and robots physical independence or assets, which are the prerequisites of criminal or civil liability, they do consider them responsible for their actions and worthy of punishment. The participants also did not think that punishment or liability of these entities would achieve the primary functions of punishment, leading to what we define as the punishment gap. Therefore, before we recognize electronic legal personhood, we must first discuss proper methods of satisfying the general population's demand for punishment.
In the univariate case, we show that by comparing the individual complexities of the univariate cause and effect, one can identify which is which without considering their interaction at all. In our framework, complexities are captured by the reconstruction error of an autoencoder that operates on the quantiles of the distribution. Comparing the reconstruction errors of the two autoencoders, one for each variable, is shown to perform surprisingly well on the accepted cause-effect directionality benchmarks. Hence, the decision as to which of the two is the cause and which is the effect may be based not on causality but on complexity. In the multivariate case, where one can ensure that the complexities of the cause and effect are balanced, we propose a new adversarial training method that mimics the disentangled structure of the causal model. We prove that in the multidimensional case, such modeling is likely to fit the data only in the direction of causality. Furthermore, a uniqueness result shows that the learned model is able to identify the underlying causal and residual (noise) components. Our multidimensional method outperforms existing methods on both synthetic and real-world datasets.
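A hedged stand-in for the quantile-based complexity comparison: instead of training an autoencoder, fit a low-order polynomial to each variable's empirical quantile function and treat the residual as the reconstruction error. The polynomial proxy, the variable names, and the toy cause-effect pair below are all illustrative, not the paper's setup:

```python
import numpy as np

def complexity(x, degree=3, n_q=64):
    """Reconstruction error of a low-order polynomial fitted to the
    empirical quantile function of x, a crude proxy for the paper's
    quantile autoencoder score."""
    q = np.quantile(x, np.linspace(0.01, 0.99, n_q))
    q = (q - q.mean()) / q.std()
    u = np.linspace(-1, 1, n_q)
    coef = np.polyfit(u, q, degree)
    return np.mean((np.polyval(coef, u) - q) ** 2)

rng = np.random.default_rng(7)
cause = rng.uniform(-1, 1, 5000)                      # simple marginal
effect = cause ** 5 + 0.05 * rng.normal(size=5000)    # marginal shaped by the mechanism
print(complexity(cause) < complexity(effect))         # the simpler variable is called the cause
```

The decision rule is then exactly the one described above: declare the variable with the lower reconstruction error the cause, without ever modeling the interaction between the two.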