

Legal Issues Raised by Deploying AI in Healthcare

#artificialintelligence

The theory is that the law should deal with like situations in like ways. In some respects, however, artificial intelligence, and especially machine learning, is virtually unprecedented, so the law is struggling with how to deal with it, or soon will be. Consider a few of the difficulties the law will probably need to address: Who will pay for healthcare services that depend on AI, and who will be entitled to such payments? Will those payments be keyed to "value," the currently orthodox yardstick?


r/MachineLearning - [R] OpenAI opensources Jukebox, a neural net that generates music

#artificialintelligence

I'm very glad that the article includes a "Limitations" section, because while these samples seem miraculous to most untrained listeners (and even to trained ones), what is actually happening is a more impressive version of what has already been available. Specifically, Jukebox produces locally coherent sounds that are recognizable as "music," but over longer horizons it loses large-scale structure. They mention this themselves, and rightly so. While this is very impressive, it is primarily an exercise in how nice they can make their short-term "sentences" sound (to borrow an analogy from speech synthesis). The broader challenge of long-term structure and musical form (an analogy here might be novel-length narrative structure) remains an open problem.


Symmetry as an Organizing Principle for Geometric Intelligence

arXiv.org Artificial Intelligence

The exploration of geometrical patterns stimulates imagination and encourages abstract reasoning, which is a distinctive feature of human intelligence. In cognitive science, Gestalt principles such as symmetry have often been used to explain significant aspects of human perception. We present a computational technique for building artificial intelligence (AI) agents that use symmetry as the organizing principle for addressing Dehaene's test of geometric intelligence (Dehaene et al., 2006). The performance of our model is on par with extant AI models of problem solving on Dehaene's test and appears correlated with some elements of human behavior on the same test.
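
As a rough illustration of treating symmetry as a decision criterion, the sketch below scores small binary grids by mirror symmetry and picks the least symmetric one as the odd one out, in the spirit of Dehaene-style panels. The grids and the "least symmetric" rule are illustrative assumptions, not the paper's actual model.

    # Illustrative sketch: symmetry as an odd-one-out criterion (not the paper's model).
    import numpy as np

    def symmetry_score(grid):
        """Fraction of cells preserved under a left-right mirror reflection."""
        return float((grid == np.fliplr(grid)).mean())

    panel = [
        np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]]),  # mirror-symmetric
        np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]]),  # mirror-symmetric
        np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]),  # mirror-symmetric
        np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),  # diagonal only: the odd one out
    ]

    odd_one_out = min(range(len(panel)), key=lambda i: symmetry_score(panel[i]))
    print(odd_one_out)  # 3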


Compass-aligned Distributional Embeddings for Studying Semantic Differences across Corpora

arXiv.org Artificial Intelligence

Word2vec is one of the most widely used algorithms for generating word embeddings because of its good mix of efficiency, quality of the generated representations, and cognitive grounding. However, word meaning is not static and depends on the context in which words are used. Differences in word meaning that depend on time, location, topic, and other factors can be studied by analyzing embeddings generated from different corpora in collections that are representative of these factors. For example, language evolution can be studied using a collection of news articles published in different time periods. In this paper, we present a general framework to support cross-corpora language studies with word embeddings, where embeddings generated from different corpora can be compared to find correspondences and differences in meaning across the corpora. CADE is the core component of our framework and solves the key problem of aligning the embeddings generated from different corpora. In particular, we focus on providing solid evidence about the effectiveness, generality, and robustness of CADE. To this end, we conduct quantitative and qualitative experiments in different domains, from temporal word embeddings to language localization and topical analysis. The results of our experiments suggest that CADE achieves state-of-the-art or superior performance on tasks where several competing approaches are available, while providing a general method that can be used in a variety of domains. Finally, our experiments shed light on the conditions under which the alignment is reliable, which depend substantially on the degree of cross-corpora vocabulary overlap.
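
As a rough sketch of the underlying problem CADE addresses, the snippet below trains two gensim Word2Vec models on toy corpora and aligns one space to the other with an orthogonal Procrustes rotation over the shared vocabulary, a common baseline for cross-corpora comparison. The corpora and the drift score are illustrative assumptions; this is not the CADE implementation itself.

    # Illustrative baseline (not CADE): align two Word2Vec spaces trained on
    # different corpora via orthogonal Procrustes, then measure how far a word moved.
    import numpy as np
    from gensim.models import Word2Vec

    # Hypothetical corpora: lists of tokenized sentences from two time periods.
    corpus_1990s = [["the", "web", "is", "new"], ["mail", "arrives", "by", "post"]]
    corpus_2010s = [["the", "web", "is", "everywhere"], ["mail", "arrives", "by", "email"]]

    m1 = Word2Vec(corpus_1990s, vector_size=50, window=3, min_count=1, epochs=50, seed=1)
    m2 = Word2Vec(corpus_2010s, vector_size=50, window=3, min_count=1, epochs=50, seed=1)

    shared = [w for w in m1.wv.index_to_key if w in m2.wv.key_to_index]
    A = np.stack([m1.wv[w] for w in shared])   # reference space
    B = np.stack([m2.wv[w] for w in shared])   # space to be rotated

    # Orthogonal Procrustes: rotation R minimizing ||B R - A||_F.
    u, _, vt = np.linalg.svd(B.T @ A)
    R = u @ vt

    def drift(word):
        """Cosine distance between a word's aligned representations."""
        v1, v2 = m1.wv[word], m2.wv[word] @ R
        return 1 - v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))

    print(drift("mail"))  # larger values suggest the word's usage shifted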


Neural Analogical Matching

arXiv.org Artificial Intelligence

Analogy is core to human cognition. It allows us to solve problems based on prior experience, it governs the way we conceptualize new information, and it even influences our visual perception. The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence, resulting in data-efficient models that learn and reason in human-like ways. While analogy and deep learning have generally been considered independently of one another, the integration of the two lines of research seems like a promising step towards more robust and efficient learning techniques. As a first step towards such an integration, we introduce the Analogical Matching Network, a neural architecture that learns to produce analogies between structured, symbolic representations that are largely consistent with the principles of Structure-Mapping Theory.
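
To make the notion of structural matching concrete, here is a deliberately tiny brute-force sketch of the kind of alignment Structure-Mapping Theory describes: entities in a base and a target domain are mapped so that relational facts line up. The domains and the scoring rule are illustrative assumptions; this is neither the SME algorithm nor the proposed neural network.

    # Toy structural matching: map base entities to target entities so that
    # relational facts correspond. Brute force over tiny domains, for illustration only.
    from itertools import permutations

    # Facts as (relation, arg1, arg2) triples.
    base = [("attracts", "sun", "planet"), ("more_massive", "sun", "planet")]
    target = [("attracts", "nucleus", "electron"), ("more_massive", "nucleus", "electron")]

    base_entities = sorted({a for _, *args in base for a in args})
    target_entities = sorted({a for _, *args in target for a in args})

    def score(mapping):
        """Count base facts whose image under the mapping is also a target fact."""
        mapped = {(r, mapping[x], mapping[y]) for r, x, y in base}
        return len(mapped & set(target))

    best = max(
        (dict(zip(base_entities, perm)) for perm in permutations(target_entities)),
        key=score,
    )
    print(best)  # {'planet': 'electron', 'sun': 'nucleus'}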


Using Automated Theorem Provers for Mistake Diagnosis in the Didactics of Mathematics

arXiv.org Artificial Intelligence

The Diproche system, an automated proof checker for natural-language proofs specifically adapted to the context of exercises for beginning students and similar to the Naproche system by Koepke, Schröder, Cramer and others, uses a modification of an automated theorem prover that applies common formal fallacies instead of sound deduction rules for mistake diagnosis. We briefly describe the concept of such an 'Anti-ATP' and explain the basic techniques used in its implementation. Learning how to prove is one of the major obstacles in the introductory phase of university education in mathematics. It requires practice, i.e. exercises, and correcting those exercises is an expensive and time-consuming task. This limits the extent to which corrections can feed back into the process of solving proof exercises.
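
As a toy illustration of the 'Anti-ATP' idea, the sketch below checks a student's propositional step not against sound deduction rules but against a small catalogue of fallacy schemes. The encoding and the two schemes shown are illustrative assumptions, not the Diproche implementation.

    # Toy 'Anti-ATP' sketch: diagnose a proof step by matching it against known
    # fallacy patterns. Formulas are nested tuples, e.g. ("->", "P", "Q") for "P implies Q".

    def affirming_the_consequent(premises, conclusion):
        """From (P -> Q) and Q, the student wrongly concludes P."""
        return any(
            p[0] == "->" and p[2] in premises and conclusion == p[1]
            for p in premises
            if isinstance(p, tuple)
        )

    def denying_the_antecedent(premises, conclusion):
        """From (P -> Q) and not P, the student wrongly concludes not Q."""
        return any(
            p[0] == "->" and ("not", p[1]) in premises and conclusion == ("not", p[2])
            for p in premises
            if isinstance(p, tuple)
        )

    FALLACY_RULES = {
        "affirming the consequent": affirming_the_consequent,
        "denying the antecedent": denying_the_antecedent,
    }

    def diagnose(premises, conclusion):
        """Return the names of fallacy schemes that explain the student's step."""
        return [name for name, rule in FALLACY_RULES.items() if rule(premises, conclusion)]

    # Example: from "P -> Q" and "Q", a beginner concludes "P".
    print(diagnose([("->", "P", "Q"), "Q"], "P"))  # ['affirming the consequent']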


Unleashing The Real Power Of Data

#artificialintelligence

Conferences and vendor marketing materials are full of trite and banal sayings. Say something that seems profound, and perhaps your audience will think that everything else you have to say is just as profound. One of the common refrains you might hear at many an AI and data-focused event is the pithy statement that "data is the new oil," as if that's supposed to mean something profound. The first time I heard this expression (about a decade ago, I should add), it was an interesting point to make about how "important" and "strategic" data is. But every time I've heard it since, it's been bandied about to imply something more than it is.


An Analysis of Word2Vec for the Italian Language

arXiv.org Machine Learning

Word representation is fundamental in NLP tasks, because it is precisely the encoding of semantic closeness between words that makes it possible to teach a machine to understand text. Despite the spread of word-embedding methods, achievements in languages other than English are still few. In this work, analysing the semantic capacity of the Word2Vec algorithm, we produce an embedding for the Italian language. Parameter settings such as the number of epochs, the size of the context window, and the number of negative samples are explored. Keywords: Word2Vec, Word Embedding, NLP. In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding.
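
The kind of hyperparameter exploration described above can be sketched with gensim; the corpus path, the parameter grid, and the sanity check below are placeholders rather than the authors' actual setup.

    # Sketch of a Word2Vec hyperparameter sweep over the parameters mentioned above.
    # The Italian corpus file is a placeholder assumption.
    from gensim.models import Word2Vec

    def tokenized_sentences():
        # Assumption: one pre-tokenized Italian sentence per line in this file.
        with open("itwiki_tokenized.txt", encoding="utf-8") as f:
            for line in f:
                yield line.split()

    corpus = list(tokenized_sentences())  # gensim needs a restartable iterable

    for window in (5, 10):
        for negative in (5, 20):
            for epochs in (5, 20):
                model = Word2Vec(
                    sentences=corpus,
                    vector_size=300,   # embedding dimension
                    window=window,     # size of the context window
                    negative=negative, # negative samples per positive pair
                    epochs=epochs,     # training passes over the corpus
                    min_count=5,
                    workers=4,
                    sg=1,              # skip-gram variant
                )
                # Rough sanity check: nearest neighbours of the most frequent word.
                probe = model.wv.index_to_key[0]
                print(window, negative, epochs, model.wv.most_similar(probe, topn=3))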


Coding Dopamine: DeepMind Brings AI To The Footsteps Of Neuroscience

#artificialintelligence

DeepMind has been trying to bridge the gap between AI and biology for quite some time now. All their endeavours revolve around solving the problem of intelligence in machines. Tasks that are straightforward and trivial for humans can be very sophisticated, and almost impossible, for machines. While human brains are hardcoded with millions of years of learning, machines face many limitations when it comes to data. They can only be fed data that has been documented or prepared by humans, the magnitude of which is insignificant compared with what humans have accumulated over their history.


Learning to See Analogies: A Connectionist Exploration

arXiv.org Artificial Intelligence

This dissertation explores the integration of learning and analogy-making through the development of a computer program, called Analogator, that learns to make analogies by example. By "seeing" many different analogy problems, along with possible solutions, Analogator gradually develops an ability to make new analogies. That is, it learns to make analogies by analogy. This approach stands in contrast to most existing research on analogy-making, which typically assumes the a priori existence of analogical mechanisms within a model. The present research extends standard connectionist methodologies by developing a specialized associative training procedure for a recurrent network architecture. The network is trained to divide input scenes (or situations) into appropriate figure and ground components. Seeing one scene in terms of a particular figure and ground provides the context for seeing another in an analogous fashion. After training, the model is able to make new analogies between novel situations. Analogator has much in common with lower-level perceptual models of categorization and recognition; it thus serves as a unifying framework encompassing both high-level analogical learning and low-level perception. This approach is compared and contrasted with other computational models of analogy-making. The model's training and generalization performance is examined, and limitations are discussed.
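
A minimal sketch of the kind of associative set-up described above might look as follows: a recurrent network is shown a source scene together with its figure/ground answer and must produce the analogous split for a target scene. The scene size, architecture, and training details are illustrative assumptions, not the Analogator model.

    # Toy figure/ground analogy set-up with a recurrent network (not Analogator).
    import torch
    import torch.nn as nn

    SCENE = 9  # scenes as flat 3x3 binary grids

    class FigureGround(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(input_size=SCENE, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, SCENE)

        def forward(self, source, source_answer, target):
            # Present source, its figure/ground answer, and the target as a short
            # sequence; the final hidden state predicts the target's figure mask.
            seq = torch.stack([source, source_answer, target], dim=1)
            _, h = self.rnn(seq)
            return self.head(h[-1])  # per-cell logits: figure vs. ground

    model = FigureGround()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # One synthetic training step; here the "answer" is just the source itself.
    source = torch.randint(0, 2, (1, SCENE)).float()
    target = torch.randint(0, 2, (1, SCENE)).float()
    opt.zero_grad()
    loss = loss_fn(model(source, source, target), target)
    loss.backward()
    opt.step()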