judgment


MIT researchers develop robot that can learn to identify objects based on sight and touch

Daily Mail - Science & tech

Robots are getting closer to being able to see and feel the physical world. A team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed AI software that can predict what an object will look like or feel like by using 'sight' and 'touch.' The study could help humans and machines work together more seamlessly in the workplace, researchers say. The findings also bring robots closer to emulating a common function of the human brain: when humans look at an object, they can often anticipate what it will feel like, e.g. hard, soft, or flexible.


What Can AI Do to Help Us Age Better?

#artificialintelligence

New technology devices and apps pop up as abundantly as summer weeds here in Silicon Valley. Chip-enhanced products offer to satisfy almost every need imaginable. Prompts from your smart refrigerator tell you to buy more milk. With a voice command, music plays to facilitate meditation, thanks to your smart -- always on -- helper who listens for your next query from a canister on your kitchen counter; you know, the one with a woman's voice and name. In this glut of offerings, how do you select what is truly useful from what is simply the latest "smart" thing?


The Computational Structure of Unintentional Meaning

arXiv.org Artificial Intelligence

Speech-acts can have literal meaning as well as pragmatic meaning, but these both involve consequences typically intended by a speaker. Speech-acts can also have unintentional meaning, in which what is conveyed goes above and beyond what was intended. Here, we present a Bayesian analysis of how, to a listener, the meaning of an utterance can significantly differ from a speaker's intended meaning. Our model emphasizes how comprehending the intentional and unintentional meaning of speech-acts requires listeners to engage in sophisticated model-based perspective-taking and reasoning about the history of the state of the world, each other's actions, and each other's observations. To test our model, we have human participants make judgments about vignettes where speakers make utterances that could be interpreted as intentional insults or unintentional faux pas. In elucidating the mechanics of speech-acts with unintentional meanings, our account provides insight into how communication both functions and malfunctions.
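The distinction between an intentional insult and an unintentional faux pas hinges on what the listener infers about the speaker's knowledge. A minimal Bayesian sketch of that inference (not the paper's full model; all probabilities here are hypothetical) is:

```python
# Minimal Bayesian sketch of inferring whether an offensive utterance was
# intentional (insult) or unintentional (faux pas). All numbers hypothetical.

def posterior_knew(p_knew, p_utt_given_knew, p_utt_given_unknew):
    """P(speaker knew the sensitive fact | they made the utterance)."""
    num = p_utt_given_knew * p_knew
    den = num + p_utt_given_unknew * (1.0 - p_knew)
    return num / den

# A speaker unaware of the sensitive fact is more likely to blunder into
# the remark, so the utterance is weak evidence of ignorance:
p = posterior_knew(p_knew=0.5, p_utt_given_knew=0.1, p_utt_given_unknew=0.6)
print(round(p, 3))  # low posterior -> the listener reads it as a faux pas
```

The paper's account adds reasoning over the history of the world state, actions, and each agent's observations; this sketch isolates only the final Bayesian step.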


Metric Learning for Individual Fairness

arXiv.org Machine Learning

There has been much discussion recently about how fairness should be measured or enforced in classification. Individual Fairness [Dwork, Hardt, Pitassi, Reingold, Zemel, 2012], which requires that similar individuals be treated similarly, is a highly appealing definition as it gives strong guarantees on treatment of individuals. Unfortunately, the need for a task-specific similarity metric has prevented its use in practice. In this work, we propose a solution to the problem of approximating a metric for Individual Fairness based on human judgments. Our model assumes that we have access to a human fairness arbiter, who can answer a limited set of queries concerning similarity of individuals for a particular task, is free of explicit biases and possesses sufficient domain knowledge to evaluate similarity. Our contributions include definitions for metric approximation relevant for Individual Fairness, constructions for approximations from a limited number of realistic queries to the arbiter on a sample of individuals, and learning procedures to construct hypotheses for metric approximations which generalize to unseen samples under certain assumptions of learnability of distance threshold functions.
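One of the simplest hypothesis classes mentioned here is a distance threshold function. A toy sketch of fitting one from a small number of arbiter queries (the arbiter, features, and distance are all stand-in assumptions, not the paper's constructions) might look like:

```python
# Hypothetical sketch: approximate an Individual Fairness metric by querying
# an arbiter on pairs of sampled individuals and fitting a distance threshold.
import itertools

def arbiter(a, b):
    # Stand-in for the human fairness arbiter: here, "similar" means the
    # task-relevant feature (index 0) differs by less than 1.0.
    return abs(a[0] - b[0]) < 1.0

def learn_threshold(individuals, dist):
    """Pick the distance threshold that best matches arbiter judgments."""
    pairs = list(itertools.combinations(individuals, 2))
    labels = [arbiter(a, b) for a, b in pairs]
    dists = [dist(a, b) for a, b in pairs]
    best_t, best_acc = 0.0, -1.0
    for t in sorted(dists):  # candidate thresholds: observed distances
        acc = sum((d <= t) == y for d, y in zip(dists, labels)) / len(pairs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

people = [(0.0, 5.0), (0.5, -3.0), (2.0, 0.0), (2.3, 9.0)]
t, acc = learn_threshold(people, lambda a, b: abs(a[0] - b[0]))
```

Whether such a threshold generalizes to unseen samples is exactly the learnability question the paper formalizes.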


Morality In The Age Of Artificial Intelligence: Why Do We Need Wisdom To Lead In The Future?

#artificialintelligence

Have you ever come across someone who possesses an impressive quality of inner cohesion? The kind that made you want to be around them for longer than you had intended. Their energy and its healing impact on your soul made you wonder 'who is this person?', 'what do they do?', 'how do I become more like them?' These are the kind of wise hearts that give many, near and far, the breath of life every day. These are the kind of wise hearts I aspire to become as a future leader... Wisdom is the sixth of eight core human attributes that, in our research with Stanford University's CCARE, we found 21st-century leaders should possess on their path to developing resilience and leading adaptable organizations.


Automatic Evaluation of Local Topic Quality

arXiv.org Machine Learning

Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improve the quality of these token-level topic assignments, have been evaluated only with respect to global metrics. We propose a task designed to elicit human judgments of token-level topic assignments. We use a variety of topic model types and parameters and discover that global metrics agree poorly with human assignments. Since human evaluation is expensive, we propose a variety of automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments from the task on several datasets. We show that an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. We suggest that this new metric, which we call consistency, be adopted alongside global metrics such as topic coherence when evaluating new topic models.
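The switch-based idea is easy to illustrate. A minimal sketch (the paper's exact definition of consistency may differ; this captures the intuition that frequent adjacent-token topic switches signal poor local assignments):

```python
# Sketch of the "consistency" idea: score token-level topic assignments by
# the fraction of adjacent-token topic switches (fewer = more consistent).

def consistency(assignments):
    """1 - (topic switches / adjacent pairs), for one document's ordered
    list of per-token topic ids."""
    if len(assignments) < 2:
        return 1.0
    switches = sum(a != b for a, b in zip(assignments, assignments[1:]))
    return 1.0 - switches / (len(assignments) - 1)

print(consistency([0, 0, 0, 1, 1]))  # one switch over four pairs -> 0.75
print(consistency([0, 1, 0, 1, 0]))  # a switch at every pair -> 0.0
```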


Markov versus quantum dynamic models of belief change during evidence monitoring

arXiv.org Artificial Intelligence

Two different dynamic models for belief change during evidence monitoring were evaluated: Markov and quantum. They were empirically tested with an experiment in which participants monitored evidence for an initial period of time, made a probability rating, then monitored more evidence, before making a second rating. The models were qualitatively tested by manipulating the time intervals in a manner that provided a test for interference effects of the first rating on the second. The Markov model predicted no interference whereas the quantum model predicted interference. A quantitative comparison of the two models was also carried out using a generalization criterion method: the parameters were fit to data from one set of time intervals, and then these same parameters were used to predict data from another set of time intervals. The results indicated that some features of both Markov and quantum models are needed to accurately account for the results.
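The Markov side of the contrast is easy to see in code: the belief state is a probability vector evolved by a transition matrix, so reading out a rating mid-stream leaves the subsequent dynamics untouched. A toy sketch (all parameters hypothetical, not fit to the experiment):

```python
# Toy Markov evidence-monitoring sketch: an intermediate probability rating
# is a passive read-out, so it cannot interfere with the second rating.

def step(belief, T):
    """One time step: new_belief[j] = sum_i belief[i] * T[i][j]."""
    n = len(T[0])
    return [sum(belief[i] * T[i][j] for i in range(len(belief)))
            for j in range(n)]

T = [[0.9, 0.1],
     [0.2, 0.8]]          # hypothetical transition probabilities
belief = [0.5, 0.5]

for _ in range(2):        # monitor evidence for an initial period
    belief = step(belief, T)
rating_1 = belief[0]      # first rating: read-out only, state unchanged
for _ in range(2):        # monitor more evidence
    belief = step(belief, T)
rating_2 = belief[0]      # second rating
```

In the quantum model, by contrast, the first rating acts as a measurement that changes the state, so the second rating depends on whether the first was taken; that is the interference effect the time-interval manipulation tests for.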


Explaining intuitive difficulty judgments by modeling physical effort and risk

arXiv.org Artificial Intelligence

The ability to estimate task difficulty is critical for many real-world decisions such as setting appropriate goals for ourselves or appreciating others' accomplishments. Here we give a computational account of how humans judge the difficulty of a range of physical construction tasks (e.g., moving 10 loose blocks from their initial configuration to their target configuration, such as a vertical tower) by quantifying two key factors that influence construction difficulty: physical effort and physical risk. Physical effort captures the minimal work needed to transport all objects to their final positions, and is computed using a hybrid task-and-motion planner. Physical risk corresponds to stability of the structure, and is computed using noisy physics simulations to capture the costs for precision (e.g., attention, coordination, fine motor movements) required for success. We show that the full effort-risk model captures human estimates of difficulty and construction time better than either component alone.
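The paper computes effort with a hybrid task-and-motion planner; a far cruder lower bound, offered here only to make the effort term concrete (the function and block representation are assumptions, not the authors' implementation), is the gravitational work needed to lift each block to its target height:

```python
# Hypothetical sketch of the physical-effort term: approximate minimal work
# as the gravitational work to lift each block from its initial height to
# its target height (horizontal transport treated as free).

G = 9.81  # gravitational acceleration, m/s^2

def physical_effort(blocks):
    """blocks: list of (mass_kg, initial_height_m, target_height_m)."""
    return sum(m * G * max(0.0, h_t - h_0) for m, h_0, h_t in blocks)

# Stacking three 1 kg blocks, each 0.1 m tall, into a tower from the floor:
tower = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.1), (1.0, 0.0, 0.2)]
print(physical_effort(tower))  # roughly 2.94 joules
```

The risk term has no such closed form: it requires noisy physics simulation of the assembled structure, which is why the paper pairs the planner with a simulator.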


Artificial Consciousness and Security

arXiv.org Artificial Intelligence

This paper describes a possible way to improve computer security by implementing a program with three features related to a weak notion of artificial consciousness: (partial) self-monitoring, the ability to compute the truth of quantifier-free propositions, and the ability to communicate with the user. The integrity of the program could be enhanced by using a trusted-computing approach, that is to say a hardware module that is at the root of a chain of trust. This paper outlines a possible approach rather than an implementation (which would need further work), but the author believes that an implementation using current processors, a debugger, a monitoring program and a trusted processing module is currently possible.
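Of the three features, evaluating quantifier-free propositions is the most mechanical. A minimal sketch (the state keys, proposition encoding, and operators are all assumptions for illustration, not the paper's design):

```python
# Sketch of a quantifier-free proposition checker over a program's own
# self-monitored state: boolean combinations of atomic comparisons, with
# no forall/exists, so evaluation is a simple recursive walk.

state = {"files_open": 3, "integrity_ok": True}  # hypothetical monitoring data

def evaluate(prop):
    """prop: ('atom', key, op, value) or ('not', p) / ('and', ...) / ('or', ...)."""
    tag = prop[0]
    if tag == "atom":
        _, key, op, value = prop
        return {"==": state[key] == value,
                "<":  state[key] < value,
                ">":  state[key] > value}[op]
    if tag == "not":
        return not evaluate(prop[1])
    if tag == "and":
        return all(evaluate(p) for p in prop[1:])
    return any(evaluate(p) for p in prop[1:])  # 'or'

ok = evaluate(("and", ("atom", "integrity_ok", "==", True),
                      ("atom", "files_open", "<", 10)))
```

Restricting to quantifier-free formulas keeps evaluation decidable and cheap, which matters if the checker itself runs inside the trusted chain.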


Survey on Evaluation Methods for Dialogue Systems

arXiv.org Artificial Intelligence

In this paper we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires. However, this tends to be very cost- and time-intensive. Thus, much work has been put into finding methods that reduce the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for its dialogue systems and then presenting the evaluation methods for that class.