Experimental Assessment of Aggregation Principles in Argumentation-enabled Collective Intelligence

arXiv.org Artificial Intelligence

On the Web, there is a constant need to aggregate opinions from the crowd (in posts, social networks, forums, etc.). Different mechanisms have been implemented to capture these opinions, such as Facebook's "Like", Twitter's "Favorite", thumbs-up/down, flagging, and so on. However, in more contested domains (e.g., Wikipedia, political discussion, and climate change discussion) these mechanisms are not sufficient, since they treat each issue independently without considering the relationships between different claims. We can view a set of conflicting arguments as a graph in which the nodes represent arguments and the arcs between these nodes represent the defeat relation. A group of people can then collectively evaluate such graphs. To do this, the group must use a rule to aggregate their individual opinions about the entire argument graph. Here, we present the first experimental evaluation of different principles commonly employed by aggregation rules presented in the literature. We use randomized controlled experiments to investigate which principles people consider better at aggregating opinions under different conditions. Our analysis reveals a number of factors, not captured by traditional formal models, that play an important role in determining the efficacy of aggregation. These results help bring formal models of argumentation closer to real-world application.
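The setup above can be made concrete with a minimal sketch: a defeat graph over arguments, one accept/reject labeling per participant, and a simple per-argument majority vote as the aggregation rule. All names and data here are hypothetical illustrations, not the rules actually tested in the paper.

```python
# Hypothetical illustration: arguments A, B, C with defeat arcs;
# each participant labels every argument "in" (accepted) or "out"
# (rejected); the group aggregates with a per-argument majority vote.
from collections import Counter

defeats = {("B", "A"), ("C", "B")}  # B defeats A, C defeats B

votes = [  # one labeling of the whole graph per participant
    {"A": "in", "B": "out", "C": "in"},
    {"A": "in", "B": "out", "C": "in"},
    {"A": "out", "B": "in", "C": "in"},
]

def majority(votes):
    """Aggregate labelings argument by argument."""
    result = {}
    for arg in votes[0]:
        tally = Counter(v[arg] for v in votes)
        result[arg] = tally.most_common(1)[0][0]
    return result

collective = majority(votes)
print(collective)  # {'A': 'in', 'B': 'out', 'C': 'in'}
```

Note that such an argument-wise vote can, in general, produce a collective labeling that is incoherent with respect to the defeat graph (e.g., two mutually defeating arguments both labeled "in"), which is precisely why the choice of aggregation principle matters.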


The Role of Embodiment and Perspective in Direction-Giving Systems

AAAI Conferences

In this paper, we describe an evaluation of the impact of embodiment, of different kinds of embodiment, and of different aspects of embodiment on direction-giving systems. We compared a robot, an embodied conversational agent (ECA), and a GPS giving directions, when these systems used speaker-perspective gestures, listener-perspective gestures, and no gestures. Results demonstrated that, while there was no difference in direction-giving performance between the robot and the ECA, and little difference in participants’ perceptions, there was a considerable effect of the type of gesture employed, and several interesting interactions between type of embodiment and aspects of embodiment.


Why Is Artificial Intelligence So Bad At Empathy?

#artificialintelligence

Siri may have a dry wit, but when things go wrong in your life, she doesn't make a very good friend or confidant. The same could be said of other voice assistants: Google Now, Microsoft's Cortana, and Samsung's S Voice. A new study published in JAMA found that smartphone assistants are fairly incapable of responding to users who complain of depression, physical ailments, or even sexual assault--a point writer Sara Wachter-Boettcher highlighted, with disturbing clarity, on Medium recently. After researchers tested 68 different phones from seven manufacturers for how they responded to expressions of anguish and requests for help, they found the following, per the study's abstract: Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language.


From Siri to sexbots: Female AI reinforces a toxic desire for passive, agreeable and easily dominated women

#artificialintelligence

A recent article titled "Why is AI Female?" made the connection that gendered labor, in service professions in particular, is fueling our expectations for gendered AI assistants and service robots. Furthermore, the author argues, this "feminizing -- and sexualizing -- of machines" signals a future with a disproportionate use of feminized VR and robots for a male-dominated sex industry. "Sex with robots is a big leap from asking Siri to set an alarm, but the fact that we've largely equated artificial intelligence with female personalities is worth examining. There are, after all, few sexualized male robots or avatars." Herbert Televox and Mr. Telelux, the early 20th century robots made by Westinghouse, were both male.


Scalable Inference for Logistic-Normal Topic Models

Neural Information Processing Systems

Logistic-normal topic models can effectively discover correlation structures among latent topics. However, their inference remains a challenge because of the non-conjugacy between the logistic-normal prior and the multinomial topic mixing proportions. Existing algorithms either make restrictive mean-field assumptions or do not scale to large applications. This paper presents a partially collapsed Gibbs sampling algorithm that, by exploiting ideas from data augmentation, approaches the provably correct distribution. To improve time efficiency, we further present a parallel implementation that can handle large-scale applications and learn the correlation structures of thousands of topics from millions of documents. Extensive empirical results demonstrate the promise of the approach.
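The correlation structure mentioned above comes from the covariance matrix of the Gaussian underlying the logistic-normal prior, which a Dirichlet prior cannot express. Below is an illustrative sketch of drawing topic proportions from that prior; the paper's actual data-augmentation sampler is not reproduced here, and the specific numbers are arbitrary.

```python
# Illustrative sketch of the logistic-normal prior over topic
# proportions: a Gaussian draw pushed through a softmax. A positive
# off-diagonal covariance entry makes topics 0 and 1 tend to co-occur.
import numpy as np

rng = np.random.default_rng(0)

mu = np.zeros(3)
Sigma = np.array([[1.0, 0.8, 0.0],
                  [0.8, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])  # topics 0 and 1 are correlated

eta = rng.multivariate_normal(mu, Sigma)   # Gaussian draw in R^3
theta = np.exp(eta) / np.exp(eta).sum()    # softmax onto the simplex

print(theta)  # a point on the probability simplex
```

Inference is hard precisely because this softmax-of-Gaussian construction is not conjugate to the multinomial likelihood, which is what the paper's data-augmentation scheme addresses.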