Apple's Siri now smarter about questions on rape, suicide, and baseball (video)

#artificialintelligence

Ever since Siri launched in fully integrated form on the iPhone 4S in 2011, digital assistants have become standard features on most modern smartphones. With competition growing from Microsoft, Google, and Amazon with Cortana, Google Now, and Echo respectively, Apple continues to update Siri in an attempt to maintain a functional edge. Google recently updated its digital assistant, Google Now, with smarter intonation and expression in its speech patterns to sound less robotic; Apple has followed with updates of its own that target Siri at specific audiences, in this case sports fans.


Experimental Assessment of Aggregation Principles in Argumentation-enabled Collective Intelligence

arXiv.org Artificial Intelligence

On the Web, there is a constant need to aggregate opinions from the crowd (in posts, social networks, forums, etc.). Different mechanisms have been implemented to capture these opinions, such as "Like" on Facebook, "Favorite" on Twitter, thumbs-up/down, flagging, and so on. However, in more contested domains (e.g. Wikipedia, political discussion, and climate-change discussion), these mechanisms are not sufficient, since they deal with each issue independently without considering the relationships between different claims. We can instead view a set of conflicting arguments as a graph in which the nodes represent arguments and the arcs between them represent the defeat relation. A group of people can then collectively evaluate such graphs. To do this, the group must use a rule that aggregates their individual opinions about the entire argument graph. Here, we present the first experimental evaluation of different principles commonly employed by aggregation rules in the literature. We use randomized controlled experiments to investigate which principles people consider better at aggregating opinions under different conditions. Our analysis reveals a number of factors, not captured by traditional formal models, that play an important role in determining the efficacy of aggregation. These results help bring formal models of argumentation closer to real-world application.
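To make the setup concrete, here is a minimal sketch of the kind of object the abstract describes: an argument graph plus per-voter labelings, aggregated with a simple per-argument majority rule. The graph, the votes, and the majority rule are all illustrative assumptions, not the paper's actual experimental materials; majority voting is just one of many aggregation rules the literature studies.

```python
# Illustrative sketch (hypothetical data): arguments are nodes, "defeats"
# arcs point from attacker to target. Each voter labels every argument
# "in" (accepted) or "out" (rejected); a simple majority rule picks, per
# argument, the label endorsed by most voters.
from collections import Counter

defeats = {"A": ["B"], "B": ["C"], "C": []}  # A defeats B, B defeats C

votes = [
    {"A": "in", "B": "out", "C": "in"},
    {"A": "in", "B": "out", "C": "in"},
    {"A": "in", "B": "in",  "C": "out"},
]

def majority_aggregate(votes):
    """Return, for each argument, the most common label across voters."""
    result = {}
    for arg in votes[0]:
        counts = Counter(v[arg] for v in votes)
        result[arg] = counts.most_common(1)[0][0]
    return result

print(majority_aggregate(votes))  # {'A': 'in', 'B': 'out', 'C': 'in'}
```

Note that argument-wise majority can produce collectively irrational outcomes (e.g. accepting both an argument and its defeater), which is exactly why the relationships between claims matter and why principled aggregation rules are studied.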


Why Is Artificial Intelligence So Bad At Empathy?

#artificialintelligence

Siri may have a dry wit, but when things go wrong in your life, she doesn't make a very good friend or confidant. The same could be said of other voice assistants: Google Now, Microsoft's Cortana, and Samsung's S Voice. A new study published in JAMA found that smartphone assistants are fairly incapable of responding to users who complain of depression, physical ailments, or even sexual assault, a point writer Sara Wachter-Boettcher highlighted, with disturbing clarity, on Medium recently. After researchers tested 68 different phones from seven manufacturers for how they responded to expressions of anguish and requests for help, they found the following, per the study's abstract: Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language.


Which company does the best job at image recognition? Microsoft, Amazon, Google, or IBM? (ZDNet)

#artificialintelligence

Sometimes recognition software is excellent at correctly categorizing certain types of images but fails completely with others. Some image recognition engines handle cats better than dogs, and some are far more descriptive in their color knowledge. But which is the best overall? Perficient Digital's image recognition accuracy study examined one of the hottest areas of machine learning, comparing Amazon AWS Rekognition, Google Vision, IBM Watson, and Microsoft Azure Computer Vision on the same set of images.


Scalable Inference for Logistic-Normal Topic Models

Neural Information Processing Systems

Logistic-normal topic models can effectively discover correlation structures among latent topics. However, their inference remains a challenge because of the non-conjugacy between the logistic-normal prior and the multinomial topic-mixing proportions. Existing algorithms either make restrictive mean-field assumptions or do not scale to large applications. This paper presents a partially collapsed Gibbs sampling algorithm that approaches the provably correct distribution by exploiting ideas from data augmentation. To improve time efficiency, we further present a parallel implementation that can handle large-scale applications and learn the correlation structures of thousands of topics from millions of documents. Extensive empirical results demonstrate the promise of the approach.
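A short sketch of the prior the abstract refers to may help: in a logistic-normal topic model, a document's topic proportions are a Gaussian draw (whose covariance can encode correlations between topics) pushed through a softmax. The softmax is exactly what breaks conjugacy with the multinomial likelihood, motivating the paper's data-augmentation Gibbs sampler. The mean, covariance factor, and topic count below are made-up values for illustration only.

```python
# Hypothetical sketch of drawing topic proportions from a logistic-normal
# prior: eta ~ N(mu, Sigma), theta = softmax(eta). Sigma is built from a
# hand-written Cholesky factor L (Sigma = L L^T) that correlates topics 0, 1.
import math
import random

random.seed(0)

mu = [0.0, 0.0, 0.0]
L = [[1.0, 0.0, 0.0],
     [0.8, 0.6, 0.0],
     [0.0, 0.0, 1.0]]

def sample_topic_proportions():
    # Correlated Gaussian draw: eta = mu + L z, z ~ N(0, I).
    z = [random.gauss(0, 1) for _ in range(3)]
    eta = [mu[i] + sum(L[i][j] * z[j] for j in range(3)) for i in range(3)]
    # Numerically stabilized softmax maps eta onto the probability simplex.
    m = max(eta)
    e = [math.exp(x - m) for x in eta]
    s = sum(e)
    return [x / s for x in e]

theta = sample_topic_proportions()
assert abs(sum(theta) - 1.0) < 1e-9  # valid topic proportions
```

Unlike a Dirichlet prior, this construction lets the covariance make some topic pairs more likely to co-occur, which is the correlation structure the paper's sampler is designed to infer at scale.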