
CES for Marketers: Alexa Wows, Virtual Reality Underwhelms

#artificialintelligence

Over the past few years, the CES trade show has become a familiar post-holidays pilgrimage for many of the country's biggest marketers. They see the event as a way to get a sneak peek at the latest gadgets and technologies that can help them engage with their customers. This year, marketing executives from companies such as Coca-Cola, Unilever, Johnson & Johnson, Campbell Soup and PepsiCo Inc. made their way to Las Vegas for the gathering. The convention was jam-packed with everything from self-driving cars to chess-playing robots to a Procter & Gamble air-freshener spray that can connect with Alphabet Inc.'s Nest to automatically release pleasant scents around the home. But one category seemed to especially win over marketers: virtual assistants.


Amazon is poorly vetting Alexa's user-submitted answers

#artificialintelligence

Alexa, Google Assistant, Siri, and Cortana can answer all sorts of questions that pop into users' heads, and they're improving every day. But what happens when a company like Amazon decides to crowdsource answers to fill gaps in its platform's knowledge? The results can range from amusing and perplexing to concerning. Alexa Answers allows any Amazon customer to submit responses to unanswered questions. When the service became generally available a few weeks ago, Amazon gave assurances that submissions would be policed through a combination of automatic and manual review.


The Arrival of Artificially Intelligent Beer

#artificialintelligence

The term "machine learning" covers a grab bag of algorithms, techniques, and technology that are by now pretty much everywhere in modern life. However, machine intelligence has recently started to be used not just for identifying problems but to build better products. Amongst the first is the world's only beers brewed with the help of machine intelligence, which went on sale a few weeks ago. The machine learning algorithms uses a combination of reinforcement learning and bayesian optimisation to assist the brewer in deciding how to change the recipe of the beer, with the algorithms learning from experience and customer feedback. Perhaps the most obvious intrusion of machine learning into the physical world is the voice recognition that drives Apple's Siri, or Amazon's Alexa.


Domain Term Extraction and Structuring via Link Analysis

AAAI Conferences

Domain ontologies contain information about the important concepts in a domain, the associated attributes and the relationships between various concepts. The manual creation of domain ontologies is an expensive and time-consuming process. In this paper, we present an approach to the automatic extraction of domain ontologies from domain-specific text. This approach uses dependency relations between terms to represent text as a graph. The graph-based ranking algorithm HITS is used to identify domain keywords and to structure them as concepts and attributes. Experiments on two different domains, digital cameras and wines, show that our method performs well in comparison to other approaches.
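To make the graph-based ranking step concrete, here is a minimal sketch that builds a directed term graph from dependency pairs and runs HITS with networkx. The example terms, the direction of the edges, and the reading of hub scores as concept-like and authority scores as attribute-like are assumptions made for illustration; the paper's actual extraction and structuring procedure is not reproduced here.

import networkx as nx

# Hypothetical (head_term, dependent_term) pairs taken from a dependency parser.
dependency_pairs = [
    ("camera", "resolution"),
    ("camera", "lens"),
    ("lens", "zoom"),
    ("camera", "battery"),
    ("battery", "life"),
]

graph = nx.DiGraph()
graph.add_edges_from(dependency_pairs)

# HITS assigns every node a hub score and an authority score.
hubs, authorities = nx.hits(graph, max_iter=200)

# Under the illustrative reading above, strong hubs point at many attributes
# (concept-like terms) while strong authorities are pointed at (attribute-like terms).
concepts = sorted(hubs, key=hubs.get, reverse=True)
attributes = sorted(authorities, key=authorities.get, reverse=True)
print("concept-like terms:  ", concepts[:3])
print("attribute-like terms:", attributes[:3])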


Learning to Memorize in Neural Task-Oriented Dialogue Systems

arXiv.org Artificial Intelligence

In this thesis, we leverage the neural copy mechanism and memory-augmented neural networks (MANNs) to address existing challenges in neural task-oriented dialogue learning. We show the effectiveness of our strategy by achieving good performance in multi-domain dialogue state tracking, retrieval-based dialogue systems, and generation-based dialogue systems. We first propose a transferable dialogue state generator (TRADE) that leverages its copy mechanism to remove the dependence on a predefined dialogue ontology and to share knowledge between domains. We also evaluate dialogue state tracking on unseen domains and show that TRADE enables zero-shot dialogue state tracking and can adapt to new domains with only a few examples without forgetting the previously learned domains. Second, we utilize MANNs to improve retrieval-based dialogue learning; they can capture sequential dependencies in dialogue and memorize long-term information. We also propose a recorded delexicalization copy strategy that replaces real entity values with ordered entity types. Our models are shown to surpass other retrieval baselines, especially when the conversation has a large number of turns. Lastly, we tackle generation-based dialogue learning with two proposed models, the memory-to-sequence model (Mem2Seq) and the global-to-local memory pointer network (GLMP). Mem2Seq is the first model to combine multi-hop memory attention with the idea of the copy mechanism. GLMP further introduces the concepts of response sketching and double pointer copying. We show that GLMP achieves state-of-the-art performance in human evaluation.
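Because the copy mechanism is the thread running through these models, here is a minimal PyTorch sketch of a single decoding step that mixes a vocabulary distribution with a copy distribution over dialogue-history tokens through a soft gate. The dimensions, the dot-product attention, and the gating formulation are simplifying assumptions for illustration; this is not the exact TRADE, Mem2Seq, or GLMP architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyAugmentedDecoderStep(nn.Module):
    # One decoding step: generate from the vocabulary or copy a token that
    # already appeared in the dialogue history, blended by a learned gate.
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)
        self.gate = nn.Linear(hidden_size, 1)

    def forward(self, dec_state, memory, memory_token_ids):
        # dec_state: (batch, hidden) decoder state for this step
        # memory: (batch, src_len, hidden) encoded dialogue history
        # memory_token_ids: (batch, src_len) vocabulary ids of the history tokens
        vocab_dist = F.softmax(self.vocab_proj(dec_state), dim=-1)

        # Dot-product attention over the history doubles as the copy distribution.
        scores = torch.bmm(memory, dec_state.unsqueeze(-1)).squeeze(-1)
        copy_attn = F.softmax(scores, dim=-1)
        copy_dist = torch.zeros_like(vocab_dist)
        copy_dist.scatter_add_(1, memory_token_ids, copy_attn)

        # A soft gate decides whether to generate or to copy at this step.
        p_gen = torch.sigmoid(self.gate(dec_state))
        return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

# Toy usage with made-up sizes.
step = CopyAugmentedDecoderStep(hidden_size=8, vocab_size=50)
dec_state = torch.randn(2, 8)
memory = torch.randn(2, 5, 8)
memory_token_ids = torch.randint(0, 50, (2, 5))
probs = step(dec_state, memory, memory_token_ids)
print(probs.shape)  # (2, 50): a proper distribution mixing generation and copying

The gate lets the model fall back on copying whenever the needed token (an entity value, say) sits in the dialogue history rather than in the decoder's vocabulary, which is the intuition behind the copy-based models summarized above.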