Boosting Search Engines with Interactive Agents

arXiv.org Artificial Intelligence

Can machines learn to use a search engine as an interactive tool for finding information? That would have far-reaching consequences for making the world's knowledge more accessible. This paper presents first steps in designing agents that learn meta-strategies for contextual query refinements. Our approach uses machine reading to guide the selection of refinement terms from aggregated search results. Agents are then empowered with simple but effective search operators to exert fine-grained and transparent control over queries and search results. We develop a novel way of generating synthetic search sessions, which leverages the power of transformer-based generative language models through (self-)supervised learning. We also present a reinforcement learning agent with dynamically constrained actions that can learn interactive search strategies completely from scratch. In both cases, we obtain significant improvements over one-shot search with a strong information retrieval baseline. Finally, we provide an in-depth analysis of the learned search policies.
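To make the refinement mechanism concrete, here is a minimal sketch (not code from the paper) of such an interactive search loop. The search and score_terms interfaces are assumptions standing in for the retrieval backend and the machine-reading model that scores candidate refinement terms:

def refine_query(question: str, search, score_terms, num_steps: int = 5):
    """Iteratively refine a query with simple search operators.

    search(query) -> list of result documents (strings); score_terms picks
    the best candidate term and an operator, or (None, None) to stop.
    Both are assumed interfaces, not the paper's published API.
    """
    query, aggregated = question, []
    for _ in range(num_steps):
        results = search(query)            # one-shot retrieval for current query
        aggregated.extend(results)
        # Candidate refinement terms come from the aggregated results themselves.
        candidates = {term for doc in results for term in doc.split()}
        term, op = score_terms(question, aggregated, candidates)
        if term is None:                   # reader finds no useful refinement
            break
        # Transparent operators: require ('+'), exclude ('-'), etc.
        query = f"{query} {op}{term}"      # e.g. 'who wrote hamlet +shakespeare'
    return aggregated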


Buster Posey explains why robot umps could call more balls than strikes

#artificialintelligence

But it's something being implemented and tested in the baseball world. The independent Atlantic League was the first league to test the newest technology, which includes a real-life umpire still manning the plate while wearing an earpiece connected to an iPhone. That umpire then relays the call from the TrackMan computer system, which uses Doppler radar. That's at least how plate umpire Brian deBrauwere executed it back in July, as he described it to ESPN. And Giants catcher Buster Posey isn't too sure about this new technology, specifically whether these robot umps would call more balls than strikes.


Topic Modeling with Wasserstein Autoencoders

arXiv.org Artificial Intelligence

We propose a novel neural topic model in the Wasserstein autoencoders (WAE) framework. Unlike existing variational autoencoder based models, we directly enforce a Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We discover that MMD performs much better than the Generative Adversarial Network (GAN) in matching a high-dimensional Dirichlet distribution. We further discover that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, we offer a more holistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models.
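As a rough illustration of the distribution-matching step (a sketch, not the authors' code), the MMD between encoded document-topic vectors and draws from the Dirichlet prior can be estimated as below. A plain RBF kernel is used here for simplicity, whereas the paper selects a kernel suited to the probability simplex:

import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Generic RBF kernel; the paper's simplex-aware kernel is only
    # approximated by this choice.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(encoded, prior_samples, kernel=gaussian_kernel):
    """Plug-in MMD^2 estimate (biased; diagonal terms included)."""
    k_xx = kernel(encoded, encoded).mean()
    k_yy = kernel(prior_samples, prior_samples).mean()
    k_xy = kernel(encoded, prior_samples).mean()
    return k_xx + k_yy - 2 * k_xy

# Match encoder outputs (softmax puts them on the simplex) to Dirichlet(0.1).
batch, n_topics = 64, 20
theta = torch.softmax(torch.randn(batch, n_topics), dim=-1)  # stand-in encoder output
prior = torch.distributions.Dirichlet(0.1 * torch.ones(n_topics)).sample((batch,))
loss = mmd_loss(theta, prior)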


Neural Consciousness Flow

arXiv.org Artificial Intelligence

The ability to reason beyond data fitting is essential if deep learning systems are to make a leap forward towards artificial general intelligence. Much effort has been made to model neural-based reasoning as an iterative decision-making process based on recurrent networks and reinforcement learning. Instead, inspired by the consciousness prior proposed by Yoshua Bengio, we explore reasoning with the notion of attentive awareness from a cognitive perspective, and formulate it in the form of attentive message passing on graphs, called neural consciousness flow (NeuCFlow). Aiming to bridge the gap between deep learning systems and reasoning, we propose an attentive computation framework with a three-layer architecture, which consists of an unconsciousness flow layer, a consciousness flow layer, and an attention flow layer. We implement the NeuCFlow model with graph neural networks (GNNs) and conditional transition matrices. Our attentive computation greatly reduces the complexity of vanilla GNN-based methods and is capable of running on large-scale graphs. We validate our model for knowledge graph reasoning by solving a series of knowledge base completion (KBC) tasks. The experimental results show NeuCFlow significantly outperforms previous state-of-the-art KBC methods, including embedding-based and path-based approaches. The reproducible code can be found at the link below.
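For intuition, a single attention-weighted message-passing step of the general kind the abstract describes might look like the following sketch (assumed tensor shapes and parameter names; not the NeuCFlow reference implementation):

import torch
import torch.nn.functional as F

def attentive_message_passing(h, edges, w_msg, w_att):
    """One attention-weighted message-passing step on a graph.

    h:     (num_nodes, dim) node states
    edges: (num_edges, 2) long tensor of (src, dst) pairs
    w_msg, w_att: (dim, dim) learned weight matrices (assumed names)
    """
    src, dst = edges[:, 0], edges[:, 1]
    messages = h[src] @ w_msg                        # transform source states
    scores = (h[dst] * (h[src] @ w_att)).sum(-1)     # unnormalized attention
    scores = scores - scores.max()                   # numerical stability
    alpha = torch.exp(scores)
    # Normalize per destination node so attention concentrates computation.
    denom = torch.zeros(h.size(0)).index_add_(0, dst, alpha)
    alpha = alpha / denom[dst].clamp_min(1e-12)
    out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * messages)
    return F.relu(out + h)                           # residual update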


Feature and TV films

Los Angeles Times

Mr. Smith Goes to Washington (1939) TCM Tue. 7 p.m.
Mean Streets (1973) Cinemax Sun. 6 a.m.
Batman Begins (2005) AMC Sun.
Throw Momma From the Train (1987) EPIX Sun.
Die Hard (1988) IFC Sun.
I Know What You Did Last Summer (1997) Starz Tue.
Gone in 60 Seconds (2000) CMT Wed. 8 p.m., Thur.
Total Recall (1990) Encore Thur. 2 a.m.
A Fish Called Wanda (1988) Encore Thur. 2 p.m., 9 p.m.
The World Is Not Enough (1999) EPIX Sat. 4 p.m.
Look Who's Talking (1989) OVA Sun.
Die Hard With a Vengeance (1995) IFC Thur.

Oil-platform workers, including an estranged couple, and a Navy SEAL make a startling deep-sea discovery. A clueless politician falls in love with a waitress whose erratic behavior is caused by a nail stuck in her head. After glimpsing his future, an ambitious politician battles the agents of Fate itself to be with the woman he loves. To help a friend, a suburban baby sitter drives into downtown Chicago with her two charges and a neighbor. Two teenage baby sitters and a group of children spend a wild night ...


How Netflix's AI Saves It $1 Billion Every Year -- The Motley Fool

#artificialintelligence

When you think of leaders in artificial intelligence, Netflix (NASDAQ:NFLX) doesn't usually jump to the top of the list. But the streaming video service's VP of Product Innovation Carlos Uribe-Gomez and Chief Product Officer Neil Hunt published a paper that says some of its AI algorithms save Netflix $1 billion each year. In their paper, the two Netflix execs detail how the company's recommendation engine impacts its churn rate. Netflix no longer reports its churn rate, but the paper notes that Netflix's "retention rates are already high enough that it takes a very meaningful improvement to make a retention difference of even 0.1%." Let's dive into how the recommendation engine saves Netflix money -- and what the return on investment looks like.
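For a sense of the arithmetic (with made-up numbers, since the article reports only the headline figure), even a 0.1% retention improvement compounds into meaningful revenue:

# Illustrative back-of-the-envelope churn math; the subscriber count and fee
# below are hypothetical, not figures from the article or the paper.
subscribers = 80_000_000       # hypothetical subscriber base
monthly_fee = 10.0             # hypothetical average revenue per user, USD
churn_reduction = 0.001        # the 0.1% retention difference cited above

retained_per_month = subscribers * churn_reduction
annual_savings = retained_per_month * monthly_fee * 12
print(f"~${annual_savings:,.0f} per year from a 0.1% retention gain")
# ~$9,600,000 per year -- repeated across many algorithmic improvements,
# the authors argue the combined effect reaches roughly $1B annually.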