Diagnosing and Improving Topic Models by Analyzing Posterior Variability

AAAI Conferences

Bayesian inference methods for probabilistic topic models can quantify uncertainty in the parameters, which has primarily been used to increase the robustness of parameter estimates. In this work, we explore other rich information that can be obtained by analyzing the posterior distributions in topic models. Experimenting with latent Dirichlet allocation on two datasets, we propose ideas incorporating information about the posterior distributions at the topic level and at the word level. At the topic level, we propose a metric called topic stability that measures the variability of the topic parameters under the posterior. We show that this metric is correlated with human judgments of topic quality as well as with the consistency of topics appearing across multiple models. At the word level, we experiment with different methods for adjusting individual word probabilities within topics based on their uncertainty. Humans prefer words ranked by our adjusted estimates nearly twice as often when compared to the traditional approach. Finally, we describe how the ideas presented in this work could potentially be applied to other predictive or exploratory models in future work.
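
As an illustration of the idea (the abstract does not give the exact definition, so this is an assumed formulation), a stability score for one topic could be computed from posterior samples of its word distribution, e.g., as the mean pairwise cosine similarity of the samples:

```python
import numpy as np

def topic_stability(samples):
    """Illustrative stability score for one topic.

    samples: array of shape (S, V) -- S posterior samples of a
    topic's distribution over a vocabulary of size V.
    Returns the mean pairwise cosine similarity of the samples;
    values near 1 suggest the topic is stable under the posterior.
    """
    X = samples / np.linalg.norm(samples, axis=1, keepdims=True)
    sims = X @ X.T                                  # pairwise cosine similarities
    upper = sims[np.triu_indices_from(sims, k=1)]   # unique pairs only
    return upper.mean()

# Toy example: 10 posterior draws of a 1000-word topic distribution.
rng = np.random.default_rng(0)
draws = rng.dirichlet(np.full(1000, 0.1), size=10)
print(topic_stability(draws))
```

Under this reading, a low score flags a topic whose word ranking changes substantially across posterior draws, which is the kind of topic the paper's human judges would be expected to rate poorly.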


Item Recommendation with Evolving User Preferences and Experience

arXiv.org Machine Learning

Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. In this way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observable, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary, and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time, with only user reviews and ratings as observables. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves rating prediction over state-of-the-art baselines by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
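
A minimal sketch of the HMM half of such a model, under assumed parameters (this is not the authors' implementation): a standard forward pass that infers a user's latent experience level from per-review emission log-likelihoods, with a left-to-right transition matrix so experience can only grow:

```python
import numpy as np

def forward(log_emissions, log_trans, log_init):
    """HMM forward pass in log space.

    log_emissions: (T, K) log-likelihood of each review t under each
        of K experience levels (e.g., from a per-level LDA).
    log_trans: (K, K) log transition matrix.
    log_init: (K,) log initial-state distribution.
    Returns (T, K) log forward probabilities.
    """
    T, K = log_emissions.shape
    alpha = np.empty((T, K))
    alpha[0] = log_init + log_emissions[0]
    for t in range(1, T):
        # log-sum-exp over previous states, written out for clarity
        prev = alpha[t - 1][:, None] + log_trans        # (K, K)
        m = prev.max(axis=0)
        alpha[t] = m + np.log(np.exp(prev - m).sum(axis=0)) + log_emissions[t]
    return alpha

# Toy run: 4 reviews, 3 experience levels, upper-triangular transitions
# (a user never regresses to a lower experience level).
K = 3
trans = np.triu(np.ones((K, K)))
trans /= trans.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore"):      # log(0) -> -inf is intended
    log_trans = np.log(trans)
rng = np.random.default_rng(1)
log_em = np.log(rng.dirichlet(np.ones(K), size=4))
alpha = forward(log_em, log_trans, np.log(np.full(K, 1.0 / K)))
print(alpha[-1].argmax())  # most likely final experience level
```

In the full model the emission term would come from the experience-conditioned LDA over review text and the rating likelihood; here it is random, purely to make the sketch runnable.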


Alexa, How Do You Really Work?

Slate

On this week's If Then, Slate's April Glaser and Will Oremus discuss the outrage at the largest TV-station owner in the country, Sinclair Broadcast Group, after the media conglomerate forced its local-news anchors to read a script that echoes Trumpian talking points. They also unpack Trump's beef with Jeff Bezos over his ownership of what Trump calls the #AmazonWashingtonPost. Meanwhile, music streaming service Spotify went public this week in a totally new kind of way. The hosts take a look at its unorthodox move and what it means for the company's future.


Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence

Neural Information Processing Systems

The problem of "Structure From Motion" is a central problem in vision: given the 2D locations of certain points we wish to recover the camera motion and the 3D coordinates of the points. Under simplified camera models, the problem reduces to factorizing a measurement matrix into the product of two low rank matrices. Each element of the measurement matrix contains the position of a point in a particular image. When all elements are observed, the problem can be solved trivially using SVD, but in any realistic situation many elements of the matrix are missing and the ones that are observed have a different directional uncertainty. Under these conditions, most existing factorization algorithms fail while human perception is relatively unchanged. In this paper we use the well known EM algorithm for factor analysis to perform factorization. This allows us to easily handle missing data and measurement uncertainty and more importantly allows us to place a prior on the temporal trajectory of the latent variables (the camera position). We show that incorporating this prior gives a significant improvement in performance in challenging image sequences.
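
A minimal sketch of the core idea, not the paper's algorithm: low-rank factorization that tolerates missing entries via an observation mask, using simple alternating ridge-regularized least squares as a stand-in for the EM updates (the paper additionally models per-measurement directional uncertainty and a temporal prior on the camera trajectory):

```python
import numpy as np

def factorize_missing(M, mask, rank, n_iters=100, lam=1e-3):
    """Low-rank factorization M ~ U @ V with missing entries.

    M: (m, n) measurement matrix; mask: (m, n) boolean, True where
    observed. Alternates regularized least-squares updates of U and V,
    fitting only the observed entries.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    I = lam * np.eye(rank)
    for _ in range(n_iters):
        for i in range(m):            # update each row of U
            o = mask[i]
            Vo = V[:, o]
            U[i] = np.linalg.solve(Vo @ Vo.T + I, Vo @ M[i, o])
        for j in range(n):            # update each column of V
            o = mask[:, j]
            Uo = U[o]
            V[:, j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ M[o, j])
    return U, V

# Toy example: rank-3 matrix with roughly 30% of entries missing.
rng = np.random.default_rng(2)
truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
mask = rng.random(truth.shape) > 0.3
U, V = factorize_missing(truth, mask, rank=3)
err = np.abs((U @ V - truth)[~mask]).mean()   # error on unobserved entries
print(f"mean abs error on missing entries: {err:.3f}")
```

A temporal-coherence prior of the kind the paper proposes would add a penalty tying consecutive rows of the camera factor together, which is what lets the method succeed on sequences where maskless factorization fails.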


Top 5 Deep Learning and AI Stories- June 1, 2018

#artificialintelligence

1. Fusing high performance computing and AI
2. Find your next binge-worthy show with AI
3. The connection between self-driving vehicles and radiology
4. Robots are learning new tasks by mimicking humans
5. How AI could spot a silent cancer in time to save lives

1. FUSING HIGH PERFORMANCE COMPUTING AND AI
During GTC Taiwan 2018, NVIDIA CEO Jensen Huang announced HGX-2: a "building block" cloud-server platform that will let server manufacturers create more powerful systems around NVIDIA GPUs for high performance computing and AI. TechCrunch's Ron Miller sums it up best, saying, "It's the stuff that geek dreams are made of." READ ARTICLE

2. FIND YOUR NEXT BINGE-WORTHY SHOW WITH AI
While AI may play a leading role in the entertainment industry's depictions of the future on screen, it's already starring in entertainment behind the scenes, thanks to Netflix. Our latest AI Podcast features the company's research and engineering director, Justin Basilico. LISTEN HERE

3. CONNECTING SELF-DRIVING VEHICLES AND RADIOLOGY
According to new commentary published in the Journal of the American College of Radiology, AI implementation may not be as far off as people believe, as seen in self-driving vehicles.