Bayesian inference methods for probabilistic topic models can quantify uncertainty in the parameters, which has primarily been used to increase the robustness of parameter estimates. In this work, we explore other rich information that can be obtained by analyzing the posterior distributions in topic models. Experimenting with latent Dirichlet allocation on two datasets, we propose ideas incorporating information about the posterior distributions at the topic level and at the word level. At the topic level, we propose a metric called topic stability that measures the variability of the topic parameters under the posterior. We show that this metric is correlated with human judgments of topic quality as well as with the consistency of topics appearing across multiple models. At the word level, we experiment with different methods for adjusting individual word probabilities within topics based on their uncertainty. Humans prefer words ranked by our adjusted estimates nearly twice as often as those ranked by the traditional approach. Finally, we describe how the ideas presented in this work could potentially be applied to other predictive or exploratory models in future work.
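The abstract does not spell out how topic stability is computed, but a natural instantiation of "variability of the topic parameters under the posterior" is the average pairwise similarity of a topic's word-probability vectors across posterior samples. The sketch below is a hypothetical illustration of that idea, not the paper's actual metric:

```python
import numpy as np

def topic_stability(samples):
    """Illustrative topic-stability score: mean pairwise cosine
    similarity of one topic's word-probability vectors across
    posterior samples. Higher values mean the topic's parameters
    vary less under the posterior (a more "stable" topic)."""
    S = np.asarray(samples, dtype=float)           # (n_samples, vocab_size)
    U = S / np.linalg.norm(S, axis=1, keepdims=True)  # unit-normalize rows
    sims = U @ U.T                                  # all pairwise cosines
    iu = np.triu_indices(len(S), k=1)               # upper triangle only
    return float(sims[iu].mean())

# A topic whose posterior samples agree scores higher than one whose
# samples scatter across different word distributions.
stable = [[0.50, 0.30, 0.20], [0.52, 0.28, 0.20], [0.49, 0.31, 0.20]]
unstable = [[0.70, 0.20, 0.10], [0.10, 0.20, 0.70], [0.20, 0.70, 0.10]]
print(topic_stability(stable) > topic_stability(unstable))  # True
```

In practice the posterior samples would come from, e.g., Gibbs sampling iterations of the topic-word distribution; cosine similarity is one of several reasonable choices (symmetrized KL divergence would be another).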
Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. In this way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observable, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience as it progresses over time. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves rating prediction over state-of-the-art baselines by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
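The generative story described above, with a latent experience level evolving as a Markov chain and experience-dependent facet proportions, can be sketched as a toy simulation. All numbers here (3 experience levels, 4 facets, the transition and Dirichlet parameters) are made-up illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_levels, n_facets, n_reviews = 3, 4, 6

# Hypothetical HMM transition matrix: experience can only stay or advance.
trans = np.array([[0.8, 0.2, 0.0],
                  [0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0]])

# Hypothetical experience-dependent Dirichlet priors over facets:
# novices lean toward facet 0, experts toward facets 2 and 3.
alpha = np.array([[4.0, 1.0, 1.0, 1.0],
                  [1.0, 4.0, 1.0, 1.0],
                  [1.0, 1.0, 4.0, 4.0]])

level, levels = 0, []
for t in range(n_reviews):
    # Each review's facet mixture is drawn from the Dirichlet prior
    # tied to the user's current (latent) experience level.
    theta = rng.dirichlet(alpha[level])
    levels.append(level)
    print(f"review {t}: level={level}, facets={np.round(theta, 2)}")
    # Experience evolves between reviews via the HMM.
    level = rng.choice(n_levels, p=trans[level])
```

In the actual model the levels and facet assignments are latent and must be inferred from review text and ratings; this forward simulation only illustrates the assumed generative direction.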
Its venerable phone line wasn't the only newly minted product Apple showed off at the iPhone 8 event on Tuesday. Eddy Cue announced onstage that the company will expand availability of its TV app to seven new countries by the end of the year and will be adding local news and sports programming as well. The TV app will be available in Australia and Canada next month, then spread to Germany, France, Sweden, Norway and the UK by the end of the year. US sports fans (that is, those who live in the country) will be able to track their favorite teams and have Apple TV push an on-screen notification whenever a game starts. Apple also announced that, by the end of the year, users will be able to ask Siri directly to switch to a game.
Sonos has unveiled a smart speaker that works with both Google's and Amazon's AI assistants. The $199 Sonos One is voice controlled and works with 80 streaming services, making it the first consumer gadget to work with multiple voice AIs. The new speaker is driven by two Class-D digital amplifiers, one tweeter, and one mid-woofer.