Autonomous cars use a variety of technologies, such as radar, lidar, odometry and computer vision, to detect objects and people on the road, prompting the vehicle to adjust its trajectory accordingly. To tackle this problem, electrical engineers from the University of California, San Diego used powerful machine learning techniques in a recent experiment that incorporated so-called deep learning algorithms into a pedestrian-detection system that performs in near real time, using visual data only. The findings, which were presented at the International Conference on Computer Vision in Santiago, Chile, are an improvement over current methods of pedestrian detection, which use something called cascade detection. This traditional classification architecture in computer vision takes a multi-stage approach that first breaks down an image into smaller image windows. These sub-images are then classified according to whether they contain a pedestrian, using markers such as shape and color.
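The cascade idea described above can be sketched in a few lines: slide fixed-size windows over the image and run each window through a sequence of increasingly strict classifier stages, discarding a window as soon as any stage rejects it. This is a minimal, dependency-free illustration with toy stand-in classifiers, not the actual system from the article; the window size, stride, and stage functions are all made-up examples.

```python
def sliding_windows(width, height, win=64, stride=32):
    """Yield top-left corners of fixed-size windows tiling the image."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield (x, y)

def cascade_detect(image_size, stages):
    """Run every window through a cascade of classifier stages.

    A window is kept only if every stage accepts it; most windows are
    rejected by the cheap early stages, which is what makes cascade
    detection fast in practice.
    """
    width, height = image_size
    detections = []
    for window in sliding_windows(width, height):
        if all(stage(window) for stage in stages):
            detections.append(window)
    return detections

# Toy stages standing in for real shape/color classifiers.
cheap_stage = lambda w: w[0] % 64 == 0   # fast filter, rejects many windows early
strict_stage = lambda w: w == (64, 32)   # expensive final check on the survivors

hits = cascade_detect((256, 128), [cheap_stage, strict_stage])
print(hits)  # [(64, 32)]
```

In a real detector the early stages are cheap features (e.g. coarse shape or color cues) and the later stages are more expensive classifiers; the cascade's speed comes from spending almost no time on the many windows that obviously contain no pedestrian.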
State-of-the-art approaches to partially observable planning, like POMCP, are based on stochastic tree search. While these approaches are computationally efficient, they may still construct search trees of considerable size, which can limit performance when memory resources are restricted. In this paper, we propose Partially Observable Stacked Thompson Sampling (POSTS), a memory-bounded approach to open-loop planning in large POMDPs, which optimizes a fixed-size stack of Thompson Sampling bandits. We empirically evaluate POSTS in four large benchmark problems and compare its performance with different tree-based approaches. We show that POSTS achieves competitive performance compared to tree-based open-loop planning and offers a performance-memory tradeoff, making it suitable for partially observable planning with highly restricted computational and memory resources.
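The core idea of a fixed-size stack of Thompson Sampling bandits can be sketched as follows: one bandit per planning depth selects an action, the resulting open-loop action sequence is simulated, and each bandit is updated with the reward observed at its depth. This is only an illustrative sketch under simplifying assumptions (Beta-Bernoulli bandits, rewards normalized to [0, 1], a hypothetical black-box `simulate` function), not the actual POSTS algorithm from the paper.

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson Sampling over a discrete action set."""
    def __init__(self, n_actions):
        self.alpha = [1.0] * n_actions  # prior successes per action
        self.beta = [1.0] * n_actions   # prior failures per action

    def select(self):
        # Sample a value estimate per action, pick the argmax.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def update(self, action, reward):
        # Reward assumed normalized to [0, 1].
        self.alpha[action] += reward
        self.beta[action] += 1.0 - reward

def open_loop_plan(simulate, n_actions, horizon, n_sims=2000):
    """Optimize a fixed-size stack of bandits, one per planning depth.

    `simulate(plan)` is a black-box simulator returning a list of
    per-step rewards in [0, 1] for an action sequence. Memory usage is
    O(horizon * n_actions), independent of any search-tree size.
    """
    stack = [ThompsonBandit(n_actions) for _ in range(horizon)]
    for _ in range(n_sims):
        plan = [bandit.select() for bandit in stack]
        rewards = simulate(plan)
        for bandit, action, reward in zip(stack, plan, rewards):
            bandit.update(action, reward)
    return [bandit.select() for bandit in stack]

# Toy usage: a simulator in which action 1 is always best at every depth.
random.seed(0)
best = open_loop_plan(lambda plan: [1.0 if a == 1 else 0.0 for a in plan],
                      n_actions=3, horizon=4)
print(best)
```

The contrast with tree search is the memory bound: the stack stores a constant number of bandit statistics regardless of how many simulations are run, whereas a search tree grows with the number of distinct action-observation histories visited.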
Video: How Mozilla plans to win back Firefox users. Developers at Firefox maker Mozilla are working on a new app dubbed 'Scout', which aims to bring voice to the web and offer a voice assistant along the lines of Amazon's Alexa or Apple's Siri. Spotted by ZDNet sister site CNET, Mozilla revealed the Scout project in a session description for a talk at its All Hands 2018 meeting in San Francisco this week. The session, covering 'Technical Stack Requirements For A Voice Browser', suggests how the Scout app might work. "Hey Scout, read me the article about polar bears," it says.
Large-scale search advertising systems face many challenges in the Natural Language Understanding and Computer Vision areas, such as query and ads understanding, semantic representation, fast ads retrieval and relevance modeling, product image understanding, and product detection. In his insightful talk, Bruce Zhang from Microsoft AI & Research will walk us through these various challenges and share how the Microsoft team has developed and deployed cutting-edge technologies, based on deep learning and ads domain data, in their Ads stack to improve ad quality and increase Revenue Per 1000 searches (RPM). In addition, he will also share deep learning techniques used in Bing Ads, such as query/ads semantic embedding models and a KNN search service, a query tagging model, generative models for query rewriting, a DNN-based query-keyword relevance model, visual product recognition models, and product detection and description generation models for Product Ads. Who is this talk for? If your work touches machine learning, this talk is for you.