Internet of Things and Bayesian Networks

@machinelearnbot

As big data becomes more of a cliché with every passing day, is the Internet of Things the next marketing buzzword set to take over our lives? So what exactly is the Internet of Things (IoT), and why are we going to hear more about it in the coming days? Today, the Internet of Things denotes advanced connectivity of devices, systems and services that goes beyond machine-to-machine communication and covers a wide variety of domains and applications, particularly in manufacturing and in the power, oil and gas utilities. One IoT application is an automobile with built-in sensors that alert the driver when tyre pressure is low. Another is built-in sensors on equipment in a power plant that transmit real-time data, enabling better transmission planning and load balancing.


Text Mining Support in Semantic Annotation and Indexing of Multimedia Data

AAAI Conferences

This short paper describes a demonstrator that complements the paper "Towards Cross-Media Feature Extraction" in these proceedings. The demo exemplifies the use of textual resources, from which semantic information can be extracted, to support the semantic annotation and indexing of associated video material in the soccer domain. Entities and events extracted from textual data are marked up with semantic classes derived from an ontology modeling the soccer domain. We further show how audio-video features extracted by video analysis can be taken into account for additional annotation of specific soccer event types, and how those different types of annotation can be combined.
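To make the kind of mark-up concrete, here is a minimal sketch of what one combined text/video annotation record might look like. All field names, classes and values are illustrative assumptions, not taken from the paper's ontology:

    # Hypothetical annotation for one extracted soccer event; names,
    # classes and values are illustrative, not from the paper's ontology.
    annotation = {
        "event_type": "Goal",               # semantic class from a soccer ontology
        "player": "Player_7",               # entity extracted from the match report
        "minute": 23,                       # event time extracted from the text
        "video_segment": (1380.0, 1410.0),  # seconds, aligned via video analysis
    }

An index over such records would let a user retrieve, for example, all "Goal" segments involving a given player directly from the video archive.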


Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model

arXiv.org Machine Learning

In econometrics and finance, the vector error correction model (VECM) is an important time series model for cointegration analysis, used to estimate long-run equilibrium relationships among variables. Traditional analysis and estimation methodologies assume an underlying Gaussian distribution, but, in practice, heavy-tailed data and outliers can render these methods inapplicable. In this paper, we propose a robust model estimation method based on the Cauchy distribution to tackle this issue. In addition, sparse cointegration relations are considered to realize feature selection and dimension reduction. An efficient algorithm based on the majorization-minimization (MM) method is applied to solve the proposed nonconvex problem. The performance of this algorithm is demonstrated through numerical simulations.
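For readers who want the flavor of the approach: a VECM takes the form Delta y_t = Pi y_{t-1} + sum_{i=1}^{p-1} Gamma_i Delta y_{t-i} + e_t, where the low-rank matrix Pi encodes the cointegration relations. Below is a minimal sketch of the core MM idea for Cauchy-based robust fitting, reduced to an ordinary regression for clarity; it is not the authors' full sparse VECM algorithm, and the function name and parameters (gamma, n_iter) are assumptions:

    import numpy as np

    def cauchy_mm_fit(X, y, gamma=1.0, n_iter=50):
        # Fit y ~ X @ b under a Cauchy likelihood via MM: each iteration
        # majorizes log(1 + r^2/gamma^2) by a quadratic surrogate, which
        # reduces the update to weighted least squares with weights
        # w = 1 / (1 + r^2/gamma^2).
        b = np.linalg.lstsq(X, y, rcond=None)[0]      # Gaussian (OLS) start
        for _ in range(n_iter):
            r = y - X @ b                             # current residuals
            w = 1.0 / (1.0 + (r / gamma) ** 2)        # downweights outliers
            Xw = X * w[:, None]
            b = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # WLS normal equations
        return b

Each MM iteration is guaranteed not to increase the Cauchy negative log-likelihood, which is what makes the nonconvex problem tractable; the paper adds the sparsity structure on the cointegration relations on top of this.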


Google moving into "Hardware" as the Internet of Things era takes hold

Huffington Post

Google's strategic move into selling its own branded mobile phones is another step in the merging of "software plus hardware" that Apple, Microsoft, Amazon and, more recently, Facebook have recognized as central to the making of the "Internet of Things" era. The critical issue is not just providing the software and operating system but increasing the value in the devices that become the interface to the customer: the smartphone; the smart tablet/laptop, such as Microsoft Surface; the smart speaker, such as Amazon Echo with Alexa; and the Facebook Oculus Rift and Microsoft HoloLens, which are the new foundations of natural-language speech recognition services and of the virtual reality (VR) and augmented reality (AR) wave breaking now and into 2017 and beyond. Google's long-term market is changing: advertising revenue from search, while still strong, now faces new ways of searching via speech, image recognition and virtual interaction, and Google has perhaps been late to realize that this shift to software plus hardware is where the Internet of Things may be shaping the market, with the connected home, connected car and connected work all running through these devices. It is all about "market making" beyond the big cloud data centers and big data analytics: building out the edge of the cloud network with potentially billions of connected sensors and devices. If the mobile phone is becoming the "remote control to this world" and platforms the "fabric of social networks and connected experiences," then Google, like others, is rushing into this space with stronger software and hardware offerings.


The Many Faces of Exponential Weights in Online Learning

arXiv.org Machine Learning

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods. Here we explore the alternative approach of putting exponential weights (EW) first. We show that many standard methods and their regret bounds then follow as a special case by plugging in suitable surrogate losses and playing the EW posterior mean. For instance, we easily recover Online Gradient Descent by using EW with a Gaussian prior on linearized losses, and, more generally, all instances of Online Mirror Descent based on regular Bregman divergences also correspond to EW with a prior that depends on the mirror map. Furthermore, appropriate quadratic surrogate losses naturally give rise to Online Gradient Descent for strongly convex losses and to Online Newton Step. We further interpret several recent adaptive methods (iProd, Squint, and a variation of Coin Betting for experts) as a series of closely related reductions to exp-concave surrogate losses that are then handled by Exponential Weights. Finally, a benefit of our EW interpretation is that it opens up the possibility of sampling from the EW posterior distribution instead of playing the mean. As already observed by Bubeck and Eldan, this recovers the best-known rate in Online Bandit Linear Optimization.
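As a point of reference for the reductions described above, here is a minimal sketch of the basic exponential weights (Hedge) update over a finite set of experts, where the learner plays the EW posterior mean; the learning rate eta and the function name are illustrative assumptions, not the paper's notation:

    import numpy as np

    def exponential_weights(loss_matrix, eta=0.5):
        # loss_matrix[t, k] = loss of expert k in round t.
        # Maintain log-weights for numerical stability; each round, play
        # the EW posterior mean over experts, then update multiplicatively.
        T, K = loss_matrix.shape
        log_w = np.zeros(K)                     # uniform prior over experts
        total_loss = 0.0
        for t in range(T):
            p = np.exp(log_w - log_w.max())
            p /= p.sum()                        # EW posterior distribution
            total_loss += p @ loss_matrix[t]    # loss of the posterior-mean play
            log_w -= eta * loss_matrix[t]       # exponential weight update
        return total_loss

Plugging linearized or quadratic surrogate losses into this same loop, with an appropriate (possibly continuous) prior in place of the uniform one, is the mechanism by which the paper recovers Online Gradient Descent, Online Mirror Descent and Online Newton Step.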