Online Learning of Power Transmission Dynamics Machine Learning

We consider the problem of reconstructing the dynamic state matrix of transmission power grids from time-stamped PMU measurements in the regime of ambient fluctuations. Using a maximum-likelihood-based approach, we construct a family of convex estimators that adapt to the structure of the problem depending on the available prior information. The proposed method is fully data-driven and does not assume any knowledge of system parameters. It can be implemented in near real-time and requires a small amount of data. Our learning algorithms can be used for model validation and calibration, and can also be applied to related problems of system stability, detection of forced oscillations, generation re-dispatch, and estimation of the system state.
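The abstract does not spell out the estimator family; as a minimal sketch of the fully data-driven idea, the snippet below recovers a discrete-time state matrix from an ambient-noise trajectory by plain least squares. The VAR(1) model x_{t+1} = A x_t + noise, the dimensions, and all variable names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (stable) discrete-time state matrix driving the fluctuations.
A_true = np.array([[0.9, 0.1],
                   [-0.2, 0.8]])

# Simulate time-stamped ambient measurements x_{t+1} = A x_t + noise.
T = 2000
X = np.zeros((T, 2))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(2)

# Least-squares estimate of A from the data alone (no system parameters):
# solve min_A sum_t ||x_{t+1} - A x_t||^2.
past, future = X[:-1], X[1:]
A_hat = np.linalg.lstsq(past, future, rcond=None)[0].T

print(np.round(A_hat, 2))
```

Even this crude estimator recovers the matrix from a short record of noise-driven data, which is the property the abstract emphasizes; prior information would enter as convex constraints on the least-squares problem.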

Power Plant Performance Modeling with Concept Drift Machine Learning

A power plant is a complex and nonstationary system for which traditional machine learning modeling approaches fall short of expectations. Ensemble-based online learning methods provide an effective way to learn continuously from the dynamic environment and to update models autonomously in response to environmental changes. This paper proposes such an online ensemble regression approach to model power plant performance, which is critically important for operation optimization. Experimental results on both simulated and real data show that the proposed method achieves less than 1% mean absolute percentage error, which meets the general expectations of field operations.
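The ensemble mechanics are not detailed in the abstract; the toy sketch below shows one common pattern an online ensemble regressor can follow under concept drift: several online experts with different learning rates, combined by multiplicative weights on recent error. The drifting signal, learning rates, and the MAPE evaluation are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stream with concept drift: the true slope changes halfway through.
T = 400
x = rng.uniform(1.0, 2.0, size=T)
slope = np.where(np.arange(T) < T // 2, 3.0, -1.0)
y = slope * x + 0.01 * rng.standard_normal(T)

# Ensemble of simple online experts (here: scalar slopes updated by SGD),
# combined with multiplicative weights on each expert's recent error.
experts = np.array([0.0, 0.0])       # each expert keeps its own slope estimate
lrates = np.array([0.01, 0.2])       # a slow learner and a fast learner
weights = np.array([0.5, 0.5])
eta = 0.5                            # weight-update temperature

preds = np.zeros(T)
for t in range(T):
    p = experts * x[t]                # each expert's prediction
    preds[t] = weights @ p            # ensemble prediction
    err = p - y[t]
    weights *= np.exp(-eta * err**2)  # downweight badly-wrong experts
    weights /= weights.sum()
    experts -= lrates * err * x[t]    # each expert's own SGD step

mape = np.mean(np.abs((preds[50:] - y[50:]) / y[50:])) * 100
print(f"MAPE: {mape:.2f}%")
```

After the drift, the weight mass shifts to the fast-adapting expert within a few rounds, which is the behavior that lets such ensembles track a nonstationary plant.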

Decentralized Online Learning with Kernels Machine Learning

We consider multi-agent stochastic optimization problems over reproducing kernel Hilbert spaces (RKHS). In this setting, a network of interconnected agents aims to learn decision functions, i.e., nonlinear statistical models, that are optimal in terms of a global convex functional that aggregates data across the network, with access only to locally and sequentially observed samples. We propose solving this problem by allowing each agent to learn a local regression function while enforcing consensus constraints. We use a penalized variant of functional stochastic gradient descent operating simultaneously with low-dimensional subspace projections. These subspaces are constructed greedily by applying orthogonal matching pursuit to the sequence of kernel dictionaries and weights. By tuning the projection-induced bias, we propose an algorithm that allows each individual agent to learn, based upon its locally observed data stream and message passing with its neighbors only, a regression function that is close to the globally optimal regression function. That is, we establish that with constant step-size selections agents' functions converge to a neighborhood of the globally optimal one while satisfying the consensus constraints as the penalty parameter is increased. Moreover, the complexity of the learned regression functions is guaranteed to remain finite. On both multi-class kernel logistic regression and multi-class kernel support vector classification with data generated from class-dependent Gaussian mixture models, we observe stable function estimation and state-of-the-art performance for distributed online multi-class classification. Experiments on the Brodatz textures further substantiate the empirical validity of this approach.
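A heavily simplified, single-process sketch of the penalized scheme described above: a fixed RBF dictionary stands in for the greedily compressed matching-pursuit dictionaries, the network is a simulated complete graph, and all constants are assumptions. Each agent runs functional SGD on its own data stream plus a consensus penalty toward its neighbors' iterates.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(c, x, gamma=50.0):
    return np.exp(-gamma * (c - x) ** 2)

# Fixed RBF dictionary on a grid; it stands in for the paper's greedily
# compressed matching-pursuit dictionaries (an assumption of this sketch).
dic = np.linspace(0.0, 1.0, 15)

n_agents, eta, lam = 3, 0.2, 0.5
W = np.zeros((n_agents, dic.size))   # row i: agent i's kernel coefficients

def target(x):
    return np.sin(2 * np.pi * x)

for _ in range(3000):
    for i in range(n_agents):
        x = rng.uniform()                # agent i's locally observed sample
        y = target(x) + 0.05 * rng.standard_normal()
        phi = rbf(dic, x)
        grad = (W[i] @ phi - y) * phi    # stochastic gradient of local loss
        # Consensus penalty: pull agent i toward its neighbors' iterates
        # (here: all other agents, i.e. a complete communication graph).
        grad += lam * (n_agents * W[i] - W.sum(axis=0))
        W[i] -= eta * grad

xs = np.linspace(0.05, 0.95, 50)
Phi = rbf(dic[None, :], xs[:, None])     # (50, 15) evaluation features
fits = W @ Phi.T                         # each agent's learned function
disagreement = np.abs(fits[0] - fits[1]).max()
rmse = np.sqrt(np.mean((fits[0] - target(xs)) ** 2))
print(f"disagreement={disagreement:.3f}  rmse={rmse:.3f}")
```

With a constant step size, the agents' functions settle into a neighborhood of the common target while the penalty keeps them nearly identical, mirroring the convergence statement in the abstract.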

Fast and Strong Convergence of Online Learning Algorithms Machine Learning

In this paper, we study an online learning algorithm without explicit regularization terms. The algorithm is essentially a stochastic gradient descent scheme in a reproducing kernel Hilbert space (RKHS). The polynomially decaying step size in each iteration plays the role of regularization, ensuring the generalization ability of the algorithm. We develop a novel capacity-dependent analysis of the performance of the last iterate of the algorithm. The contribution of this paper is two-fold. First, our analysis leads to the best convergence rate so far in the standard mean square distance. Second, we establish, for the first time, strong convergence of the last iterate with polynomially decaying step sizes in the RKHS norm. The theoretical analysis fully exploits the fine structure of the underlying RKHS, and thus leads to sharp error estimates for the algorithm.
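A minimal sketch of the algorithm as described: unregularized functional SGD in an RKHS with a polynomially decaying step size, evaluated at its last iterate. The Gaussian kernel, the toy regression target, and all constants are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def K(a, b, gamma=10.0):
    # Gaussian reproducing kernel on the real line.
    return np.exp(-gamma * (a - b) ** 2)

def target(x):
    return np.cos(3 * x)

# Unregularized functional SGD in the RKHS: the iterate is a kernel
# expansion f_t = sum_s c_s K(x_s, .), gaining one atom per sample;
# the decaying step size is the only source of regularization.
T = 1500
points = np.zeros(T)
coefs = np.zeros(T)
for t in range(T):
    x = rng.uniform(-1.0, 1.0)
    y = target(x) + 0.05 * rng.standard_normal()
    fx = coefs[:t] @ K(points[:t], x)     # current iterate evaluated at x
    eta = 0.5 / np.sqrt(t + 1)            # polynomially decaying step size
    points[t] = x
    coefs[t] = -eta * (fx - y)            # gradient step adds one atom

# Error of the *last* iterate, the quantity the analysis above targets.
xs = np.linspace(-0.9, 0.9, 40)
f_last = K(xs[:, None], points[None, :]) @ coefs
rmse = np.sqrt(np.mean((f_last - target(xs)) ** 2))
print(f"last-iterate RMSE: {rmse:.3f}")
```

Note that nothing in the update shrinks the iterate toward zero; the step-size decay alone controls overfitting, which is exactly the regime the paper analyzes.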

Strategyproof Peer Selection using Randomization, Partitioning, and Apportionment Artificial Intelligence

Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. We make three fundamental contributions to the study of procedures or mechanisms for peer selection, a specific type of group decision-making problem, studied in computer science, economics, and political science. First, we propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. Second, we demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison with a suite of mechanisms found in the literature. Finally, our mechanism employs a randomized rounding technique that is of independent interest, as it solves the apportionment problem that arises in various settings where discrete resources such as parliamentary representation slots need to be divided proportionally.
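The randomized rounding highlighted above maps naturally to systematic sampling over fractional parts. The sketch below is one standard scheme (not necessarily the paper's exact procedure) for apportioning k discrete slots so that every item receives its quota's floor or ceiling and the expected allocation equals the quota; the function name and example quotas are illustrative.

```python
import math
import random

def randomized_apportionment(quotas, rng=random):
    """Round nonnegative quotas summing to an integer k into integers
    summing to k: each item gets floor(q) or ceil(q), and the expected
    allocation equals its quota (systematic sampling on fractional parts)."""
    base = [math.floor(q) for q in quotas]
    fracs = [q - b for q, b in zip(quotas, base)]
    r = round(sum(fracs))                # leftover slots to distribute
    if r:
        u = rng.random()
        thresholds = [u + j for j in range(r)]
        cum, idx = 0.0, 0
        for i, f in enumerate(fracs):
            cum += f
            # Item i wins one leftover slot per threshold in its interval;
            # since each fraction is < 1, it can win at most one.
            while idx < r and thresholds[idx] < cum:
                base[i] += 1
                idx += 1
    return base

# Example: 7 award slots shared among four candidates' fractional quotas.
quotas = [2.3, 1.7, 2.6, 0.4]
alloc = randomized_apportionment(quotas, rng=random.Random(0))
print(alloc)
```

Because the thresholds are spaced exactly one apart, each item's selection probability equals its fractional part, which gives the unbiasedness property that makes such roundings useful inside strategyproof mechanisms.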

Google X's online course teaches you to build flying cars

Daily Mail - Science & tech

You can now learn how to build a flying car in just four months thanks to a new $400 (£295) online course. Online education provider Udacity, founded by Google X and Kitty Hawk founder Sebastian Thrun, has announced two new 'nanodegrees'. One course will teach users the basics of driverless car engineering, while another will show students how to make systems for autonomous flying vehicles, such as the AeroMobil car. Students will learn the basics of autonomous flight, including vehicle state planning and estimation, as well as motion planning.

Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU Machine Learning

The online problem of computing the top eigenvector is fundamental to machine learning. In both adversarial and stochastic settings, previous approaches (such as matrix multiplicative weight update, follow the regularized leader, follow the compressed leader, and the block power method) either achieve optimal regret but run slowly, or run fast at the expense of losing a $\sqrt{d}$ factor in total regret, where $d$ is the matrix dimension. We propose a $\textit{follow-the-compressed-leader (FTCL)}$ framework which achieves optimal regret without sacrificing running time. Our idea is to "compress" the matrix strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. These results respectively resolve two open questions regarding the design of optimal and efficient algorithms for the online eigenvector problem.
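The compression step is the paper's contribution and is not reproduced here; for orientation, the sketch below implements the standard (uncompressed) matrix-multiplicative-weights baseline for the online top-eigenvector problem, whose full eigendecompositions are exactly the cost FTCL avoids. The dimensions and the gain-matrix distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sym_expm(S):
    # exp(S) for a symmetric matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.exp(vals)) @ vecs.T

d, T, eta = 5, 300, 0.1
G = np.zeros((d, d))                  # running sum of observed gain matrices
gains = []
for _ in range(T):
    W = sym_expm(eta * G)
    W /= np.trace(W)                  # density-matrix strategy (MMWU)
    M = rng.standard_normal((d, d))
    A = (M + M.T) / (2.0 * np.sqrt(d))   # symmetric gain matrix, O(1) norm
    gains.append(np.trace(W @ A))     # gain collected this round
    G += A

# Regret against the best fixed unit vector in hindsight, i.e. the top
# eigenvector of the summed gain matrices.
best = np.linalg.eigvalsh(G)[-1]
regret = best - sum(gains)
print(f"regret over {T} rounds: {regret:.2f}")
```

Each round of this baseline performs a full $d \times d$ eigendecomposition to form the strategy; compressing the strategy to dimension 3 (or 1) is what lets FTCL keep the regret while removing that bottleneck.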

Learning Path: Python: Machine and Deep Learning with Python


Do you want to explore the various arenas of machine learning and deep learning by creating insightful and interesting projects? If so, this Learning Path is ideal for you! Packt's Video Learning Paths are series of individual video products put together in a logical and stepwise manner such that each video builds on the skills learned in the video before it. Machine learning and deep learning give you unimaginably powerful insights into data. Both fields are increasingly pervasive in the modern data-driven world.

The Whys and Hows of Becoming a Robotics Engineer


In 2015, a poll of 200 senior corporate executives conducted by the National Robotics Education Foundation identified robotics as a major source of jobs for the United States. Indeed, some 81% of respondents agreed that robotics was the top area of job growth for the nation. Not that this should come as a surprise: as the demand for smart factories and automation increases, so does the need for robots. According to Nearshore Americas, smart factories are expected to add $500 billion to the global economy in 2017. In a survey conducted by technology consulting firm Capgemini, more than half of the respondents claimed to have invested $100 million or more into smart factory initiatives over the last five years.