Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model Machine Learning

In econometrics and finance, the vector error correction model (VECM) is an important time series model for cointegration analysis, used to estimate long-run equilibrium relationships among variables. Traditional analysis and estimation methodologies assume an underlying Gaussian distribution but, in practice, heavy-tailed data and outliers can render these methods inapplicable. In this paper, we propose a robust model estimation method based on the Cauchy distribution to tackle this issue. In addition, sparse cointegration relations are considered to realize feature selection and dimension reduction. An efficient algorithm based on the majorization-minimization (MM) method is applied to solve the proposed nonconvex problem. The performance of this algorithm is demonstrated through numerical simulations.
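The MM principle behind such robust estimators can be illustrated on a much smaller problem than the full sparse VECM. The sketch below (not the paper's algorithm) estimates the location of heavy-tailed data under a Cauchy likelihood: majorizing each `log(gamma^2 + r^2)` term at the current iterate turns the nonconvex likelihood into a weighted least-squares update, which is why gross outliers are automatically downweighted.

```python
import numpy as np

def cauchy_location_mm(x, gamma=1.0, n_iter=50):
    """Cauchy maximum likelihood location estimate via MM (a.k.a. IRLS).

    The Cauchy negative log-likelihood is sum_i log(gamma^2 + (x_i - mu)^2).
    Majorizing each log term by its tangent at the current iterate gives a
    weighted least-squares surrogate with weights 1 / (gamma^2 + r_i^2),
    so each MM step is a simple weighted mean.
    """
    mu = np.median(x)  # robust starting point
    for _ in range(n_iter):
        w = 1.0 / (gamma**2 + (x - mu) ** 2)  # outliers get tiny weights
        mu = np.sum(w * x) / np.sum(w)        # closed-form surrogate minimizer
    return mu

# Heavy-tailed sample: a Gaussian core centered at 2 plus gross outliers
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 1.0, 500), [1000.0, -800.0, 1200.0]])
print(cauchy_location_mm(x))  # stays near 2; the sample mean is pulled far away
```

Each MM step decreases the Cauchy negative log-likelihood monotonically, which is the same convergence mechanism the abstract relies on for the full nonconvex VECM objective.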

Designing communication systems via iterative improvement: error correction coding with Bayes decoder and codebook optimized for source symbol error Artificial Intelligence

In error correction coding (ECC), the typical error metric is the bit error rate (BER), which measures the number of bit errors. Under this metric, the positions of the bits are irrelevant to the decoding and, in many noise models, irrelevant to the BER as well. In many applications this is unsatisfactory, as not all bits are equal: different bits can have different significance. We look at ECC from a Bayesian perspective and introduce Bayes estimators with general loss functions to take bit significance into account. We propose ECC schemes that optimize this error metric. As the problem is highly nonlinear, traditional ECC construction techniques are not applicable, and we use iterative improvement search techniques to find good codebooks. We provide numerical experiments showing that the resulting codes can be superior to classical linear block codes, such as Hamming codes, and to decoding methods such as minimum distance decoding.
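The Bayes-decoder idea can be sketched concretely. In the toy example below (the codebook, weights, and channel parameter are all illustrative assumptions, not taken from the paper), a posterior over source symbols is computed from a binary symmetric channel likelihood, and the decoder returns the symbol minimizing the expected *weighted* bit loss rather than the nearest codeword:

```python
import numpy as np

# Toy setup: 4 two-bit source symbols mapped to length-5 codewords.
# Source bit 0 is treated as more significant than source bit 1.
codebook = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (0, 1, 1, 0, 1),
    (1, 0): (1, 0, 1, 1, 0),
    (1, 1): (1, 1, 0, 1, 1),
}
weights = np.array([4.0, 1.0])  # loss weight per source bit position
p = 0.1                         # BSC crossover probability

def posterior(y):
    """P(source symbol | received word y) under a binary symmetric channel."""
    post = {}
    for s, c in codebook.items():
        d = sum(a != b for a, b in zip(c, y))       # Hamming distance
        post[s] = p**d * (1 - p) ** (len(y) - d)    # channel likelihood
    z = sum(post.values())
    return {s: v / z for s, v in post.items()}

def bayes_decode(y):
    """Bayes estimator: minimize posterior-expected weighted bit loss."""
    post = posterior(y)

    def risk(s):
        return sum(
            post[t] * np.dot(weights, np.abs(np.array(s) - np.array(t)))
            for t in codebook
        )

    return min(codebook, key=risk)

print(bayes_decode((1, 1, 0, 1, 1)))  # clean codeword decodes to its symbol
```

With uniform weights this reduces to ordinary maximum a posteriori decoding; unequal weights shift decisions toward protecting the significant bit, which is the error metric the proposed schemes optimize their codebooks for.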

Bandit Learning Through Biased Maximum Likelihood Estimation Machine Learning

We propose BMLE, a new family of bandit algorithms formulated in a general way based on the biased maximum likelihood estimation method originally introduced in the adaptive control literature. We design the cost-bias term to tackle the exploration-exploitation tradeoff in stochastic bandit problems. For Bernoulli bandits, we provide an explicit closed-form expression for the index of an arm, which is trivial to compute. We also provide a general recipe for extending the BMLE algorithm to other families of reward distributions. We prove that for Bernoulli bandits the BMLE algorithm achieves a logarithmic finite-time regret bound and hence attains order optimality. Through extensive simulations, we demonstrate that the proposed algorithms achieve regret comparable to the best of several state-of-the-art baseline methods, while enjoying a significant computational advantage over the other best-performing methods. The generality of the proposed approach makes it possible to address more complex models, including general adaptive control of Markovian systems.
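The biased-MLE principle admits a simple closed form in the Bernoulli case, which the sketch below illustrates. Note the bias term `alpha * log(t)` and the resulting index `(S + b) / (N + b)` are an illustrative instance of biased maximum likelihood estimation, not the paper's exact index expression:

```python
import math
import random

def bmle_style_index(successes, pulls, t, alpha=1.0):
    """Illustrative biased-MLE index for a Bernoulli arm.

    Maximizing the log-likelihood plus an exploration bias b*log(theta),
    with b = alpha*log(t), has the closed-form maximizer
    theta_hat = (S + b) / (N + b): a mean estimate inflated toward 1
    for rarely pulled arms, which drives exploration.
    """
    b = alpha * math.log(max(t, 2))
    return (successes + b) / (pulls + b)

def run_bandit(means, horizon=5000, seed=0):
    rng = random.Random(seed)
    k = len(means)
    S = [0] * k  # successes per arm
    N = [0] * k  # pulls per arm
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1  # pull each arm once to initialize
        else:
            a = max(range(k), key=lambda i: bmle_style_index(S[i], N[i], t))
        S[a] += rng.random() < means[a]  # Bernoulli reward
        N[a] += 1
    return N

pulls = run_bandit([0.3, 0.5, 0.7])
print(pulls)  # the 0.7 arm should accumulate the vast majority of pulls
```

The index is a single arithmetic expression per arm per round, which is the kind of computational advantage over sampling- or optimization-based baselines that the abstract highlights.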

Software for Data Mining, Analytics, Data Science, and Knowledge Discovery

Classification software: building models to separate two or more discrete classes using multiple methods (decision trees, rules, neural networks, Bayesian methods, SVM, genetic algorithms, rough sets, fuzzy logic, and other approaches), with analysis of results (ROC)
Social network analysis, link analysis, and visualization software
Text analysis, text mining, and information retrieval (IR)
Web analytics and social media analytics software
BI (business intelligence), database, and OLAP software
Data transformation, data cleaning, and data cleansing
Libraries, components, and developer kits for creating embedded data mining applications
Web content mining, web scraping, and screen scraping