Machine Learning - Ensemble Methods

#artificialintelligence

Condorcet's jury theorem comes from his Essay on the Application of Analysis to the Probability of Majority Decisions (https://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem). Principle: if we assume each voter's probability of making a good decision is better than random (i.e., greater than 0.50), then the probability of a good majority decision increases with each voter added. Condorcet showed the converse is also true: if each voter's probability of making a good decision is worse than random (i.e., below 0.50), then the probability of a good decision decreases with each voter added. Example: even if each voter's probability is only slightly better than random (e.g., 0.51), the principle holds.
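
To make the theorem concrete, here is a minimal Python sketch (not from the essay itself; the function name and the sample sizes are illustrative) that computes the probability of a correct majority decision among n independent voters who are each correct with probability p:

```python
from math import comb

def majority_correct_prob(p: float, n: int) -> float:
    """Probability that a majority of n independent voters (n odd),
    each correct with probability p, reaches the correct decision."""
    k_min = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Compare a slight edge over random (0.51) with a slight deficit (0.49)
# as the number of voters grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_prob(0.51, n), 4),
             round(majority_correct_prob(0.49, n), 4))
```

Running it shows the two regimes side by side: with p = 0.51 the majority probability climbs toward 1 as n grows, while with p = 0.49 it falls toward 0, which is exactly the theorem and its converse.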


Explaining the Almon Distributed Lag Model

#artificialintelligence

That post drew quite a number of email requests for more information about the Almon estimator and how it fits into the overall scheme of things. In addition, Almon's approach to modelling distributed lags has more recently been used very effectively in the estimation of the so-called MIDAS model. The MIDAS model (developed by Eric Ghysels and his colleagues – e.g., see Ghysels et al., 2004) is designed to handle regression analysis using data observed at different frequencies. The acronym "MIDAS" stands for "Mixed-Data Sampling". The MIDAS model can be implemented in R, for instance (e.g., see here), as well as in EViews.
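
For readers asking how the Almon restriction works mechanically, here is a rough NumPy sketch (separate from the R and EViews implementations mentioned above; the function name, lag length, and polynomial degree are illustrative). It constrains the lag weights to lie on a low-order polynomial, estimates the polynomial coefficients by OLS on constructed regressors, and then recovers the individual lag weights:

```python
import numpy as np

def almon_lag_ols(y, x, n_lags=4, poly_degree=2):
    """Estimate a distributed-lag model y_t = alpha + sum_i beta_i * x_{t-i} + e_t
    with the Almon restriction beta_i = a_0 + a_1*i + ... + a_p*i^p."""
    T = len(y)
    rows = range(n_lags, T)  # drop the first n_lags observations
    # Constructed regressors z_{j,t} = sum_i i^j * x_{t-i}
    Z = np.column_stack([
        [sum((i ** j) * x[t - i] for i in range(n_lags + 1)) for t in rows]
        for j in range(poly_degree + 1)
    ])
    Z = np.column_stack([np.ones(len(Z)), Z])  # add an intercept column
    coef, *_ = np.linalg.lstsq(Z, np.asarray(y)[n_lags:], rcond=None)
    alpha, a = coef[0], coef[1:]
    # Recover the lag weights beta_i from the polynomial coefficients a_j
    betas = np.array([sum(a[j] * i ** j for j in range(poly_degree + 1))
                      for i in range(n_lags + 1)])
    return alpha, betas
```

The payoff of the restriction is parsimony: instead of estimating n_lags + 1 free lag coefficients, you estimate only poly_degree + 1 polynomial coefficients, which is what made the Almon estimator attractive long before MIDAS extended the idea to mixed-frequency data.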


How to tell which iPad model you have

PCWorld

Updated September 6, 2017 to reflect the latest iPad models. You might think you know which iPad you have. But when you need to know exactly which model you have, or better yet, which generation, it can get a little trickier. You don't have to be an Apple Store Genius to figure it out, though you do have to know where to look... and what to look for. In addition to the marketing names that we all know so well, all iPads have a model number.


Plus-sized model 'cried when asked to be cover model'

BBC News

US plus-size model Tess Holliday says she cried when asked to be on the front cover of the UK edition of Cosmopolitan magazine.


Competitive Machine Learning: Best Theoretical Prediction vs Optimization

arXiv.org Machine Learning

Machine learning is often used in competitive scenarios: participants learn and fit static models, and those models compete on a shared platform. The common assumption is that in order to win a competition one has to have the best predictive model, i.e., the model with the smallest out-of-sample error. Is that necessarily true? Does the best theoretical predictive model for a target always yield the best reward in a competition? If not, can one take the best model and purposefully change it into a theoretically inferior model which in practice results in a higher competitive edge? What does that modification look like? And finally, if all participants modify their prediction models towards the best practical performance, who benefits the most: players with inferior models, or those with theoretical superiority? The main theme of this paper is to raise these important questions and propose a theoretical model to answer them. We consider a case study where two linear predictive models compete over a shared target. The model with the closest estimate gets the whole reward, which is equal to the absolute value of the target. We characterize the reward function of each model and, using a basic game-theoretic approach, demonstrate that the inferior competitor can significantly improve its performance by choosing optimal model coefficients that are different from the best theoretical prediction. This is a preliminary study that emphasizes the fact that in many applications where predictive machine learning is at the service of competition, much can be gained from practical (back-testing) optimization of the model compared to static prediction improvement.
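
As a rough illustration of the competition the abstract describes, here is a small Monte Carlo sketch in Python (my own simplification, not the paper's formal model; the target equation, the coefficients, and the tie-splitting rule are illustrative assumptions): a scalar target, two linear predictors, and a winner-take-all reward equal to the absolute value of the target, with the weaker player back-testing a grid of coefficients against a rival who plays the theoretically best one.

```python
import numpy as np

# Illustrative setup: a scalar target y = 2*x + noise, two linear competitors
# predicting y from x, and a winner-take-all reward of |y| for whoever's
# prediction is closer to the realized target.
rng = np.random.default_rng(0)
x = rng.normal(size=200_000)
y = 2.0 * x + rng.normal(size=200_000)

def player_reward(coef_player, coef_rival, x, y):
    """Average reward of the player against the rival; exact ties split |y|."""
    err_p = np.abs(coef_player * x - y)
    err_r = np.abs(coef_rival * x - y)
    reward = np.where(err_p < err_r, np.abs(y),
                      np.where(err_p == err_r, 0.5 * np.abs(y), 0.0))
    return reward.mean()

# The rival commits to the best theoretical coefficient (2.0). The player,
# whose fitted coefficient is 1.6, back-tests a grid of alternative
# coefficients on the same data instead of sticking with the fitted value.
fitted, rival = 1.6, 2.0
grid = np.linspace(1.0, 3.0, 81)
rewards = [player_reward(c, rival, x, y) for c in grid]
print("reward with fitted coefficient:", player_reward(fitted, rival, x, y))
print("best back-tested coefficient  :", grid[int(np.argmax(rewards))])
```

The design point mirrored from the abstract is that the reward depends on being closest on each realization, not on minimizing average prediction error, which is why the coefficient that maximizes back-tested reward need not coincide with the best predictive one.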