Results


How to Solve the New $1 Million Kaggle Problem - Home Value Estimates

@machinelearnbot

More specifically, I provide here high-level advice, rather than advice about selecting specific statistical models or algorithms, though I also discuss algorithm selection in the last section. Where Zillow's value estimates are very homogeneous, an easy improvement consists of increasing value differences between adjacent homes by boosting the importance of lot area and square footage in those locations. Then, for each individual home, compute an estimate based on the bin average and other metrics, such as recent sales prices for neighboring homes, a trend indicator for the bin in question (using time series analysis), and home features such as school rating, square footage, number of bedrooms, 2- or 3-car garage, lot area, view, fireplace(s), and when the home was built. With just a few (properly binned) features, a simple predictive algorithm such as HDT (Hidden Decision Trees, a combination of multiple decision trees and special regression) can work well for homes in zipcodes (or buckets of zipcodes) with 200 homes that have recent historical sales prices.
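The bin-average idea can be illustrated with a short sketch. This is not the article's HDT method; it is a minimal pandas example, and the column names (zipcode, sale_price, sqft, lot_area) and the 0.6/0.2 adjustment weights are placeholder assumptions.

```python
import pandas as pd

def bin_based_estimate(df: pd.DataFrame) -> pd.Series:
    """Estimate each home's value from its zipcode bin, adjusted by home features.

    Assumes hypothetical columns: zipcode, sale_price, sqft, lot_area.
    """
    # Bin-level statistics computed from recent sales in each zipcode.
    bins = df.groupby("zipcode").agg(
        bin_price=("sale_price", "median"),
        bin_sqft=("sqft", "mean"),
        bin_lot=("lot_area", "mean"),
    )
    merged = df.join(bins, on="zipcode")
    # Boost the weight of square footage and lot area so that adjacent homes in
    # otherwise homogeneous bins still receive distinct estimates.
    sqft_adj = 0.6 * (merged["sqft"] / merged["bin_sqft"] - 1.0)
    lot_adj = 0.2 * (merged["lot_area"] / merged["bin_lot"] - 1.0)
    return merged["bin_price"] * (1.0 + sqft_adj + lot_adj)
```

In practice the adjustment weights would be fitted from data (for example, by regressing sale price on the feature ratios within each bin) rather than hard-coded.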


Model evaluation, model selection, and algorithm selection in machine learning

#artificialintelligence

In contrast to k-nearest neighbors, a simple example of a parametric method would be logistic regression, a generalized linear model with a fixed number of model parameters: a weight coefficient for each feature variable in the dataset plus a bias (or intercept) unit. While the learning algorithm optimizes an objective function on the training set (with the exception of lazy learners), hyperparameter optimization is yet another task on top of it; here, we typically want to optimize a performance metric such as classification accuracy or the area under a Receiver Operating Characteristic curve. Thinking back to our discussion of learning curves and pessimistic biases in Part II, we noted that a machine learning algorithm often benefits from more labeled data; the smaller the dataset, the higher the pessimistic bias and the variance -- the sensitivity of our model to the way we partition the data. We start by splitting our dataset into three parts: a training set for model fitting, a validation set for model selection, and a test set for the final evaluation of the selected model.
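A minimal scikit-learn sketch of that three-way split follows; the dataset, the 60/20/20 proportions, and the choice of logistic regression's C as the hyperparameter grid are illustrative assumptions, not from the article.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# 60/20/20 split: train for fitting, validation for model selection, test for the final estimate.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)

# Hyperparameter optimization: pick the regularization strength C by validation AUC.
best_c, best_auc = None, -1.0
for c in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=c, max_iter=5000).fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    if auc > best_auc:
        best_c, best_auc = c, auc

# Refit on train + validation, then evaluate once on the held-out test set.
final = LogisticRegression(C=best_c, max_iter=5000).fit(
    np.vstack([X_train, X_val]), np.concatenate([y_train, y_val])
)
print("test AUC:", roc_auc_score(y_test, final.predict_proba(X_test)[:, 1]))
```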


Lift Analysis – A Data Scientist's Secret Weapon

#artificialintelligence

Whenever I read articles about data science, I feel like there is some important aspect missing: evaluating the performance and quality of a machine learning model. Consequently, the first post on this blog will deal with a pretty useful evaluation technique: lift analysis. When evaluating machine learning models, there is a plethora of possible metrics to assess performance.
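A minimal sketch of a lift table, assuming binary labels and model scores; the variable names and the decile count are placeholders, not code from the post.

```python
import pandas as pd

def lift_table(y_true, y_score, n_bins: int = 10) -> pd.DataFrame:
    """Rank cases by score, cut them into bins, and compare each bin's response rate to the base rate."""
    df = pd.DataFrame({"y": y_true, "score": y_score})
    df = df.sort_values("score", ascending=False).reset_index(drop=True)
    # Bin 1 holds the model's most confident predictions.
    df["bin"] = pd.qcut(df.index, n_bins, labels=range(1, n_bins + 1))
    base_rate = df["y"].mean()
    table = df.groupby("bin", observed=True)["y"].agg(cases="count", response_rate="mean")
    table["lift"] = table["response_rate"] / base_rate
    return table
```

A lift of 3.0 in the first decile would mean the top 10% of scored cases contain three times as many positives as a randomly chosen 10% would.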


SESSION 1 PAPER 1 SOME METHODS OF ARTIFICIAL INTELLIGENCE AND HEURISTIC PROGRAMMING

Classics (Collection 2)

Marvin Lee Minsky was born in New York on 9th August, 1927. He received his B.A. from Harvard in 1950 and his Ph.D. in Mathematics from Princeton in 1954. For the next three years he was a member of the Harvard University Society of Fellows, and in 1957-58 he was a staff member of the M.I.T. Lincoln Laboratories. At present he is Assistant Professor of Mathematics at M.I.T., where he is giving a course in Automata and Artificial Intelligence and is also a staff member of the Research Laboratory of Electronics. Particular attention is given to processes involving pattern recognition, learning, planning ahead, and the use of analogies or "models".


Report 84-38.pdf

Classics (Collection 2)

Machine learning can be used to formulate new meta-level knowledge. A small MYCIN-like medical diagnosis system was constructed as a starting point. Two heuristic methods are used in a program called Meta-Rulegen to form metarules from the knowledge base in the diagnosis system. In a preliminary study, 63 metarules were formed automatically and, by judiciously selecting a set of metarules, the efficiency of the diagnosis system can be improved significantly without degrading the quality of advice. This study suggests that metarules can be learned automatically to improve the efficiency of rule-based systems. 1 Introduction The value of meta-level knowledge for guiding the invocation, construction, and explanation of object-level rules in an expert system has been demonstrated by Davis [2]. In this paper we explore the use of machine learning methods for ...
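As a toy illustration of the metarule idea (not the Meta-Rulegen program, whose heuristics are not reproduced here), a metarule can prune which object-level rules a simple diagnosis engine even attempts to apply; the rules and findings below are hypothetical.

```python
# Hypothetical object-level rules: (name, required findings, conclusion).
OBJECT_RULES = [
    ("r1", {"fever", "cough"}, "flu"),
    ("r2", {"fever", "rash"}, "measles"),
    ("r3", {"headache"}, "migraine"),
]

def metarule(findings, rules):
    """Meta-level knowledge: if there is no fever, skip every rule that needs one."""
    if "fever" not in findings:
        return [r for r in rules if "fever" not in r[1]]
    return rules

def diagnose(findings):
    """Apply only the object-level rules the metarule lets through."""
    return [
        conclusion
        for _, required, conclusion in metarule(findings, OBJECT_RULES)
        if required <= findings  # all required findings are present
    ]

print(diagnose({"headache"}))  # only r3 is evaluated; prints ['migraine']
```

Skipping rules whose preconditions cannot be satisfied is what lets metarules improve efficiency without changing the advice the object-level rules would have given.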