Results


How to Solve the New $1 Million Kaggle Problem - Home Value Estimates

@machinelearnbot

More specifically, I provide high-level advice here, rather than advice about selecting specific statistical models or algorithms, though I do discuss algorithm selection in the last section. If this is the case, an easy improvement consists of increasing value differences between adjacent homes by boosting the importance of lot area and square footage in locations whose Zillow value estimates are very homogeneous. Then, for each individual home, compute an estimate based on the bin average and other metrics such as recent sale prices of neighboring homes, a trend indicator for the bin in question (using time series analysis), and home features such as school rating, square footage, number of bedrooms, 2- or 3-car garage, lot area, view or no view, fireplace(s), and year built. With just a few (properly binned) features, a simple predictive algorithm such as HDT (Hidden Decision Trees, a combination of multiple decision trees and special regression) can work well for homes in zipcodes (or buckets of zipcodes) containing 200 homes with recent historical sale prices.
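The binning idea can be sketched in a few lines of code. The snippet below only illustrates grouping recent sales by zipcode and blending each home's square footage with the bin average; the column names (zipcode, sale_price, sqft), the helper name, and the 200-sale bin size are assumptions for illustration, and this is not the author's HDT algorithm.

```python
# Minimal sketch of zipcode binning for home value estimates (assumed schema).
import pandas as pd

MIN_SALES_PER_BIN = 200  # bin size suggested in the article

def estimate_home_values(sales: pd.DataFrame, homes: pd.DataFrame) -> pd.Series:
    """sales: recent transactions with 'zipcode', 'sale_price', 'sqft' columns;
    homes: homes to score with 'zipcode' and 'sqft' columns."""
    bins = (
        sales.groupby("zipcode")
        .agg(bin_avg_price=("sale_price", "mean"),
             bin_avg_sqft=("sqft", "mean"),
             n_sales=("sale_price", "size"))
        .reset_index()
    )
    # Keep only well-populated bins, as the article recommends.
    bins = bins[bins["n_sales"] >= MIN_SALES_PER_BIN]
    scored = homes.merge(bins, on="zipcode", how="inner")
    # Scale the bin average by relative square footage as a crude adjustment;
    # the article would also fold in trend, school rating, lot area, etc.
    return scored["bin_avg_price"] * (scored["sqft"] / scored["bin_avg_sqft"])
```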


Understanding Machine Learning 04-08

#artificialintelligence

Machine learning is "[…] the branch of AI that explores ways to get computers to improve their performance based on experience" (source: Berkeley). Let's break that down to set some foundations on which to build our machine learning knowledge. Branch of AI: artificial intelligence is the study and development by which a computer and its systems are given the ability to successfully accomplish tasks that would typically require a human's intelligent behavior. Machine learning is a part of that process: it is the technology and process by which we train the computer to accomplish said task.


Compare NVIDIA Pascal GPUs and Google TPU

#artificialintelligence

The recent TPU paper by Google draws a clear conclusion – without accelerated computing, the scale-out of AI is simply not practical. Today's economy runs in the world's data centers, and data centers are changing dramatically. Not so long ago, they served up web pages, advertising and video content. Now, they recognize voices, detect images in video streams and connect us with information we need exactly when we need it. Increasingly, those capabilities are enabled by a form of artificial intelligence called deep learning.


Spark with HDInsight - Enterprise Ready Machine Learning and Interactive Data Analysis at Scale - Silicon Valley, CA

#artificialintelligence

In particular, it is especially amenable to machine learning and interactive data workloads, and can provide an order of magnitude greater performance than traditional Hadoop data processing tools. In this course, we will provide a deep dive into Spark as a framework, understand its design, how to utilize that design optimally, and how to develop effective machine learning applications with Spark on HDInsight. The course covers the fundamentals of Spark, its core APIs and design, relational data processing with Spark SQL, the fundamentals of Spark job execution, performance tuning, tracking, and debugging. Users will get hands-on experience with processing streaming data with Spark Streaming, training machine learning algorithms with Spark ML and R Server on Spark, as well as HDInsight configuration and platform-specific considerations such as remote development and access with Livy and IntelliJ, secure Spark, multi-user notebooks with Zeppelin, and virtual networking with other HDInsight clusters.
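As a flavor of the kind of Spark ML work the course describes, here is a minimal, hypothetical sketch of training a classifier on a Spark cluster. The CSV path and the column names ("f1", "f2", "f3", "label") are placeholders, and this is not material from the course itself.

```python
# Minimal Spark ML sketch: assemble features and fit a logistic regression.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("spark-ml-sketch").getOrCreate()

# Read a (hypothetical) dataset from cluster storage and prepare features.
df = spark.read.csv("wasb:///example/data/training.csv", header=True, inferSchema=True)
df = df.withColumn("label", df["label"].cast("double"))
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

# Fit a simple classifier and check held-out accuracy.
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
preds = model.transform(test)
accuracy = preds.filter(preds["label"] == preds["prediction"]).count() / preds.count()
print(f"held-out accuracy: {accuracy:.3f}")

spark.stop()
```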


Brain zapping helps US Navy Seals learn faster

Engadget

The Navy wants soldiers who can concentrate better and learn faster, and it's looking at a controversial piece of tech to do that: transcranial electrical stimulation. It has been testing a passive brain-stimulating device from Halo Neuroscience with "a small group of volunteers" from Seal Team Six, the group that killed Osama Bin Laden, and other units, according to Military.com. "Early results show promising signs," said spokesman Capt. The $749 Halo Neuroscience headset (below) looks a lot like regular headphones, and does actually play music. However, it also has silicon spikes on the band called "neuroprimers" that contact a wearer's head.


How To Build a Simple Spam-Detecting Machine Learning Classifier

#artificialintelligence

In this tutorial we will begin by laying out a problem and then proceed to show a simple solution to it using a Machine Learning technique called a Naive Bayes Classifier. This tutorial requires a little bit of programming and statistics experience, but no prior Machine Learning experience is required. You work as a software engineer at a company which provides email services to millions of people. Lately, spam has been a major problem and has caused your customers to leave. Your current spam filter only filters out emails that have been previously marked as spam by your customers.
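To make the idea concrete, here is a minimal sketch of such a spam classifier using scikit-learn's bag-of-words features and multinomial Naive Bayes. The toy emails and labels are invented for illustration; this is not the tutorial's code.

```python
# Minimal Naive Bayes spam classifier sketch with toy training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "meeting at 10am tomorrow",
          "cheap pills online", "project status update"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham (toy labels for illustration)

# Bag-of-words counts feed a multinomial Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free pills, win now"]))          # likely flagged as spam
print(model.predict_proba(["status of the project"]))  # class probabilities
```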


Don't fall for the AI hype: Here are the ingredients you need to build an actual useful thing

#artificialintelligence

Artificial intelligence these days is sold as if it were a magic trick. Data is fed into a neural net – or black box – as a stream of jumbled numbers, and voilà! It comes out the other side completely transformed, like a rabbit pulled from a hat. That's possible in a lab, or even on a personal dev machine, with carefully cleaned and tuned data. However, it takes a lot, an awful lot, of effort to scale machine-learning algorithms up to something resembling a multiuser service – something useful, in other words.


Bias-Variance Tradeoff in Machine Learning

#artificialintelligence

In this post, we will develop an intuitive sense for an important concept in Machine Learning called the Bias-Variance Tradeoff. Before we dive into the subject, allow me to go off on a tangent about human learning for a little bit. Practice alone does not make you better at a skill. We all know people who practice very hard but never seem to accomplish much. The reason is that they do not direct their effort appropriately.
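One common way to see the tradeoff is to fit polynomials of different degrees to noisy data: a low-degree fit underfits (high bias), while a very high-degree fit chases the noise (high variance). The sketch below is not from the original post; it simply illustrates the idea on a synthetic sine curve.

```python
# Bias-variance illustration: underfit, reasonable fit, and overfit polynomials.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)  # noisy samples
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free target for test error

for degree in (1, 4, 15):  # high bias, balanced, high variance
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x[:, None], y)
    err = mean_squared_error(y_test, model.predict(x_test[:, None]))
    print(f"degree={degree:2d}  test MSE={err:.3f}")
```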


Having Fun With Machine Learning With Node.js and Cloud 66 - DZone Big Data

#artificialintelligence

Machine learning is the art of using computer algorithms to learn from experiences and use those experiences for future predictions. Tom Mitchell gave a really simple definition of machine learning. A computer program is said to learn from experience (E) with respect to some task (T) and some performance measure (P), if its performance on (T), as measured by (P), improves with experience (E). This definition dazzled me a bit, too. In human language, if you want your program to predict, for example, buy patterns at a busy grocery store (task T), you can run it through a Machine Learning algorithm with data about past buying patterns (experience E) and, if it has successfully learned, it will then do better at predicting future buy patterns (performance measure P).
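Mitchell's definition can be made concrete with a toy experiment: hold the task T and the performance measure P fixed, and watch P improve as the experience E (the number of past examples) grows. The sketch below uses synthetic data and scikit-learn purely for illustration; it is not from the article.

```python
# Toy illustration of Mitchell's E/T/P framing: more experience, better performance.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (20, 100, 1000):  # experience E: number of past examples seen
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    p = accuracy_score(y_test, clf.predict(X_test))  # performance measure P
    print(f"E={n:4d} examples -> P={p:.2f} accuracy on task T")
```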

