Kaggle


AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks

MIT Technology Review

A new competition heralds what is likely to become the future of cybersecurity and cyberwarfare, with offensive and defensive AI algorithms doing battle. "It's a brilliant idea to catalyze research into both fooling deep neural networks and designing deep neural networks that cannot be fooled," says Jeff Clune, an assistant professor at the University of Wyoming who studies the limits of machine learning. Machine learning, and deep learning in particular, is rapidly becoming an indispensable tool in many industries. "Adversarial machine learning is more difficult to study than conventional machine learning; it's hard to tell if your attack is strong or if your defense is actually weak," says Ian Goodfellow, a researcher at Google Brain, a division of Google dedicated to researching and applying machine learning, who organized the contest.
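The attack side of such a contest is easy to picture concretely. Below is a minimal sketch of one well-known way to fool a network, the fast gradient sign method; the PyTorch toy model, random input, and epsilon value are illustrative assumptions, not anything taken from the competition itself.

```python
# Minimal FGSM sketch: perturb an input so a classifier is more likely
# to misclassify it, while keeping the change imperceptibly small.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that *increases* the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values valid

# Hypothetical usage with a toy model and a random stand-in "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # stand-in input
y = torch.tensor([3])          # stand-in true label
x_adv = fgsm_attack(model, x, y)
```

A defense entry in such a contest would then be judged on whether it still classifies inputs like `x_adv` correctly.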


Implementing MaLSTM on Kaggle's Quora Question Pairs competition

#artificialintelligence

A few months ago I came across a very nice article called Siamese Recurrent Architectures for Learning Sentence Similarity, which offers a pretty straightforward approach to the common problem of sentence similarity. Siamese networks seem to perform well on similarity tasks and have been used for jobs like sentence semantic similarity and recognizing forged signatures, among many others. Word embedding is a modern way to represent words in deep learning models; more about it can be found in this nice blog post. Inputs to the network are zero-padded sequences of word indices: fixed-length vectors in which the leading zeros are ignored and the non-zero entries are indices that uniquely identify words.
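For concreteness, here is a minimal sketch of a MaLSTM-style Siamese model in Keras, assuming the zero-padded index inputs described above; the vocabulary size, sequence length, and unit counts are placeholder values, not those from the article.

```python
# MaLSTM sketch: two inputs share one embedding and one LSTM, and
# similarity is exp(-L1 distance) between the two sentence vectors.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, embed_dim, max_len, lstm_units = 20000, 300, 30, 50  # assumed sizes

# Shared layers: both questions pass through the *same* weights.
embedding = layers.Embedding(vocab_size, embed_dim, mask_zero=True)  # zeros ignored
shared_lstm = layers.LSTM(lstm_units)

left_in = layers.Input(shape=(max_len,), dtype="int32")
right_in = layers.Input(shape=(max_len,), dtype="int32")
left_vec = shared_lstm(embedding(left_in))
right_vec = shared_lstm(embedding(right_in))

# Manhattan-distance similarity, mapped into (0, 1].
malstm_distance = layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([left_vec, right_vec])

model = Model([left_in, right_in], malstm_distance)
model.compile(loss="mean_squared_error", optimizer="adam")
```

Sharing the embedding and LSTM weights is what makes the network "Siamese": both sentences are encoded by the same function, so their vectors live in a comparable space.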


The Best of Big Data: New Articles Published This Month (June 2017)

@machinelearnbot

Google continues its efforts to make its latest big data research easy to implement. It will show you that training Generative Adversarial Networks is hard because you have to balance the generator against the discriminator, as the sketch below illustrates. Auditing is still one of the most difficult aspects of big data algorithms. This article makes good points about a research paper that uses Generative Adversarial Networks to generate new anti-cancer molecules.
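To see why that balance is delicate, here is a hedged, toy sketch of the alternating GAN training loop in PyTorch; the tiny generator, discriminator, and synthetic "real" data are stand-ins, not the setup from any article above. If either player improves much faster than the other, the gradients the slower player receives become uninformative and training stalls.

```python
# Toy GAN loop: alternate one discriminator step and one generator step.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 2))               # generator: noise -> sample
D = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())  # discriminator: sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(64, 2) + 3.0                   # stand-in "real" data

for step in range(200):
    # 1) Discriminator step: push real -> 1, fake -> 0.
    fake = G(torch.randn(64, 16)).detach()        # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make D label fakes as real.
    fake = G(torch.randn(64, 16))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```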


The Kaggle data science community is competing to improve airport security with AI

#artificialintelligence

With the help of the Kaggle data science community, the Department of Homeland Security (DHS) is hosting an online competition to build machine learning-powered tools that can augment agents, ideally making the entire system simultaneously more accurate and efficient. Kaggle, acquired by Google earlier this year, regularly hosts online competitions where data scientists compete for money by developing novel approaches to complex machine learning problems. The TSA is making its data set of images available to competitors so they can train on images of people carrying weapons; it put special effort into creating the data set that will ultimately be used to train the detectors.


The tips and tricks I used to succeed on Kaggle

#artificialintelligence

I learned machine learning by competing in Kaggle competitions. In my first ever Kaggle competition, the Photo Quality Prediction competition, I ended up in 50th place and had no idea what the top competitors had done differently from me. What changed the result between the Photo Quality competition and the Algorithmic Trading competition was learning and persistence. Because feature engineering is very problem-specific, domain knowledge helps a lot, as the illustration below suggests.
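As a hedged illustration of that last point, a few lines of pandas show how domain knowledge turns raw fields into informative derived features; the columns here are hypothetical, not from either competition mentioned above.

```python
# Hypothetical photo metadata: domain knowledge suggests *derived*
# quantities often predict better than the raw fields themselves.
import pandas as pd

df = pd.DataFrame({
    "upload_time": pd.to_datetime(["2017-01-03 09:15", "2017-01-07 22:40"]),
    "width": [800, 1024],
    "height": [600, 1024],
    "caption": ["sunset at the beach", "ok"],
})

df["aspect_ratio"] = df["width"] / df["height"]        # shape of the photo
df["is_weekend"] = df["upload_time"].dt.dayofweek >= 5 # when it was posted
df["caption_words"] = df["caption"].str.split().str.len()
print(df)
```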


Machine Learning with R: An Irresponsibly Fast Tutorial

#artificialintelligence

Now, let's compare the training set to the test set: the big difference is that the training set is labeled, but the test set is unlabeled. On Kaggle, your job is to make predictions on the unlabeled test set, and Kaggle scores you based on the percentage of passengers you correctly label. Training the model uses a pretty simple command in caret, but it's important to understand each piece of the syntax. Typically, you randomly split the training data into 5 equally sized pieces called "folds," so each fold contains 20% of the training data.
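The tutorial itself uses R's caret, but the same 5-fold cross-validation idea can be sketched in Python with scikit-learn; the synthetic data and random forest below are assumptions chosen only to keep the example self-contained.

```python
# 5-fold cross-validation: each fold (20% of the data) takes one turn
# as the held-out evaluation set while the model trains on the rest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)  # stand-in labeled data
model = RandomForestClassifier(random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())  # average skill and its variability
```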


Kaggle Master (IT Best Kept Secret Is Optimization)

#artificialintelligence

I entered that competition to learn about natural language processing (NLP), a domain entirely unknown to me at the start of the competition. The test set is split into a public test set and a private test set. If you use the test set repeatedly to choose your best model, then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Kaggle's public test set plays the role of the validation set, while the Kaggle private test set plays the role of the test set.
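A minimal sketch of those roles, using scikit-learn on synthetic data (an assumed setup, not the competition's): tune on the validation split, the way Kaggle's public leaderboard is used, then touch the test split exactly once, like the private leaderboard.

```python
# Three-way split: train / validation (model selection) / test (final report).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_valid, X_test, y_valid, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Select a hyperparameter on the validation split only.
best_score, best_c = -1.0, None
for c in (0.01, 0.1, 1.0, 10.0):
    score = LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train).score(X_valid, y_valid)
    if score > best_score:
        best_score, best_c = score, c

# The test split is used once; reusing it for selection would
# underestimate the true error, as the excerpt warns.
final = LogisticRegression(C=best_c, max_iter=1000).fit(X_train, y_train)
print(final.score(X_test, y_test))
```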


Stacking models for improved predictions: A case study for housing prices

@machinelearnbot

I will use three different regression methods to create predictions (XGBoost, Neural Networks, and Support Vector Regression) and stack them to produce a final prediction. I trained three level 1 models: XGBoost, a neural network, and support vector regression. Graphically, one can see that the circled data point is a prediction that is worse in XGBoost (which is the best model when trained on all the training data), but the neural network and support vector regression do better for that specific point. For example, below are the RMSE values on the holdout data (rmse1: XGBoost, rmse2: Neural Network, rmse3: Support Vector Regression) for 20 different random 10-fold splits.
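Here is a hedged sketch of that two-level setup using scikit-learn's StackingRegressor; GradientBoostingRegressor stands in for XGBoost so the example needs no extra packages, and the synthetic data is an assumption.

```python
# Stacking: three diverse level-1 models feed a simple level-2 meta-model
# that learns how to weight their out-of-fold predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=20, noise=10.0, random_state=0)

level1 = [
    ("gbm", GradientBoostingRegressor(random_state=0)),  # XGBoost stand-in
    ("nn", MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)),
    ("svr", SVR()),
]

# cv=10 mirrors the 10-fold scheme described above: the meta-model sees
# only held-out predictions, which guards against level-1 overfitting.
stack = StackingRegressor(estimators=level1, final_estimator=LinearRegression(), cv=10)
stack.fit(X, y)
print(stack.score(X, y))
```

The payoff is exactly the behavior described above: where one base model is weak on a particular point, the meta-model can lean on the others.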


Google's Kaggle now has 1 million developers signed up for AI competitions

#artificialintelligence

Kaggle, a website that hosts public competitions on machine learning tasks, announced Tuesday that it now has over 1 million users, a little more than 7 years after it launched. It's a sign of accelerating interest in the artificial intelligence and machine learning field, since Kaggle competitions give data scientists a way to test techniques on real-world tasks with the chance to compete for prize money. Since then, it has become a key part of the data science ecosystem. Kaggle has seen the rise of several cutting-edge machine learning techniques, including decision trees and deep neural networks.