

XenderLiu/Listen-Attend-and-Spell-Pytorch

#artificialintelligence

This is a PyTorch implementation of Listen, Attend and Spell (LAS), published at ICASSP 2016 (Student Paper Award). Please feel free to use or modify it; any bug report or improvement suggestion will be appreciated. This implementation achieves about 34% phoneme error rate on TIMIT's test set (using the original settings from the paper without hyperparameter tuning; models are stored in checkpoint/). It is not a remarkable score, but note that deep end-to-end ASR models without specially designed loss functions, such as LAS, require a larger corpus to achieve outstanding performance. Below is the result for the first sample in the TIMIT test set.
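The phoneme error rate reported above is simply the edit distance between the predicted and reference phoneme sequences, normalized by the reference length. A minimal sketch of that metric (illustrative only, not code from this repository):

    # Minimal sketch of phoneme error rate (PER): Levenshtein edit distance between
    # the predicted and reference phoneme sequences, divided by the reference length.
    # Illustrative only; not taken from the repository.

    def edit_distance(ref, hyp):
        # Standard dynamic-programming edit distance over phoneme tokens.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[len(ref)][len(hyp)]

    def phoneme_error_rate(ref, hyp):
        return edit_distance(ref, hyp) / max(len(ref), 1)

    # Example: one substitution and one deletion over a 5-phoneme reference -> PER = 0.4
    print(phoneme_error_rate(["sil", "dh", "ax", "k", "ae"], ["sil", "d", "ax", "k"]))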


OptiML: The Nitty Gritty - DZone AI

#artificialintelligence

One click and you're done, right? That's the promise of OptiML and automated Machine Learning in general, and to some extent, the promise is kept. No longer do you have to worry about fiddly, opaque parameters of Machine Learning algorithms, or which algorithm works best. We're going to do all of that for you, trying various things in a reasonably clever way until we're fairly sure we've got something that works well for your data. Sounds really exciting, but hold your horses.
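Under the hood, that "reasonably clever" search amounts to evaluating candidate algorithms and their hyperparameters with cross-validation and keeping the best performer. A minimal sketch of the general idea using scikit-learn (OptiML is a product feature, so this is only an analogy; the dataset and parameter grids are made up for illustration):

    # Hedged sketch: automated model selection by cross-validated search over
    # several algorithms and hyperparameter grids. An analogy to what tools like
    # OptiML automate, not their actual implementation.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = load_breast_cancer(return_X_y=True)

    candidates = [
        (LogisticRegression(max_iter=5000), {"C": [0.1, 1.0, 10.0]}),
        (RandomForestClassifier(random_state=0),
         {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    ]

    best_score, best_model = -1.0, None
    for estimator, grid in candidates:
        search = GridSearchCV(estimator, grid, cv=5)   # try each grid with 5-fold CV
        search.fit(X, y)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_

    print(best_model, best_score)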


Privacy and machine learning: two unexpected allies?

#artificialintelligence

In many applications of machine learning, such as machine learning for medical diagnosis, we would like to have machine learning algorithms that do not memorize sensitive information about the training set, such as the specific medical histories of individual patients. Differential privacy is a framework for measuring the privacy guarantees provided by an algorithm. Through the lens of differential privacy, we can design machine learning algorithms that responsibly train models on private data. Our works (with Martín Abadi, Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song and Kunal Talwar) on differential privacy for machine learning have made it very easy for machine learning researchers to contribute to privacy research--even without being an expert on the mathematics of differential privacy. In this blog post, we'll show you how to do it. The key is a family of algorithms called Private Aggregation of Teacher Ensembles (PATE). One of the great things about the PATE framework, besides its name, is that anyone who knows how to train a supervised ML model (such as a neural net) can now contribute to research on differential privacy for machine learning.
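At its core, PATE splits the sensitive data into disjoint partitions, trains one "teacher" model per partition, and answers label queries with a noisy vote over the teachers' predictions; a "student" model is then trained on those noisy labels. A minimal sketch of the noisy aggregation step (illustrative, not the authors' reference implementation):

    # Hedged sketch of PATE's noisy vote aggregation: each teacher, trained on a
    # disjoint slice of the private data, predicts a label for a query; Laplace
    # noise is added to the per-class vote counts before taking the argmax.
    import numpy as np

    def noisy_aggregate(teacher_predictions, num_classes, epsilon=0.1, rng=None):
        """teacher_predictions: per-teacher predicted labels for a single query."""
        rng = rng or np.random.default_rng()
        votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
        votes += rng.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)
        return int(np.argmax(votes))

    # Example: 10 teachers vote on a 3-class query; the student only ever sees
    # the noisy label, never the private training data.
    preds = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 0])
    print(noisy_aggregate(preds, num_classes=3, epsilon=0.1))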


Machine Learning Optimization Using Genetic Algorithm

@machinelearnbot

In this course, you will learn what hyperparameters are, what a Genetic Algorithm is, and what hyperparameter optimization is. You will apply a Genetic Algorithm to optimize the performance of Support Vector Machines and Multilayer Perceptron Neural Networks. Hyperparameter optimization will be done on two datasets: a regression dataset for predicting the cooling and heating loads of buildings, and a classification dataset for classifying emails as spam or non-spam. The SVM and MLP will first be applied to the datasets without optimization, and their results will then be compared with the results after optimization. By the end of this course, you will have learned how to code a Genetic Algorithm in Python and how to optimize your machine learning algorithms for maximal performance.
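For flavor, here is a minimal sketch of the technique: a tiny Genetic Algorithm that evolves an SVM's C and gamma, using cross-validated accuracy as the fitness (a toy illustration on a standard dataset, not the course material; crossover is omitted for brevity):

    # Hedged sketch: a tiny Genetic Algorithm that evolves (C, gamma) for an SVM,
    # with cross-validated accuracy as the fitness function. Toy illustration only.
    import random
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    def fitness(individual):
        C, gamma = individual
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    def mutate(individual):
        # Multiply each gene by a small random factor to explore nearby values.
        return tuple(g * random.uniform(0.5, 2.0) for g in individual)

    population = [(random.uniform(0.1, 10), random.uniform(0.001, 1)) for _ in range(8)]
    for generation in range(10):
        population.sort(key=fitness, reverse=True)
        parents = population[:4]                                       # selection
        children = [mutate(random.choice(parents)) for _ in range(4)]  # mutation
        population = parents + children

    best = max(population, key=fitness)
    print("best C=%.3f gamma=%.4f accuracy=%.3f" % (best[0], best[1], fitness(best)))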


GAN with Keras: Application to Image Deblurring – Sicara's blog

#artificialintelligence

We extract losses at two levels: at the end of the generator and at the end of the full model. The first is a perceptual loss computed directly on the generator's outputs; it compares the outputs of the first convolutions of VGG and ensures the GAN model is oriented towards the deblurring task. The second is the Wasserstein loss, computed on the outputs of the whole model.
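A minimal sketch of those two losses in Keras; the specific VGG layer and image size used here are assumptions for illustration, not necessarily the article's exact configuration:

    # Hedged sketch of the two losses described above: a perceptual loss comparing
    # early VGG16 feature maps of the sharp and generated images, and a Wasserstein
    # loss for the full model's output. The chosen layer and input size are assumptions.
    import tensorflow.keras.backend as K
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.models import Model

    vgg = VGG16(include_top=False, weights="imagenet", input_shape=(256, 256, 3))
    feature_extractor = Model(inputs=vgg.input,
                              outputs=vgg.get_layer("block3_conv3").output)
    feature_extractor.trainable = False

    def perceptual_loss(y_true, y_pred):
        # Mean squared error between VGG feature maps of sharp and deblurred images.
        return K.mean(K.square(feature_extractor(y_true) - feature_extractor(y_pred)))

    def wasserstein_loss(y_true, y_pred):
        # Standard Wasserstein critic loss: labels are +1 / -1, output is unbounded.
        return K.mean(y_true * y_pred)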


IBM Researchers Explain Machine Learning Models By Exploring What Isn't There

#artificialintelligence

In "The Adventure of the Silver Blaze," Sherlock Holmes famously solved a case not by discovering a clue - but by noting its absence. In that case, it was a dog that didn't bark, and that lack of barking helped identify the culprit. The fact that humans are able to make deductions and learn from something that's missing isn't something that's yet been widely applied to machine learning, but that's something that a team of researchers a IBM want to change. In a paper published earlier this year, the team outlined a means of using missing results to get a better understanding of how machine learning models work. "One of the pitfalls of deep learning is that it's more or less black box," explained Amit Dhurandhar, one of the members of the research team.


Feeding Future Generations With AI - DZone AI

#artificialintelligence

As the world population grows, crop production needs to keep up. Can we use Artificial Intelligence for agriculture? Right now, AI is being (and will be) used for so many things. If you follow journals, blogs, publications, and more, you can see people solving problems from speech recognition to breast cancer detection and much more. So, why not try to solve a problem for the agricultural space?


Approaching Machine learning problem – Bhushan Shewale – Medium

#artificialintelligence

An average data scientist deals with lots of data daily; around 60–70% of that time is spent on data cleaning, data munging, and converting the data into a suitable form so that a machine learning model can be applied to it. This blog focuses on applying machine learning models, including the preprocessing steps. Many data science enthusiasts ask me how to solve a machine learning problem. Before applying machine learning models, the data must be converted to a tabular form. There are two types of variables: numerical and categorical.
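A minimal sketch of that tabular preprocessing with pandas and scikit-learn, scaling numerical variables and one-hot encoding categorical ones (the column names and data are hypothetical, only for illustration):

    # Hedged sketch: converting a raw table into model-ready features by scaling
    # numerical variables and one-hot encoding categorical ones.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.DataFrame({
        "age": [25, 32, 47, 51],                       # numerical variable
        "income": [40000, 52000, 81000, 60000],        # numerical variable
        "city": ["Pune", "Mumbai", "Pune", "Delhi"],   # categorical variable
    })

    preprocess = ColumnTransformer([
        ("num", StandardScaler(), ["age", "income"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
    ])

    X = preprocess.fit_transform(df)   # ready to feed into a machine learning model
    print(X.shape)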


How is artificial intelligence changing science?

#artificialintelligence

Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. In a Q&A timed with the first Intel AI DevCon event, the Intel vice president and architecture general manager for its Artificial Intelligence Products Group discussed his role at the intersection of science--computing's most demanding customer--and AI, how scientists should approach AI and why it is the most dynamic and exciting opportunity he has faced. How is AI changing science? Scientific exploration is going through a transition that, in the last 100 years, might only be compared to what happened in the '50s and '60s, moving to data and large data systems. In the '60s, the amount of data being gathered was so large that the frontrunners were not those with the finest instruments, but rather those able to analyze the data that was gathered in any scientific area, whether it was climate, seismology, biology, pharmaceuticals, the exploration of new medicine, and so on.


Why Digital Transformation Needs To Maintain A Human Touch

#artificialintelligence

Digital Transformation continues to gain velocity. But even as technology takes over many tasks, it's important that a human touch is retained. This thought-provoking article by Bryan Kramer is another in our "Great Articles You May Have Missed" series. A few years ago, experts were trumpeting that the future is mobile, and they weren't wrong. Some of the world's most successful new apps and business models are mobile-based -- just look at Uber, Instagram, and Snapchat.