Neural Networks


Applied Deep Learning - Part 3: Autoencoders – Towards Data Science

#artificialintelligence

Welcome to Part 3 of the Applied Deep Learning series. Part 1 was a hands-on introduction to Artificial Neural Networks, covering both theory and application with a lot of code examples and visualizations. In Part 2 we applied deep learning to real-world datasets, covering the three most commonly encountered problems as case studies: binary classification, multiclass classification and regression. Now we will start diving into specific deep learning architectures, starting with the simplest: Autoencoders. The code for this article is available here as a Jupyter notebook; feel free to download it and try it out yourself.
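
To make the idea concrete before reading the article, here is a minimal sketch of a fully connected autoencoder in tf.keras. It is not taken from the linked notebook; the 784-dimensional input, 32-dimensional bottleneck and random stand-in data are illustrative assumptions.

```python
# Minimal fully connected autoencoder sketch (tf.keras); sizes and data are illustrative.
import numpy as np
from tensorflow import keras

input_dim, code_dim = 784, 32          # e.g. flattened 28x28 images, 32-d bottleneck

inputs = keras.Input(shape=(input_dim,))
encoded = keras.layers.Dense(code_dim, activation="relu")(inputs)       # encoder
decoded = keras.layers.Dense(input_dim, activation="sigmoid")(encoded)  # decoder

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# An autoencoder is trained to reconstruct its own input (random stand-in data here).
x = np.random.rand(256, input_dim).astype("float32")
autoencoder.fit(x, x, epochs=2, batch_size=64, verbose=0)
```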


Artificial Neural Networks: Some Misconceptions (Part 2) - DZone AI

#artificialintelligence

Let's continue learning about misconceptions around artificial neural networks. In Part 1, we discussed the simplest neural network architecture: the multi-layer perceptron. There are many different neural network architectures (far too many to mention here), and the performance of any neural network is a function of its architecture and weights. Many modern-day advances in machine learning come not from rethinking the way perceptrons and optimization algorithms work, but from being creative about how these components fit together. Below, I discuss some very interesting and creative neural network architectures that have been developed over time.
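
For reference, the multi-layer perceptron mentioned above is simply a stack of fully connected layers. A minimal sketch follows; the layer sizes and binary-classification setup are assumptions for illustration, not taken from the article.

```python
# Minimal multi-layer perceptron sketch (tf.keras); sizes are illustrative.
from tensorflow import keras

mlp = keras.Sequential([
    keras.Input(shape=(20,)),                     # 20 input features (assumed)
    keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),  # binary-classification output
])
mlp.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```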


March of the machines

#artificialintelligence

EXPERTS warn that "the substitution of machinery for human labour" may "render the population redundant". They worry that "the discovery of this mighty power" has come "before we knew how to employ it rightly". Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a "Terminator"-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the "machinery question".


Deep learning transforms smartphone microscopes into laboratory-grade devices

#artificialintelligence

Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. The technique improves the resolution and color details of smartphone images so much that they approach the quality of images from laboratory-grade microscopes. The advance could help bring high-quality medical diagnostics into resource-poor regions, where people otherwise do not have access to high-end diagnostic technologies. And the technique uses attachments that can be produced inexpensively with a 3-D printer, at less than $100 apiece, versus the thousands of dollars it would cost to buy laboratory-grade equipment that produces images of similar quality. Cameras on today's smartphones are designed to photograph people and scenery, not to produce high-resolution microscopic images.


Intel Editorial: One Simple Truth about Artificial Intelligence in Healthcare: It's Already Here

#artificialintelligence

SAN FRANCISCO--(BUSINESS WIRE)--The following is an opinion editorial provided by Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel Corporation. In the wide world of big data, artificial intelligence (AI) holds transformational promise. Everything from manufacturing to transportation to retail to education will be improved through its application. But nowhere is that potential more profound than in healthcare, where every one of us has a stake. What if we could predict the next big disease epidemic, and stop it before it kills?


Top Data Science & Machine Learning GitHub Repositories in March 2018

#artificialintelligence

Not only can you follow the work happening in different domains, but you can also collaborate on multiple open source projects. All tech companies, from Google to Facebook, upload their open source project code to GitHub so that the wider coding / ML community can benefit from it. But if you are too busy, or find following GitHub difficult, we bring you a monthly summary of the top repositories. You can keep yourself updated with the latest breakthroughs and even replicate the code on your own machine! This month's list includes some awesome libraries.


What is Artificial General Intelligence? And has Kimera Systems made a breakthrough?

#artificialintelligence

The field of artificial intelligence has spawned a vast range of subset fields and terms: machine learning, neural networks, deep learning and cognitive computing, to name but a few. However, here we will turn our attention to the specific term 'artificial general intelligence', thanks to the Portland-based AI company Kimera Systems' (momentous) claim to have launched the world's first ever example, called Nigel. The AGI Society defines artificial general intelligence as "an emerging field aiming at the building of 'thinking machines'; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)". AGI would, in theory, be able to perform any intellectual feat a human can. You can now perhaps see why a claim to have launched the world's first ever AGI might be a tad ambitious, to say the least.


lifeomic/sparkflow

@machinelearnbot

This is an implementation of TensorFlow on Spark. The goal of this library is to provide a simple, understandable interface for using TensorFlow on Spark. With SparkFlow, you can easily integrate your deep learning model with an ML Spark Pipeline. Underneath, SparkFlow uses a parameter server to train the TensorFlow network in a distributed manner. Through the API, the user can specify the style of training, whether that is Hogwild or asynchronous with locking.
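
The Hogwild versus locked-update distinction is about whether parallel workers synchronize access to the shared weights. The sketch below is not the SparkFlow API; it is a self-contained NumPy-and-threads illustration of the two asynchronous update styles on a toy linear-regression problem.

```python
# Hogwild-style (lock-free) vs. locked asynchronous SGD updates -- illustrative only,
# not the SparkFlow API.
import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
true_w = rng.normal(size=10)
y = X @ true_w

w = np.zeros(10)              # shared parameters, updated by all workers
lock = threading.Lock()
lr = 0.01

def worker(seed, use_lock, steps=2000):
    local_rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = local_rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]   # squared-error gradient for one sample
        if use_lock:
            with lock:                    # "async with locking"
                w[:] -= lr * grad
        else:
            w[:] -= lr * grad             # Hogwild: lock-free, races tolerated

threads = [threading.Thread(target=worker, args=(s, False)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter error:", np.linalg.norm(w - true_w))
```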


Generative Adversarial Networks -- A Deep Learning Architecture

#artificialintelligence

Generative Adversarial Nets, or GANs for short, are neural nets which were first introduced by Ian Goodfellow in 2014. The algorithm has been hailed as an important milestone in deep learning by many AI pioneers. Yann LeCun (the father of convolutional neural networks) has said that GANs are the coolest thing to have happened in deep learning in the last 20 years. Many variants of GANs have since emerged, such as DCGAN, Sequence-GAN and LSTM-GAN. GANs are composed of two networks competing with each other: a generator, which produces synthetic data, and a discriminator, which tries to distinguish real data from generated data.
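
To make the generator/discriminator split concrete, here is a hedged, minimal TensorFlow 2 sketch of the two-network setup. The 1-D Gaussian "real data", layer sizes and step count are assumptions for illustration, not taken from the article.

```python
# Minimal GAN sketch (TensorFlow 2 / tf.keras): a generator and a discriminator
# trained adversarially on a toy 1-D problem. All sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow import keras

latent_dim = 8

# Generator: maps random noise to a fake 1-D sample.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])

# Discriminator: outputs the probability that a sample is real.
discriminator = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

g_opt = keras.optimizers.Adam(1e-3)
d_opt = keras.optimizers.Adam(1e-3)
bce = keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real):
    noise = tf.random.normal((tf.shape(real)[0], latent_dim))
    # Discriminator step: push real samples toward label 1, generated fakes toward 0.
    with tf.GradientTape() as tape:
        fake = generator(noise, training=True)
        d_loss = (bce(tf.ones_like(real), discriminator(real, training=True)) +
                  bce(tf.zeros_like(real), discriminator(fake, training=True)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # Generator step: try to make the discriminator label fakes as real.
    with tf.GradientTape() as tape:
        fake = generator(noise, training=True)
        g_loss = bce(tf.ones_like(real), discriminator(fake, training=True))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

for step in range(200):
    real = np.random.normal(3.0, 1.0, size=(64, 1)).astype("float32")
    train_step(real)
```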


Neural Network based Startup Name Generator

@machinelearnbot

In this post I present a Python script that automatically generates suggestions for startup names. You feed it a text corpus with a certain theme, e.g. a Celtic text, and it then outputs similar-sounding suggestions. I applied the script to "normal" texts in English, German, and French, and then experimented with corpora of Celtic songs, Pokemon names, and J.R.R. Tolkien's Black Speech, the language of Mordor. I've made a few longer lists of sampled proposals available here. You can find the code, all the text corpora I've used, and some pre-computed models in my GitHub repo. Recently, an associate and I set out to found a software company, but most of the name ideas we came up with were already in use.
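
For a flavour of how such a generator can work, here is a tiny, self-contained character-level Markov chain that learns which letter tends to follow which from a word list and samples new, similar-sounding names. It illustrates the general idea only, not the author's script; the five-word "Celtic" corpus is a stand-in.

```python
# Tiny character-level Markov chain name generator -- an illustration of the idea,
# not the author's script. The mini "Celtic" corpus below is a stand-in.
import random
from collections import defaultdict

corpus = ["merlin", "morgana", "gwydion", "rhiannon", "taliesin"]

# Record which character follows which, with "^" and "$" as start/end markers.
transitions = defaultdict(list)
for word in corpus:
    chars = ["^"] + list(word) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample_name(max_len=12):
    name, current = "", "^"
    while len(name) < max_len:
        current = random.choice(transitions[current])
        if current == "$":          # end-of-word marker sampled: stop
            break
        name += current
    return name.capitalize()

random.seed(42)
print([sample_name() for _ in range(5)])
```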