

Evaluating Data Science Projects: A Case Study Critique

@machinelearnbot

By convention, the rare class is usually positive, so this means the True Positive (TP) rate is 0.78 and the False Negative rate (1 − True Positive rate) is 0.22. The authors don't report a False Positive rate (or a True Negative rate, from which we could calculate it) directly, but the Non-Large-Loss recognition rate is 0.79, so the True Negative rate is 0.79 and the False Positive (FP) rate is 0.21. This means that, using their neural network, they must process about 28 uninteresting Non-Large-Loss customers (false alarms) for each Large-Loss customer they find.
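The roughly 28-to-1 figure follows from the class imbalance, not from the rates alone: false alarms per detected positive is (FP rate × fraction of negatives) / (TP rate × fraction of positives). A minimal sketch, assuming a Large-Loss prevalence of about 1% (an assumption; the excerpt does not state the base rate):

```python
def false_alarms_per_hit(tpr, fpr, prevalence):
    # Expected false alarms per true positive detected,
    # given the rates and the positive-class prevalence.
    positives = prevalence          # fraction of Large-Loss customers
    negatives = 1.0 - prevalence    # fraction of Non-Large-Loss customers
    return (fpr * negatives) / (tpr * positives)

# With the assumed ~1% prevalence this lands near the article's ratio.
ratio = false_alarms_per_hit(tpr=0.78, fpr=0.21, prevalence=0.01)
print(round(ratio))  # prints 27
```

The exact ratio is sensitive to the prevalence assumption; the point is that even good-looking rates imply many false alarms when the target class is rare.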


AI Startup Invents Trick For Robots To More Efficiently Teach Themselves Complex Tasks

#artificialintelligence

The trick -- the company is calling it "concept networks" -- massively increases the efficiency of reinforcement learning. In a recently published paper, Bonsai's AI researchers describe how concept networks function by breaking an objective out into distinct problem areas. To teach a robot how to pick up and stack a block, for example, Bonsai has broken the task out into five concepts -- reach, orient, grasp, move and stack. DeepMind's paper describing its reinforcement learning approach takes on a similar grasping and stacking task with a robotic arm, but Bonsai's concept networks make for a far more efficient system.
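The decomposition idea can be sketched as a set of sub-policies with a selector that picks the active concept from the state. Everything below is illustrative: the state fields, action strings and hand-written selection rules are hypothetical stand-ins (in Bonsai's system the concepts and the selector are learned):

```python
# Hypothetical sketch of a concept network: one sub-policy per concept,
# and a selector that activates the first concept whose goal is unmet.
def reach(state):  return "move_arm_toward_block"
def orient(state): return "rotate_gripper"
def grasp(state):  return "close_gripper"
def move(state):   return "lift_and_carry"
def stack(state):  return "lower_onto_target"

CONCEPTS = [
    ("reach",  reach,  lambda s: not s["near_block"]),
    ("orient", orient, lambda s: not s["aligned"]),
    ("grasp",  grasp,  lambda s: not s["holding"]),
    ("move",   move,   lambda s: not s["over_target"]),
    ("stack",  stack,  lambda s: True),
]

def act(state):
    # First concept whose precondition is unmet becomes active.
    for name, policy, unmet in CONCEPTS:
        if unmet(state):
            return name, policy(state)

state = {"near_block": False, "aligned": False, "holding": False, "over_target": False}
print(act(state))  # ('reach', 'move_arm_toward_block')
```

Training each narrow sub-policy separately is what makes the approach sample-efficient: each concept faces a much smaller problem than the end-to-end task.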


pollenating_insects_3 Description - RAMP

#artificialintelligence

In this RAMP, we propose a dataset of pictures of insects from different species, gathered from the SPIPOLL project and labeled by specialists. The dataset contains 72,939 labeled pictures of insects from 403 different insect species. For each submission, you will have to provide an image preprocessor (to standardize, resize, crop and augment images) and a batch classifier, which will fit a training set and predict the classes (species) on a test set. You can also rotate the images or apply other data augmentation tricks (search for "convolutional nets data augmentation").
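The augmentations mentioned (rotations, flips, crops) can be sketched in plain Python on images stored as lists of rows; a real submission would more likely use a library such as NumPy or Keras, but the transforms themselves are this simple:

```python
import random

def rotate90(img):
    # Rotate a 2D image (list of rows) 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    # Mirror the image left-right.
    return [row[::-1] for row in img]

def random_crop(img, size, rng=random):
    # Take a random size x size patch (assumes img is at least size x size).
    h, w = len(img), len(img[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in img[top:top + size]]

def augment(img, rng=random):
    # Apply a random flip and a random number of quarter-turns.
    if rng.random() < 0.5:
        img = hflip(img)
    for _ in range(rng.randrange(4)):
        img = rotate90(img)
    return img

img = [[1, 2], [3, 4]]
print(rotate90(img))  # [[3, 1], [4, 2]]
```

Applying a fresh random augmentation each epoch effectively enlarges the training set, which matters when 403 classes share only ~73k images.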


AI can reconstruct a picture of your face into these weird and wonderful 3D images

Mashable

Try it yourself here, in the online demo of their paper entitled "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression." As current systems work, multiple facial images from various angles are fed to the system, which then needs to address several challenges, such as "establishing dense correspondences across large facial poses, expressions, and non-uniform illumination," the researchers said. To overcome these challenges, Jackson and his colleagues trained a Convolutional Neural Network (CNN) on a large dataset of 2D pictures and 3D facial models. The result is that the CNN can now reconstruct the whole 3D facial geometry from a single, previously unseen 2D image.


[Discussion] Solving a Rubik's Cube using a Simple ConvNet • r/MachineLearning

@machinelearnbot

Plenty of efficient algorithms exist to solve a Rubik's Cube. I was curious to find out whether a neural net could learn to solve a cube in the most "efficient" way, i.e. in 20 moves or fewer (God's Number). I used a two-layer neural net: one convolutional layer and one feedforward layer. For the training set, I generated scrambles of 10 moves or fewer from the solved state at random during training, with the corresponding solutions as labels.
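Generating training pairs this way — a random scramble, with the reversed inverse sequence as the label — can be sketched as below. The move encoding is a hypothetical standard face-turn notation; the post doesn't specify its exact representation:

```python
import random

# Face turns in standard notation; "'" marks the counter-clockwise inverse.
MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]
INVERSE = {m: (m[:-1] if m.endswith("'") else m + "'") for m in MOVES}

def random_training_pair(max_len=10, rng=random):
    # Scramble the solved cube with up to max_len random moves; the label
    # is the inverse sequence in reverse order, which restores the cube.
    n = rng.randrange(1, max_len + 1)
    scramble = [rng.choice(MOVES) for _ in range(n)]
    solution = [INVERSE[m] for m in reversed(scramble)]
    return scramble, solution

scramble, solution = random_training_pair()
print(len(scramble) == len(solution))  # True
```

Note the caveat this implies: the inverse of a short random scramble is rarely the *shortest* solution, so labels generated this way only approximate optimal play.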


How artificial intelligence makes datacenters smarter

@machinelearnbot

Vigilent, a company that uses IoT, machine learning and prescriptive analytics in mission-critical environments, reduces datacenter cooling costs by employing real-time monitoring and machine-learning software to match cooling output to the exact cooling need. AI also reduces opex: off-the-shelf smart management and monitoring solutions can be embedded with AI systems to reduce and control datacenter operating expenses. Google reduced overall datacenter power utilization by 15 percent with a custom AI management and monitoring solution that employs machine learning to control about 120 datacenter variables, from fan speeds to windows. Another company, Wave2Wave, has developed a rack-mounted robot called ROME (Robotic Optical Switch for Datacenters) that makes physical optical connections in a few seconds.
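At its simplest, "match cooling to need" is closed-loop control: measure the thermal load and steer cooling output toward it instead of running units at fixed worst-case capacity. A toy proportional-control sketch — the variable names, gain and limits are illustrative, not Vigilent's or Google's actual algorithms:

```python
def cooling_step(cooling_kw, measured_load_kw, gain=0.5, max_kw=100.0):
    # Move cooling output a fraction (gain) of the way toward the
    # measured heat load, clamped to the unit's physical range.
    error = measured_load_kw - cooling_kw
    new = cooling_kw + gain * error
    return max(0.0, min(max_kw, new))

cooling = 100.0  # start at full (worst-case) capacity
for load in [60.0, 60.0, 60.0, 60.0, 60.0]:
    cooling = cooling_step(cooling, load)
print(cooling)  # converges toward the 60 kW load
```

The ML systems described in the article go further — predicting load from many sensors and optimizing dozens of setpoints at once — but the payoff is the same: cooling tracks demand rather than the worst case.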


Data Scientist versus Data Architect

@machinelearnbot

It's not only difficult to maintain: by using hash tables, you're effectively re-writing code to do what a database platform already does, but much better, since the database platform's binary code is optimised for exactly this sort of operation, whereas Python isn't. By working this way, you're also guilty of pulling all of the data from every data set you use off a server (lots of I/O) and across a network (lots of bandwidth) to do something on a low-spec laptop rather than a high-spec server (lots of time penalties). You're effectively passing your own learning pain on to your employer as a cost, because you insist on using a tool you're familiar with (Python) rather than the most suitable tool for the job (SQL), which I would argue every data scientist should understand reasonably well in order to achieve precisely these goals. I'd certainly not employ someone who claimed to be an operational data scientist if they could not write basic SQL (3-4 way joins, filtering, aggregates).
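The "basic SQL" bar set here — multi-way joins, filtering, aggregates — looks like the following. The schema and data are made up for illustration; Python's built-in sqlite3 is used only so the sketch is self-contained:

```python
import sqlite3

# Illustrative schema: let the database do the join + aggregate
# instead of re-implementing it with hash tables in Python.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
CREATE TABLE items     (order_id INTEGER, sku TEXT);
INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
INSERT INTO orders    VALUES (10, 1, 99.0), (11, 2, 250.0);
INSERT INTO items     VALUES (10, 'A'), (10, 'B'), (11, 'A');
""")

# A 3-way join with filtering and an aggregate:
# items sold per region, for orders over 50.
rows = con.execute("""
    SELECT c.region, COUNT(i.sku) AS n_items
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    JOIN items  i ON i.order_id = o.id
    WHERE o.total > 50
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
print(rows)  # [('EU', 2), ('US', 1)]
```

Run against a production database, only the small aggregated result crosses the network — which is exactly the I/O and bandwidth argument the author is making.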


At a Glance - Badnets - Disruption Hub

#artificialintelligence

Often, the complexity and cost of training neural networks leads companies to outsource development to larger tech firms. Researchers have shown that an outsourced trainer can embed a hidden backdoor: a "badnet" behaves normally on most inputs but misbehaves on inputs containing a specific trigger. Now that facial recognition systems are becoming more common, imagine if backdoored software failed to identify a person it should flag, or falsely identified an innocent one. Fortunately, by working out exactly how to confuse AI, researchers are getting closer to finding solutions. Instead of discouraging businesses from taking an open approach to software development, badnets should motivate them to enter more transparent relationships.


Deep Learning: Convolutional Neural Networks in Python

@machinelearnbot

You've already written deep neural networks in Theano and TensorFlow, and you know how to run code on the GPU. This course is all about using deep learning for computer vision with convolutional neural networks, and we will show that convolutional neural networks, or CNNs, are up to the challenge! We will test their performance and show how CNNs written in both Theano and TensorFlow can outperform the accuracy of a plain neural network on the StreetView House Numbers dataset.


Deep Learning A-Z : Hands-On Artificial Neural Networks

@machinelearnbot

That's what we mean when we say that in this course we teach you the most cutting-edge Deep Learning models and techniques. Thanks to Keras, we can create powerful and complex Deep Learning models with only a few lines of code. By applying your Deep Learning model, the bank may significantly reduce customer churn. We are extremely excited to include these cutting-edge methods in our course!