neural network


Evading Machine Learning Malware Classifiers

#artificialintelligence

This was a white-box competition, meaning I had full access to all model parameters and source code. Therefore, the first thing to do was crack open the models and see what was going on under the hood. The first model is a neural network trained on the raw bytes of Windows executables. MalConv is implemented in PyTorch, and if you're already familiar with neural networks the code is relatively simple and straightforward: files are passed to MalConv as a sequence of integers representing the bytes of the file (0–255), an embedding layer maps each byte to a vector, and the sequence of vectors can then be processed by additional neural network layers, as in the sketch below.
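For a concrete picture, here is a minimal sketch of a MalConv-style byte classifier in PyTorch. The layer sizes, window length, and gated-convolution details follow typical published descriptions of MalConv and may not match the exact competition model.

```python
import torch
import torch.nn as nn

class MalConvSketch(nn.Module):
    """Minimal MalConv-style model: embed raw bytes, apply a gated 1-D
    convolution, global-max-pool over positions, then classify."""
    def __init__(self, vocab_size=257, embed_dim=8, channels=128, window=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # one row per byte value (+ padding index)
        self.conv = nn.Conv1d(embed_dim, channels, window, stride=window)
        self.gate = nn.Conv1d(embed_dim, channels, window, stride=window)
        self.fc1 = nn.Linear(channels, 128)
        self.fc2 = nn.Linear(128, 1)

    def forward(self, x):
        # x: (batch, file_length) integers in [0, 255], one per file byte
        emb = self.embed(x).transpose(1, 2)                     # -> (batch, embed_dim, length)
        h = self.conv(emb) * torch.sigmoid(self.gate(emb))      # gated convolution over byte windows
        h = torch.max(h, dim=2).values                          # global max pool over positions
        return self.fc2(torch.relu(self.fc1(h)))                # malware score (logit)

# Usage: score a batch of two synthetic "files" of 4096 random bytes each
scores = MalConvSketch()(torch.randint(0, 256, (2, 4096)))
```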


Deep learning enables real-time imaging around corners: Detailed, fast imaging of hidden objects could help self-driving cars detect hazards

#artificialintelligence

"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said research team leader Christopher A. Metzler from Stanford University and Rice University. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner." In Optica, The Optical Society's journal for high-impact research, Metzler and colleagues from Princeton University, Southern Methodist University, and Rice University report that the new system can distinguish submillimeter details of a hidden object from 1 meter away. The system is designed to image small objects at very high resolutions but can be combined with other imaging systems that produce low-resolution room-sized reconstructions. "Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," said co-author Felix Heide from Princeton University.


Deep learning vs. machine learning: Understand the differences

#artificialintelligence

Machine learning and deep learning are both forms of artificial intelligence. You can also say, correctly, that deep learning is a specific kind of machine learning. Both machine learning and deep learning start with training and test data and a model and go through an optimization process to find the weights that make the model best fit the data. Both can handle numeric (regression) and non-numeric (classification) problems, although there are several application areas, such as object recognition and language translation, where deep learning models tend to produce better fits than machine learning models. Machine learning algorithms are often divided into supervised (the training data are tagged with the answers) and unsupervised (any labels that may exist are not shown to the training algorithm).
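To make the overlap concrete, the illustrative sketch below fits a classic machine learning model and a small neural network to the same labeled (supervised) data; both go through an optimization process to find the weights that best fit the data, and only the model family differs. The dataset and hyperparameters are arbitrary examples, not taken from the article.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic supervised classification data: features X, labels y
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A classic machine learning model and a small neural network, trained the same way
classic = LogisticRegression(max_iter=1000).fit(X_train, y_train)
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X_train, y_train)

print("logistic regression accuracy:", classic.score(X_test, y_test))
print("neural network accuracy:     ", deep.score(X_test, y_test))
```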


Neural Architecture and AutoML Technology - Analytics Insight

#artificialintelligence

Deep learning offers the promise of bypassing manual feature engineering by learning representations jointly with statistical models in an end-to-end fashion. However, neural network architectures themselves are typically designed by specialists in a painstaking, ad hoc fashion. Neural architecture search (NAS) has been touted as the way forward for easing this burden by automatically identifying architectures that outperform hand-designed ones. Machine learning has produced major achievements across diverse fields in recent years: areas such as financial services, healthcare, retail, and transportation have adopted machine learning systems in some form, and the results have been promising.
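As a toy illustration of the idea (not any specific NAS system discussed in the article), the sketch below runs a random search over a tiny, made-up architecture space. A real NAS method would train and validate each candidate, or use a weight-sharing supernet, instead of the placeholder scoring function used here.

```python
import random

# Hypothetical search space of architecture choices
search_space = {
    "num_layers": [2, 4, 8],
    "hidden_units": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def evaluate(arch):
    # Placeholder: a real NAS loop trains the candidate model and
    # returns its validation accuracy.
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):
    # Sample a candidate architecture and keep the best one seen so far
    arch = {name: random.choice(options) for name, options in search_space.items()}
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("best architecture found:", best_arch)
```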


Efficient Computing for Deep Learning, Robotics, and AI (Vivienne Sze) MIT Deep Learning Series

#artificialintelligence

OUTLINE:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks


PyTorch 1.4 adds experimental Java bindings and more

#artificialintelligence

PyTorch 1.4 has been released, and the PyTorch domain libraries have been updated along with it. The popular open source machine learning framework has some experimental features on board, so let's take a closer look. PyTorch Mobile was first introduced in PyTorch 1.3 as an experimental release. It should provide an "end-to-end workflow from Python to deployment on iOS and Android," as the website states. In the latest release, PyTorch Mobile is still experimental but has received additional features.
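As a hedged illustration of the Python end of that workflow, the sketch below converts a torchvision model to TorchScript and saves it so it can be loaded from an iOS or Android app. The model choice and file name are arbitrary examples, not taken from the release notes.

```python
import torch
import torchvision

# Build a model to export (random weights are fine for demonstrating the workflow)
model = torchvision.models.mobilenet_v2().eval()
example_input = torch.rand(1, 3, 224, 224)   # example input used for tracing

# Convert to TorchScript via tracing, then save the serialized module;
# the saved file is what the mobile app bundles and loads on-device.
scripted = torch.jit.trace(model, example_input)
scripted.save("mobilenet_v2_mobile.pt")
```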


Google DeepMind's 'Sideways' takes a page from computer architecture - ZDNet

#artificialintelligence

Increasingly, machine learning forms of artificial intelligence are contending with the limits of computing hardware, and it's causing scientists to rethink how they design neural networks. That was clear in last week's research offering from Google, called Reformer, which aimed to fit a natural language program into a single graphics processing chip instead of eight. And this week brought another offering from Google focused on efficiency, something called Sideways. With this invention, scientists have borrowed a page from computer architecture, creating a pipeline that gets more work done at every moment. During training, most neural networks use a forward pass, a transmission of a signal through the layers of the network, followed by backpropagation, a backward pass through the same layers in reverse, to gradually modify the weights until they are just right.
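For reference, the sketch below shows the conventional, strictly sequential version of that training step in PyTorch: a forward pass, a backward pass that backpropagates gradients, and a weight update. It is purely illustrative; the article describes Sideways as pipelining this process so more work gets done at each moment, which this sketch does not attempt.

```python
import torch
import torch.nn as nn

# A small network, an optimizer, and a loss function
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32)                  # a batch of inputs
y = torch.randint(0, 10, (16,))          # target classes

logits = model(x)                        # forward pass through the layers
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()                          # backward pass: backpropagate gradients
optimizer.step()                         # adjust the weights
```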


Building a Lie Detector for Images

#artificialintelligence

The Internet is full of fun fake images -- from flying sharks and cows on cars to a dizzying variety of celebrity mashups. Hyperrealistic image and video fakes generated by convolutional neural networks (CNNs), however, are no laughing matter -- in fact they can be downright dangerous. Deepfake porn reared its ugly head in 2018, fake political speeches by world leaders have cast doubt on news sources, and during the recent Australian bushfires manipulated images misled people regarding the location and size of fires. Fake images and videos are giving AI a black eye -- but how can the machine learning community fight back? A new paper from UC Berkeley and Adobe researchers declares war on fake images.


Human insight remains essential to beat the bias of algorithms

#artificialintelligence

When it comes to bias and artificial intelligence, there is a common belief that algorithms are only as good as the numbers plugged into them. But because concern about algorithmic bias has been concentrated entirely on data, we have ignored two aspects of this problem: the deep limitations of existing algorithms and, more importantly, the role of human problem solvers. Powerful as they may be, most of our algorithms only mine correlational relationships without understanding anything about them. My research has found that massive data sets on jobs, education and loans contain more spurious correlations than meaningful causal relationships. It is ludicrous to assume these algorithms will solve problems that we do not understand.


Artificial Intelligence Books to Read in 2020 - KDnuggets

#artificialintelligence

Artificial intelligence (AI) is being talked about everywhere these days, and it impacts our lives whether we realize it or not. This will continue to increase, so now's a great time to learn more about the subject. Here are some AI-related books that I've read and recommend for you to add to your 2020 reading list! A shameless plug given that I wrote this book, although I believe it will provide significant value to a wide-ranging audience. It's becoming imperative for business leaders to understand artificial intelligence and machine learning at an appropriate level in order to build great data-centric products and solutions.