Neural Networks



#artificialintelligence

"exploring the humanizing of AI by building a digital brain which can be used as a platform for autonomously animating hyper-realistic digital humans" "I think what will be increasingly important in the digital human space is ethics, as they relate both to the digital human and to the real-life people who may be impacted. From a digital human perspective, companies are essentially birthing entities which, in many cases, are expected to form meaningful connections and relationships with people. So how organizations treat these digital humans--including any decision to dispose of them if they are no longer deemed needed--will increasingly become important. On the flipside, entertainment organizations that are using digital humans run the risk of causing concern of replacing real humans […] and it will be important to clarify how and why digital humans are being used in lieu of the'real' thing." Excerpts from this article: The Virtual Beings Are Arriving Efficient deployment of deep learning models requires specialized neural network architectures to best fit different hardware platforms and efficiency constraints (defined as deployment scenarios).


The Illustrated GPT-2 (Visualizing Transformer Language Models)

#artificialintelligence

This year, we saw a dazzling application of machine learning. The OpenAI GPT-2 exhibited an impressive ability to write coherent and passionate essays that exceed what we anticipated current language models could produce. GPT-2 wasn't a particularly novel architecture – its architecture is very similar to the decoder-only transformer. GPT-2 was, however, a very large, transformer-based language model trained on a massive dataset. In this post, we'll look at the architecture that enabled the model to produce its results. We will go into the depths of its self-attention layer. My goal here is also to supplement my earlier post, The Illustrated Transformer, with more visuals explaining the inner workings of transformers and how they've evolved since the original paper. My hope is that this visual language will make it easier to explain later Transformer-based models as their inner workings continue to evolve.
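For readers who want to see the mechanism in code, here is a minimal sketch of the masked (causal) self-attention at the heart of a decoder-only transformer like GPT-2. The dimensions, weight matrices, and single-head setup are illustrative assumptions, not GPT-2's actual configuration.

```python
# Minimal sketch of masked (causal) self-attention, the core operation of a
# decoder-only transformer. Dimensions and weights are illustrative only.
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project to queries/keys/values
    scores = q @ k.T / (k.shape[-1] ** 0.5)           # scaled dot-product scores
    # Causal mask: each position may attend only to itself and earlier positions.
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)               # attention distribution per position
    return weights @ v                                 # weighted sum of value vectors

seq_len, d_model, d_head = 6, 16, 8
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))
out = causal_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([6, 8])
```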


Detecting Brain Lesions in Multiple Sclerosis Patients with Deep Learning

#artificialintelligence

One of the most promising applications of deep learning is image analysis (as part of computer vision), e.g. for image segmentation or classification. Whereas segmentation yields a probability distribution (also known as a mask) for each class per pixel (i.e. each pixel belongs to 1 of K classes), classification does so for the whole image (i.e. each image belongs to 1 of K classes). Such software solutions can be encountered nearly everywhere nowadays, for example in medical image analysis. In clinical research, where novel medications are tested, it is sometimes of interest whether a drug can change the condition of a tissue. Medical images are created by imaging techniques such as medical ultrasound, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), or even regular microscopes.
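As a quick illustration of the distinction drawn above (purely hypothetical shapes, not tied to any particular model), the two tasks differ only in where the per-class probability distribution lives:

```python
# Illustrative shapes only: a classifier emits one distribution per image,
# a segmentation model emits one distribution per pixel.
import torch

batch, K, H, W = 4, 3, 128, 128   # hypothetical: 3 classes, 128x128 images

classification_logits = torch.randn(batch, K)          # (batch, K)
segmentation_logits = torch.randn(batch, K, H, W)       # (batch, K, H, W)

# Softmax over the class dimension turns logits into per-image / per-pixel
# probability distributions (the per-pixel map is often called a mask).
classification_probs = classification_logits.softmax(dim=1)
segmentation_probs = segmentation_logits.softmax(dim=1)

print(classification_probs.sum(dim=1)[0])       # ~1.0 per image
print(segmentation_probs.sum(dim=1)[0, 0, 0])   # ~1.0 per pixel
```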


r/MachineLearning - [D] Benchmarking Transformers on both PyTorch and TensorFlow

#artificialintelligence

Since our recent release of Transformers (previously known as pytorch-pretrained-BERT and pytorch-transformers), we've been working on a comparison between the implementation of our models in PyTorch and in TensorFlow. We've released a detailed report where we benchmark each of the architectures hosted on our repository (BERT, GPT-2, DistilBERT, ...) in PyTorch with and without TorchScript, and in TensorFlow with and without XLA. We benchmark them for inference and the results are visible in the following spreadsheet. We would love to hear your thoughts on the process.
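As a rough idea of what such a comparison involves (a simplified sketch, not the harness used for the report; the checkpoint name, input, and loop count are arbitrary examples), one can time eager PyTorch against a TorchScript-traced version of the same model:

```python
# Rough sketch of timing PyTorch inference with and without TorchScript.
# Not the benchmarking harness from the report; values are illustrative.
import time
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", torchscript=True).eval()

inputs = tokenizer("Benchmarking transformer inference.", return_tensors="pt")
input_ids = inputs["input_ids"]

traced = torch.jit.trace(model, (input_ids,))   # TorchScript via tracing

def time_forward(fn, n=30):
    with torch.no_grad():
        fn(input_ids)                            # warm-up pass
        start = time.perf_counter()
        for _ in range(n):
            fn(input_ids)
        return (time.perf_counter() - start) / n

print(f"eager:       {time_forward(model):.4f} s/batch")
print(f"torchscript: {time_forward(traced):.4f} s/batch")
```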


A Primer on Machine Learning and Deep Learning for Educators

#artificialintelligence

The field of learning has evolved drastically over the years. With the advent of e-learning and learning management systems, the process of learning has gone beyond the traditional model of classroom training. Now it is possible for instructors and teachers to reach a wider, international audience through online courses hosted on cloud-based LMS platforms. Students can access these courses from anywhere in the world at any time, simply by logging into their accounts. Although e-learning is a complete and self-sustaining medium for imparting knowledge, it also works well in conjunction with traditional classroom training.


The US Army Wants to Reinvent Tank Warfare with AI

#artificialintelligence

Tank warfare isn't traditionally easy to predict. In July 1943, for instance, German military planners believed that their advance on the Russian city of Kursk would be over in ten days. In fact, that attempt lasted nearly two months and ultimately failed. Even the 2003 Battle of Baghdad, in which U.S. forces had air superiority, took a week. The U.S. Army has launched a new effort, dubbed Project Quarterback, to accelerate tank warfare by synchronizing battlefield data with the aid of artificial intelligence.


Knowing Your Neighbours: Machine Learning on Graphs

#artificialintelligence

We live in a connected world and generate a vast amount of connected data. Social networks, financial transaction systems, biological networks, transportation systems, and telecommunication networks are all examples. The paper citation network displayed in Figure 1 is another example of connected data. Connected data can be represented using a graph, a data structure commonly used in computer science. In this article, we will provide an introduction to the assorted types of connected data, what they represent, and the problems we can solve with them.
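To make the idea concrete, here is a small sketch of how a toy citation network (entirely made up for illustration) can be stored as an adjacency list and converted to an adjacency matrix, the representation that many graph ML methods build on:

```python
# Minimal sketch of representing connected data as a graph: an adjacency
# list for a tiny, hypothetical citation network, plus its adjacency matrix.
import numpy as np

# paper -> papers it cites (made-up example data)
citations = {
    "paper_A": ["paper_B", "paper_C"],
    "paper_B": ["paper_C"],
    "paper_C": [],
}

nodes = sorted(citations)
index = {name: i for i, name in enumerate(nodes)}

# Adjacency matrix: A[i, j] = 1 if paper i cites paper j.
A = np.zeros((len(nodes), len(nodes)), dtype=int)
for src, targets in citations.items():
    for dst in targets:
        A[index[src], index[dst]] = 1

print(nodes)
print(A)
# Many graph ML methods (e.g. graph convolutions) operate on this matrix
# together with a feature vector attached to each node.
```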


How to Become More Marketable as a Data Scientist

#artificialintelligence

This headline may seem a bit odd to you. Since data science has a huge impact on today's businesses, the demand for DS experts is growing. At the moment I'm writing this, there are 144,527 data science jobs on LinkedIn alone. Still, it's important to keep your finger on the pulse of the industry and stay aware of the fastest and most efficient data science solutions. To help you out, our data-obsessed CV Compiler team analyzed a number of vacancies and identified the data science employment trends of 2019.


A Deepfake Deep Dive into the Murky World of Digital Imitation

#artificialintelligence

About a year ago, top deepfake artist Hao Li came to a disturbing realization: deepfakes, i.e. the technique of human-image synthesis based on artificial intelligence (AI) to create fake content, are rapidly evolving. In fact, Li believes that in as soon as six months, deepfake videos will be completely undetectable. And that's spurring security and privacy concerns as the AI behind the technology becomes commercialized – and gets into the hands of malicious actors. Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications – from leading the charge in putting Paul Walker into Furious 7 after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature on the iPhone X.


The Simplest Neural Network: Understanding the non-linearity

#artificialintelligence

The first neural network you might want to build: one that learns to square numbers. Every time you want to learn about NNs, data science, or AI, you search Google, go through Reddit, and grab some code from GitHub. The MNIST dataset, GANs, convolution layers: they're everywhere. Everybody is talking about neural networks. You pick up your laptop, run the code, and voila! It works.
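In that spirit, here is a hedged sketch of what such a "simplest" network might look like: a one-hidden-layer model fit to y = x², where the non-linear activation is exactly what lets the model bend away from a straight line. The layer sizes, activation, and training settings are assumptions for illustration, not the article's exact code.

```python
# A one-hidden-layer network learning to square numbers. Without the
# non-linearity the model collapses to a linear map and cannot fit x^2.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)   # inputs in [-1, 1]
y = x ** 2                                     # targets: the squares

model = nn.Sequential(
    nn.Linear(1, 16),
    nn.Tanh(),        # the non-linearity; remove it and the fit stays linear
    nn.Linear(16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")
print(model(torch.tensor([[0.5]])).item())  # should be close to 0.25
```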