Deep Learning

Three Benefits to Deploying Artificial Intelligence in Radiology Workflows


Artificial Intelligence (AI) has the capability to provide radiologists with tools that improve their productivity, decision making, and effectiveness, leading to quicker diagnoses and improved patient outcomes. It will initially be deployed as a diverse collection of assistive tools that augment, quantify, and stratify the information available to the diagnostician, offering a major opportunity to enhance the radiology reading. It will improve access to medical record information and give radiologists more time to think about what is going on with patients, diagnose more complex cases, collaborate with patient care teams, and perform more invasive procedures. Deep learning algorithms in particular will form the foundation for decision support, workflow support, and diagnostic capabilities. These algorithms give software the ability to "learn" by example how to execute a task, then carry out that task automatically and interpret new data.

Liveness Detection with OpenCV - PyImageSearch


In this tutorial, you will learn how to perform liveness detection with OpenCV. You will create a liveness detector capable of spotting fake faces and performing anti-face spoofing in face recognition systems. How do I spot real versus fake faces? Consider what would happen if a nefarious user tried to purposely circumvent your face recognition system. Such a user could try to hold up a photo of another person. Maybe they even have a photo or video on their smartphone that they could hold up to the camera responsible for performing face recognition (such as in the image at the top of this post).

AI pioneer Sejnowski says it's all about the gradient


At the end of the concrete plaza that forms the courtyard of the Salk Institute in La Jolla, California, there is a three-hundred-fifty-foot drop to the Pacific Ocean. Sometimes people explore that drop from high up in a paraglider. If they're less adventuresome, they can walk down a meandering trail that hugs the cliff all the way to the bottom. It's a good spot from which to reflect on the mathematical tool called "stochastic gradient descent," a technique at the heart of machine learning, today's dominant form of artificial intelligence. Terry Sejnowski has been exploring gradient descent for decades.
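Stochastic gradient descent can be demonstrated in a few lines. The sketch below is a minimal illustration (not code from the article; the toy data and learning rate are invented for this example): it fits a line to points drawn from y = 2x + 1, and each per-sample update nudges the parameters a small step down the error slope, much like the meandering trail down the cliff.

```python
import random

# Toy data drawn from y = 2x + 1 (invented for illustration)
data = [(x, 2 * x + 1) for x in [i / 10 for i in range(-20, 21)]]

w, b = 0.0, 0.0          # parameters to learn
lr = 0.05                # learning rate: how big a step to take downhill

random.seed(0)
for epoch in range(200):
    random.shuffle(data)             # "stochastic": visit samples in random order
    for x, y in data:
        pred = w * x + b
        err = pred - y               # gradient of 0.5 * (pred - y)^2 w.r.t. pred
        w -= lr * err * x            # step each parameter against its gradient
        b -= lr * err

print(w, b)   # should approach w = 2, b = 1
```

The "stochastic" part is the random, one-sample-at-a-time updates; deep learning frameworks apply the same idea to millions of parameters at once.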

Google Maps street view images can be used to detect signs of inequality

Daily Mail

Spotting inequality can now be done by a computer using a pre-existing, vast and easily available database of images: Google Maps street view. More than half a million pictures from this catalogue of 'on-the-ground' photos were fed into a deep learning algorithm that picked out signs of inequality in London. Data were collected across 156,581 different postcodes, and the method was then applied to Leeds, Birmingham and Manchester. (Pictured: overview of the street images and outcome data used in the analysis.) Esra Suel and colleagues from Imperial College London used deep learning to train a computer programme designed to detect signs of austerity.

We could soon have ROBOTS cleaning our messy bedrooms

Daily Mail

A Japanese tech start-up is using deep learning to teach a pair of machines a job that is simple for a human but surprisingly tricky for a robot: cleaning a bedroom. Though it may seem a basic, albeit tedious, task for a human, robots find this type of job surprisingly complicated, and the start-up is using deep learning to teach AI how to deal with the disorder and chaos of a child's room. Deep learning is where algorithms, inspired by the human brain, learn from large amounts of data so they are able to perform complex tasks. Some tasks, like welding car chassis in exactly the same way day after day, are easy for robots, as the work is repetitive and the machines do not suffer from boredom in the way disgruntled employees do.

Vivienne Sze wins Edgerton Faculty Award

MIT News

Vivienne Sze, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), has received the 2018-2019 Harold E. Edgerton Faculty Achievement Award. The award, announced at today's MIT faculty meeting, commends Sze for "her seminal and highly regarded contributions in the critical areas of deep learning and low-power video coding, and for her educational successes and passion in championing women and under-represented minorities in her field." Sze's research involves the co-design of energy-aware signal processing algorithms and low-power circuit, architecture, and systems for a broad set of applications, including machine learning, computer vision, robotics, image processing, and video coding. She is currently working on projects focusing on autonomous navigation and embedded artificial intelligence (AI) for health-monitoring applications. "In the domain of deep learning, [Sze] created the Eyeriss chip for accelerating deep learning algorithms, building a flexible architecture to handle different convolutional shapes," the Edgerton Faculty Award selection committee said in announcing its decision.

Can science writing be automated?

MIT News

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

A Deep Dive into Deep Learning


On Wednesday, March 27, the 2018 Turing Award in computing was given to Yoshua Bengio, Geoffrey Hinton and Yann LeCun for their work on deep learning. Deep learning by complex neural networks lies behind the applications that are finally bringing artificial intelligence out of the realm of science fiction into reality. Voice recognition allows you to talk to your robot devices. Image recognition is the key to self-driving cars. But what, exactly, is deep learning?

A Gentle Introduction to Convolutional Layers for Deep Learning Neural Networks


Convolution and the convolutional layer are the major building blocks used in convolutional neural networks. A convolution is the simple application of a filter to an input that results in an activation. Repeated application of the same filter to an input results in a map of activations called a feature map, indicating the locations and strength of a detected feature in an input, such as an image. The innovation of convolutional neural networks is the ability to automatically learn a large number of filters in parallel specific to a training dataset under the constraints of a specific predictive modeling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images. In this tutorial, you will discover how convolutions work in the convolutional neural network.
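The repeated filter application described above can be sketched directly. The minimal example below is illustrative code, not from the tutorial (and, as in deep learning libraries, it actually computes cross-correlation, which is what "convolution" usually means in this context): a hand-crafted vertical-edge filter slides over a tiny image, recording one activation per position, and the resulting feature map has large values exactly where the edge sits.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding), one activation per position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    fmap = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the window by the filter and sum
            fmap[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return fmap

# A tiny image with a vertical edge down the middle
image = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

# Hand-crafted vertical-edge filter; a CNN would *learn* many such filters
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

fmap = conv2d(image, kernel)
print(fmap)   # activations peak at the two positions straddling the edge
```

The filter here is fixed by hand; the innovation of the convolutional layer is that many such filters are learned from training data instead.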

Introduction to Deep Q-Learning for Reinforcement Learning (in Python)


I have always been fascinated with games. The seemingly infinite options available to perform an action under a tight timeline – it's a thrilling experience. So when I read about the incredible algorithms DeepMind was coming up with (like AlphaGo and AlphaStar), I was hooked. I wanted to learn how to make these systems on my own machine. And that led me into the world of deep reinforcement learning (Deep RL).
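Deep Q-learning replaces a lookup table of action values with a neural network, so the tabular version is the natural starting point. The sketch below is an illustrative toy, not DeepMind's code; the corridor environment and hyperparameters are invented for this example. An agent learns, via the Bellman update, that moving right toward the rewarding end of a five-state corridor is always the better action.

```python
import random

# Tiny corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # the Q-table: value of (state, action)
random.seed(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Bellman update: nudge Q(s, a) toward reward + discounted best future value
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])   # values grow toward the goal state
```

Deep RL systems like the ones behind AlphaGo face state spaces far too large for a table, which is why a neural network is trained to approximate Q instead.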