Deep Learning


We could soon have ROBOTS cleaning our messy bedrooms

Daily Mail

A Japanese tech start-up is using deep learning to teach a pair of machines a job that is simple for a human but surprisingly tricky for a robot: cleaning a bedroom. Though tidying up may seem like a basic, albeit tedious, task to a person, robots find coping with the disorder and chaos of a child's room remarkably complicated. Deep learning is where algorithms, inspired by the human brain, learn from large amounts of data so they are able to perform complex tasks. Some tasks, like welding car chassis in exactly the same way day after day, are easy for robots: the process is repetitive, and the machines do not suffer from boredom the way disgruntled employees do.


Vivienne Sze wins Edgerton Faculty Award

MIT News

Vivienne Sze, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), has received the 2018-2019 Harold E. Edgerton Faculty Achievement Award. The award, announced at today's MIT faculty meeting, commends Sze for "her seminal and highly regarded contributions in the critical areas of deep learning and low-power video coding, and for her educational successes and passion in championing women and under-represented minorities in her field." Sze's research involves the co-design of energy-aware signal processing algorithms and low-power circuits, architectures, and systems for a broad set of applications, including machine learning, computer vision, robotics, image processing, and video coding. She is currently working on projects focusing on autonomous navigation and embedded artificial intelligence (AI) for health-monitoring applications. "In the domain of deep learning, [Sze] created the Eyeriss chip for accelerating deep learning algorithms, building a flexible architecture to handle different convolutional shapes," the Edgerton Faculty Award selection committee said in announcing its decision.


Can science writing be automated?

MIT News

The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: it can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other language-processing tasks, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.


A Deep Dive into Deep Learning

#artificialintelligence

On Wednesday, March 27, the 2018 Turing Award in computing was given to Yoshua Bengio, Geoffrey Hinton and Yann LeCun for their work on deep learning. Deep learning by complex neural networks lies behind the applications that are finally bringing artificial intelligence out of the realm of science fiction into reality. Voice recognition lets you talk to your devices. Image recognition is the key to self-driving cars. But what, exactly, is deep learning?


A Gentle Introduction to Convolutional Layers for Deep Learning Neural Networks

#artificialintelligence

Convolution and the convolutional layer are the major building blocks used in convolutional neural networks. A convolution is the simple application of a filter to an input that results in an activation. Repeated application of the same filter across an input produces a map of activations called a feature map, indicating the locations and strength of a detected feature in an input, such as an image. The innovation of convolutional neural networks is the ability to automatically learn a large number of filters in parallel, specific to a training dataset, under the constraints of a particular predictive modeling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images. In this tutorial, you will discover how convolutions work in convolutional neural networks.
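To make the mechanics above concrete, here is a minimal NumPy sketch (my own illustration, not code from the tutorial) of one filter sliding over a 2-D input to produce a feature map. As in most deep learning libraries, the operation implemented is technically cross-correlation, which the field calls convolution; the example image and hand-crafted edge-detecting kernel are illustrative choices.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one filter over a 2-D input (stride 1, no padding).

    Each output value is the sum of the element-wise product between
    the kernel and the patch it currently covers; the resulting grid
    is the feature map described above.
    """
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

# Illustrative input: dark on the left, bright on the right.
image = np.array([[0, 0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1, 1]], dtype=float)

# A hand-crafted vertical-edge filter; a CNN would learn many such
# filters from data rather than having them specified by hand.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

print(conv2d(image, kernel))  # activations peak along the edge
```

Because the same kernel is reused at every position, a feature learned once can be detected anywhere in the image, which is the property the excerpt refers to.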


Introduction to Deep Q-Learning for Reinforcement Learning (in Python)

#artificialintelligence

I have always been fascinated with games. The seemingly infinite options available to perform an action under a tight timeline – it's a thrilling experience. So when I read about the incredible algorithms DeepMind was coming up with (like AlphaGo and AlphaStar), I was hooked. I wanted to learn how to make these systems on my own machine. And that led me into the world of deep reinforcement learning (Deep RL).
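As a taste of what Deep RL builds on, here is a minimal sketch (my own illustration, not code from the article) of the tabular Q-learning update; Deep Q-Learning replaces the table below with a neural network that predicts the Q-value of each action from the raw state.

```python
import numpy as np

# Hypothetical toy problem: 5 states, 2 actions, values kept in a table.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate and discount factor

def q_update(state, action, reward, next_state):
    """One Bellman update: nudge Q(s, a) toward reward + gamma * max Q(s')."""
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])

# One hypothetical transition: taking action 1 in state 0 earned a
# reward of 1.0 and landed the agent in state 3.
q_update(state=0, action=1, reward=1.0, next_state=3)
```

In DQN itself, the same Bellman target serves as the regression target for the network, stabilized with tricks such as experience replay and a separate target network.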


Postdoctoral Fellow Positions - ai-jobs.net

#artificialintelligence

Are you a junior researcher with the potential to become a world-class machine learning scientist? Apply to become a Vector Institute Postdoctoral Fellow and conduct cutting-edge fundamental research in machine learning and deep learning algorithms and their applications. Like postdoctoral researchers in a university lab, postdoctoral fellows at the Vector Institute are tasked with and supported in carrying out state-of-the-art research, publishing at the highest international level, and contributing to the academic life and reputation of the Institute. In addition, fellows have access to the resources of a well-funded institute dedicated solely to machine learning and deep learning, and are encouraged to work with any of its more than 25 world-class faculty, though they will typically work primarily with one or two faculty members.


Deep learning processing unit delivers 135 GOPS/W on midrange FPGAs

#artificialintelligence

The Omnitek deep learning processing unit (DPU) employs a novel mathematical framework combining low-precision fixed-point maths with floating-point maths to achieve 135 GOPS/W at full 32-bit floating-point accuracy when running the VGG-16 CNN in an Arria 10 GX 1150. Scalable across a wide range of Arria 10 GX and Stratix 10 GX devices, the DPU can be tuned for low cost or high performance in either embedded or data centre applications. The DPU is fully software programmable in C/C++ or Python using standard frameworks such as TensorFlow, enabling it to be configured for a wide range of standard CNN models, including GoogLeNet, ResNet-50 and VGG-16, as well as custom models. No FPGA design expertise is required to do this. "We are very excited to apply this unique innovation, resulting from our joint research program with Oxford University, to reducing the cost of a whole slew of AI-enabled applications, particularly in video and imaging where we have a rich library of highly optimised IP to complement the DPU and create complete systems on a chip", commented Roger Fawcett, CEO at Omnitek.
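For context on what "standard CNN models" means on the framework side, here is a minimal Keras/TensorFlow sketch that instantiates a stock VGG-16 graph. How Omnitek's toolchain actually ingests such a model is not described in the announcement, so only the framework half is shown.

```python
# Minimal sketch: building the stock VGG-16 graph in TensorFlow/Keras.
# weights=None builds the architecture without downloading pretrained
# ImageNet weights; a deployment flow would load trained weights here.
from tensorflow.keras.applications import VGG16

model = VGG16(weights=None, input_shape=(224, 224, 3), classes=1000)
model.summary()  # prints the convolution/pooling/dense layer stack
```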


Bringing Deep Learning for Geospatial Applications to Life

#artificialintelligence

Whenever we start to talk about artificial intelligence, machine learning, or deep learning, the cautionary tales from science fiction cinema arise: HAL 9000 from 2001: A Space Odyssey, the T-series robots from Terminator, the replicants from Blade Runner. There are hundreds of stories about computers learning too much and becoming a threat. The crux of these movies is always the same: there are things that computers do well and things that humans do well, and they don't necessarily intersect. Computers are really good at crunching numbers and statistical analysis (deductive reasoning), while humans are really good at recognizing patterns and making inductive decisions from deductive data. Both have their strengths and their role. With the massive proliferation of data across platforms, types, and collection schedules, how are geospatial specialists supposed to address this apparently insurmountable task?


r/MachineLearning - [R] HARK Side of Deep Learning -- From Grad Student Descent to Automated Machine Learning

#artificialintelligence

Abstract: Recent advancements in machine learning research, i.e., deep learning, have introduced methods that outperform both conventional algorithms and humans in several complex tasks, ranging from detecting objects in images and recognizing speech to playing difficult strategic games. However, the current methodology of machine learning research, and consequently the real-world implementations of such algorithms, seems to have a recurring HARKing (Hypothesizing After the Results are Known) problem. In this work, we elaborate on the algorithmic, economic, and social reasons for and consequences of this phenomenon. Furthermore, we discuss a potential future trajectory of machine learning research and development from the perspective of accountable, unbiased, ethical, and privacy-aware algorithmic decision making. We would like to emphasize that this discussion neither claims to provide exhaustive argumentation nor blames any specific institution or individual for the issues raised.