Deep Learning


Fish Detection Using Deep Learning

#artificialintelligence

Human curiosity has recently expanded from the land to the sky and the sea. Besides sending people to explore the ocean and outer space, robots are designed for tasks that are dangerous for living creatures. Take ocean exploration as an example: many projects and competitions on the design of Autonomous Underwater Vehicles (AUVs) have attracted wide interest. The authors of this article learned the necessity of a platform upgrade from a previous AUV design project, and would like to share their experience extending that platform to the task of fish detection. Most embedded systems have been improved by fast-growing computing and sensing technologies, which makes it possible for them to incorporate increasingly complicated algorithms. In an AUV, one of the challenges is how to perceive and analyse the information acquired from sensors for better judgement. The processing procedure can mimic human learning routines, and an advanced system with more computing power can support deep learning, which exploits neural network algorithms inspired by the human brain. In this paper, a convolutional neural network (CNN) based fish detection method is proposed.


Machine Learning Improves Satellite Rainfall Estimates - Eos

#artificialintelligence

Spaceborne precipitation observing systems can provide global coverage, but their estimates typically suffer from uncertainties and biases. Conversely, ground-based systems such as rain gauges and precipitation radar have higher accuracy but only limited spatial coverage. Chen et al. [2019] have developed a novel deep learning algorithm designed to construct a hybrid rainfall estimation system, where the ground radar is used to bridge the scale gaps between (accurate) rain gauge measurements and (less accurate) satellite observations. Such a non-parametric deep learning technique shows potential for regional and global rainfall mapping and can also be expanded as a data fusion platform through incorporation of additional precipitation estimates, such as outputs of numerical weather prediction models.



MIT 6.S191: Introduction to Deep Learning

#artificialintelligence

MIT Introduction to Deep Learning 6.S191: Lecture 1 (New 2019 Edition), Foundations of Deep Learning. Lecturer: Alexander Amini, January 2019. For all lectures, slides and lab materials: http://introtodeeplearning.com



Top 9 Libraries You Can Use In Large-Scale AI Projects

#artificialintelligence

Using machine learning to solve hard problems and build profitable businesses is almost mainstream now. This rise was accompanied by the introduction of several toolkits, frameworks and libraries, which have made developers' jobs easier. When data is scarce, there are tools and approaches, often tedious, to scrape and gather it. When data is abundant, however, the surge brings its own set of problems, ranging from feature engineering to storage to computational overkill.


Understanding Deep Self-attention Mechanism in Convolution Neural Networks

#artificialintelligence

To give each pixel-level prediction a global reference, Wang et al. proposed a self-attention mechanism in CNNs (Figure 1). Their approach is based on the covariance between the predicted pixel and every other pixel, where each pixel is treated as a random variable. If we reduce the mechanism to its simplest form, we can easily understand the role covariance plays. First, we have an input feature map X with height H and width W. We then flatten X into three matrices A, B and C, one row per pixel, and multiply A by the transpose of B to get a covariance matrix of size HW×HW. Finally, we multiply the covariance matrix by C to get D, and reshape D into the output feature map Y, with a residual (ResNet-style) connection from the input X.
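The flatten–covariance–aggregate–residual pipeline described above can be sketched in plain NumPy. This is a simplified illustration, not the authors' implementation: the learned 1×1 projections of the original mechanism are omitted (A, B and C are all the flattened input), and a row-wise softmax is assumed as the normalization of the covariance matrix.

```python
import numpy as np

def simplified_self_attention(X):
    """Simplest-form self-attention over a feature map X of shape (H, W, C)."""
    H, W, C = X.shape
    # Flatten the spatial dimensions: one row per pixel (HW x C).
    A = X.reshape(H * W, C)  # "queries"
    B = X.reshape(H * W, C)  # "keys"
    V = X.reshape(H * W, C)  # "values"
    # Pairwise similarity ("covariance") matrix of size HW x HW.
    cov = A @ B.T
    # Row-wise softmax so each pixel's attention weights sum to 1.
    w = np.exp(cov - cov.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Aggregate the values, reshape back, and add the residual connection.
    D = w @ V
    return D.reshape(H, W, C) + X
```

Note that the HW×HW matrix is what makes the mechanism global (every pixel attends to every other pixel) and also what makes it expensive for large feature maps.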


Biomedical Image Segmentation: Attention U-Net

#artificialintelligence

Medical image segmentation has been actively studied to automate clinical analysis. Deep learning models generally require a large amount of data, but acquiring annotated medical images is tedious and error-prone. Attention U-Net aims to learn automatically where to focus on target structures of varying shapes and sizes; hence the title of the paper by Oktay et al., "Learning Where to Look for the Pancreas". U-Nets are commonly used for image segmentation tasks because of their performance and efficient use of GPU memory. Attention U-Net aims to achieve precision reliable enough for clinical use with fewer training samples, since acquiring annotated medical images can be resource-intensive.
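The "learning where to look" idea rests on additive attention gates on the skip connections. A minimal NumPy sketch of one gate is below; it is an illustration under simplifying assumptions, not the paper's implementation: the 1×1 convolutions are reduced to per-pixel matrix multiplications, and the gating signal g is assumed to be already upsampled to the same spatial resolution as the skip features x. All parameter names (Wx, Wg, psi) are hypothetical.

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: re-weight skip features x using gating signal g.

    x:   skip-connection features, shape (H, W, Cx)
    g:   gating features from the coarser decoder level, shape (H, W, Cg)
    Wx:  (Cx, Ci) per-pixel projection of x to an intermediate space
    Wg:  (Cg, Ci) per-pixel projection of g
    psi: (Ci, 1) projection to one attention score per pixel
    """
    # Project both inputs to a shared intermediate space and combine (ReLU).
    q = np.maximum(x @ Wx + g @ Wg, 0.0)
    # One attention coefficient per pixel, squashed to (0, 1) by a sigmoid.
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))  # shape (H, W, 1)
    # Suppress irrelevant regions of the skip features before concatenation.
    return x * alpha
```

Because alpha lies in (0, 1), the gate can only attenuate skip features, which is how irrelevant background regions are suppressed before they reach the decoder.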


Deep learning enables real-time imaging around corners: Detailed, fast imaging of hidden objects could help self-driving cars detect hazards

#artificialintelligence

"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said research team leader Christopher A. Metzler from Stanford University and Rice University. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner." In Optica, The Optical Society's journal for high-impact research, Metzler and colleagues from Princeton University, Southern Methodist University, and Rice University report that the new system can distinguish submillimeter details of a hidden object from 1 meter away. The system is designed to image small objects at very high resolutions but can be combined with other imaging systems that produce low-resolution room-sized reconstructions. "Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," said co-author Felix Heide from Princeton University.


Deep learning vs. machine learning: Understand the differences

#artificialintelligence

Machine learning and deep learning are both forms of artificial intelligence. You can also say, correctly, that deep learning is a specific kind of machine learning. Both machine learning and deep learning start with training and test data and a model and go through an optimization process to find the weights that make the model best fit the data. Both can handle numeric (regression) and non-numeric (classification) problems, although there are several application areas, such as object recognition and language translation, where deep learning models tend to produce better fits than machine learning models. Machine learning algorithms are often divided into supervised (the training data are tagged with the answers) and unsupervised (any labels that may exist are not shown to the training algorithm).
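The supervised/unsupervised split described above can be made concrete with a minimal NumPy sketch: a least-squares regression fit, where the training data come tagged with answers, versus a hand-rolled 2-means clustering, where structure is inferred from the data alone. The data and initialization here are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised: the training data are tagged with the answers (targets y). ---
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=100)     # known targets
A = np.column_stack([X[:, 0], np.ones(100)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]  # fit weights to labels

# --- Unsupervised: no labels shown; groups are discovered from the data. ---
pts = np.concatenate([rng.normal(0.0, 0.3, (50, 2)),
                      rng.normal(4.0, 0.3, (50, 2))])
centers = pts[[0, -1]].copy()                  # crude initialization
for _ in range(10):                            # plain 2-means iterations
    d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([pts[labels == k].mean(axis=0) for k in (0, 1)])
```

In both cases an optimization loop adjusts parameters to fit the data; the difference is only whether the answers are provided (y) or withheld (labels are inferred).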