Deep Learning


Deep Learning Data Sets for Every Data Scientist

#artificialintelligence

Machine Learning has seen a tremendous rise in the last decade, and one of the sub-fields that has contributed most to its growth is Deep Learning. The large volumes of data and the huge computation power that modern systems possess have enabled Data Scientists, Machine Learning Engineers, and others to achieve ground-breaking results in Deep Learning and to keep bringing new developments to the field. In this blog post, we cover deep learning data sets that you can work with as a Data Scientist, but before that, we provide an intuition for the concept of Deep Learning. A sub-field of Machine Learning, Deep Learning is built on Artificial Neural Networks, whose working structure is loosely modeled on the brain: much like our nervous system, each artificial neuron is connected to many others.
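To make the neural-network intuition concrete, here is a minimal sketch (not from the article; it assumes PyTorch is available) of a small fully connected network in which every unit in one layer connects to every unit in the next:

```python
import torch
import torch.nn as nn

# A minimal fully connected (dense) network: each unit in one layer
# is connected to every unit in the next, loosely mirroring the
# "neurons connected to each other" intuition from the post.
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer (e.g. a 28x28 image, flattened)
    nn.ReLU(),            # non-linear activation
    nn.Linear(128, 10),   # hidden layer -> 10 output classes
)

x = torch.randn(1, 784)   # one dummy input example
logits = model(x)         # forward pass
print(logits.shape)       # torch.Size([1, 10])
```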


Prediction of a plant intracellular metabolite content class using image-based deep learning

#artificialintelligence

Plant-derived secondary metabolites play a vital role in the food, pharmaceutical, agrochemical and cosmetic industries. Metabolite concentrations are measured after extraction, biochemical processing and analysis, requiring time, access to expensive equipment, reagents and specialized skills. Additionally, metabolite concentration often varies widely among plants, even within a small area. A quick method to estimate the metabolite concentration class (high or low) would significantly help in selecting high-yielding trees for the metabolite production process. Here, we demonstrate a deep learning approach to estimate the concentration class of an intracellular metabolite, azadirachtin, using models built with images of leaves and fruits collected from randomly selected Azadirachta indica (neem) trees in an area spanning 500,000 sq km and their corresponding biochemically measured metabolite concentrations.
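The underlying task is binary image classification (high vs. low azadirachtin concentration). Purely as an illustration, not the authors' actual models or data, a classifier of this kind could be sketched by fine-tuning a pretrained CNN backbone, assuming PyTorch and torchvision:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: fine-tune a pretrained CNN to predict a
# two-class label (high vs. low metabolite concentration) from
# leaf/fruit images. Not the model described in the paper.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # high / low

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)   # dummy batch of leaf images
labels = torch.randint(0, 2, (8,))     # dummy concentration classes

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```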


Generating Character Animations from Speech with AI - NVIDIA Developer News Center

#artificialintelligence

Researchers from the Max Planck Institute for Intelligent Systems, a member of NVIDIA's NVAIL program, developed an end-to-end deep learning algorithm that can take any speech signal as input – and realistically animate it in a wide range of adult faces. "There is an extensive literature on estimating 3D face shape, facial expressions, and facial motion from images and videos. Less attention has been paid to estimating 3D properties of faces from sound," the researchers stated in their paper. "Understanding the correlation between speech and facial motion thus provides additional valuable information for analyzing humans, particularly if visual data are noisy, missing, or ambiguous." The team first collected a new dataset of 4D face scans together with speech.


How Deep Learning Is Transforming Brain Mapping

#artificialintelligence

Thanks to deep learning, the tricky business of making brain atlases just got a lot easier. Brain maps are all the rage these days. From rainbow-colored dots that highlight neurons or gene expression across the brain, to neon "brush strokes" that represent neural connections, every few months seems to bring a new brain map. Without doubt, these maps are invaluable for connecting the macro (the brain's architecture) to the micro (genetic profiles, protein expression, neural networks) across space and time. Scientists can now compare brain images from their own experiments to a standard resource.


Deep learning for detecting inappropriate content in text

#artificialintelligence

Today, there are a large number of online discussion fora on the internet that are meant for users to express, discuss and exchange their views and opinions on various topics. In such fora, it has often been observed that user conversations sometimes quickly derail and become inappropriate, with users hurling abuse or passing rude and discourteous comments on individuals or certain groups/communities. Similarly, some virtual agents or bots have also been found to respond to users with inappropriate messages. As a result, inappropriate messages and comments are turning into an online menace that slowly degrades the quality of user experiences. Hence, automatic detection and filtering of such inappropriate language has become an important problem for improving the quality of conversations with users as well as virtual agents.
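As a rough illustration of the detection step only (not the deep learning approach described in the article), a baseline inappropriate-language classifier can be sketched with TF-IDF features and logistic regression, assuming scikit-learn and a labeled corpus of comments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real system would train on a large
# labeled dataset of appropriate vs. inappropriate messages.
texts = [
    "Thanks for the helpful explanation!",
    "You are an idiot and should shut up.",
    "Could you share the source for that claim?",
    "People from that group are all worthless.",
]
labels = [0, 1, 0, 1]  # 0 = appropriate, 1 = inappropriate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["what a stupid comment"]))  # e.g. [1]
```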


Deep Learning with Merrill Grambell Improv Asylum

#artificialintelligence

We live in a world where everything is connected: smart fridges, salt dispensers, egg timers, and even hair brushes. That sounds like an... ummm, OK-ish idea. Well, that's what this show is. Deep Learning with Merrill Grambell is the first show completely hosted by artificial intelligence as it interacts with real-life comedians, musicians, technologists and other interesting guests from all over the world. It's sort of as if Clippy from Microsoft Word and Bonzi Buddy got together and made a talk show.


Deep Learning–based Image Conversion of CT Reconstruction Kernels Improves Radiomics Reproducibility for Pulmonary Nodules or Masses

#artificialintelligence

Intratumor heterogeneity in lung cancer may influence outcomes. CT radiomics seeks to assess tumor characteristics through detailed quantitative imaging features. However, CT radiomic features vary according to the reconstruction kernel used for image generation. The purpose of this study was to investigate the effect of different reconstruction kernels on radiomic features and to assess whether image conversion using a convolutional neural network (CNN) could improve the reproducibility of radiomic features between different kernels. In this retrospective analysis, patients underwent non–contrast material–enhanced and contrast material–enhanced axial chest CT with soft kernel (B30f) and sharp kernel (B50f) reconstruction using a single CT scanner from April to June 2017.
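The kernel-conversion step amounts to image-to-image translation between reconstructions. Purely as a sketch, not the model from the study, a small residual CNN that maps a sharp-kernel (B50f) slice toward a soft-kernel-like (B30f) slice might look like this, assuming PyTorch:

```python
import torch
import torch.nn as nn

class KernelConverter(nn.Module):
    """Toy CNN that learns a residual mapping from sharp-kernel (B50f)
    CT slices to soft-kernel-like (B30f) slices. Illustrative only."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict the residual between kernels

model = KernelConverter()
sharp_slice = torch.randn(1, 1, 512, 512)  # dummy B50f CT slice
soft_like = model(sharp_slice)             # converted toward B30f
print(soft_like.shape)                     # torch.Size([1, 1, 512, 512])
```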


Using AI To Analyze Video As Imagery: The Impact Of Sampling Rate

#artificialintelligence

Plate from Muybridge's Animal Locomotion series published in 1887. Deep learning has become the dominant lens through which machines understand video. Yet video files consume huge amounts of storage space and are extremely computationally demanding to analyze using deep learning. Certain use cases can benefit from converting videos to sequences of still images for analysis, enabling full data parallelism and vast reductions in data storage and computation. Representing video as still imagery also presents unique opportunities for non-consumptive analysis similar to the use of ngrams for text.
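The core idea, converting a video into a sequence of still images at a chosen sampling rate, can be sketched with OpenCV (an assumption; the article does not name any tooling, and the file path below is hypothetical):

```python
import cv2  # OpenCV; assumed here, not specified in the article

def sample_frames(video_path, every_n_seconds=1.0):
    """Extract one frame every `every_n_seconds` from a video file."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0       # fall back if FPS is unknown
    step = max(int(round(fps * every_n_seconds)), 1)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)  # each sampled frame can be analyzed as an image
        index += 1
    cap.release()
    return frames

frames = sample_frames("lecture.mp4", every_n_seconds=2.0)  # hypothetical file
print(f"Sampled {len(frames)} still images")
```

Lower sampling rates shrink storage and compute further, at the cost of missing short-lived events between sampled frames.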


We Must Stop Comparing Deep Learning's Real Accuracy To Nonexistent Human Perfection

#artificialintelligence

As deep learning has become ubiquitous, evaluations of its accuracy typically compare its performance against an idealized baseline of flawless human results that bear no resemblance to the actual human workflow those algorithms are being designed to replace. For example, the accuracy of real-time algorithmic speech recognition is frequently compared against human captioning produced in offline multi-coder reconciled environments and subjected to multiple reviews to generate flawless content that looks absolutely nothing like actual real-time human transcription. If we really wish to understand the usability of AI today we should be comparing it against the human workflows it is designed to replace, not an impossible vision of nonexistent human perfection. While the press is filled with the latest superhuman exploits of bleeding-edge research AI systems besting humans at yet another task, the reality of production AI systems is far more mundane. Most commercial applications of deep learning can achieve higher accuracy than their human counterparts at some tasks and worse performance on others.


10 European experts who have been paving the way to modern AI

#artificialintelligence

When asked why he robbed banks, Willie Sutton famously replied, "Because that's where the money is". And so much of artificial intelligence evolved in the United States – because that's where the computers were. However, with Europe's strong educational institutions, the path to advanced AI technologies has been cleared by European computer scientists, neuroscientists, and engineers – many of whom were later poached by US universities and companies. From backpropagation to Google Translate, deep learning, and the development of more advanced GPUs permitting faster processing and rapid developments in AI over the past decade, some of the greatest contributions to AI have come from European minds. Modern AI can be traced back to the work of the English mathematician Alan Turing, who in early 1940 designed the bombe – an electromechanical precursor to the modern computer (itself based on previous work by Polish scientists) that broke the German military codes in World War II.