Deep Learning


Amazon makes Alexa Conversations generally available

ZDNet

Amazon on Monday announced the general availability of Alexa Conversations, a deep learning-based dialog manager for the Alexa Skills Kit. The tool, first introduced in preview in 2019, helps developers create more natural conversations with customers. "Natural language is actually a very difficult thing to emulate," Nedim Fresko, Amazon's VP of Alexa Devices and Developer Technologies, told ZDNet last year. "When people speak naturally, they change direction, they make contextual references to things they said. Sometimes they over-supply information, sometimes they under-supply it -- when that happens, consumers revert to robotic language and simple phrases, and developers just give up."


Train Your Custom Deep Learning Model in AWS SageMaker

#artificialintelligence

If you are someone like me who does not want to set up a server at home to train your deep learning model, this article is for you. In that case, cloud-based machine learning infrastructure is your best option. I will go over the step-by-step process of how to do this in AWS SageMaker. Amazon SageMaker comes with a good number of pre-trained models, which ship as prebuilt Docker images in AWS.
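
As a rough sketch of what that process looks like with the SageMaker Python SDK (the image URI, IAM role, and S3 paths below are placeholders for illustration, not values from this article):

```python
# Minimal sketch: launching a training job with the SageMaker Python SDK.
# Replace the placeholder image URI, role ARN, and S3 paths with your own.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",                 # a GPU instance for deep learning
    output_path="s3://my-bucket/model-artifacts",  # placeholder S3 output location
    sagemaker_session=session,
)
# Point the job at training data in S3; SageMaker mounts it inside the container.
estimator.fit({"train": "s3://my-bucket/train-data"})
```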


Fast Oriented Text Spotting with a Unified Network (FOTS)

#artificialintelligence

Text detection and recognition (also known as text spotting) in images is a useful and challenging problem that deep learning researchers have been working on for many years, owing to its practical applications in fields such as document scanning, robot navigation, and image retrieval. Until recently, almost all methods consisted of two separate stages: 1) text detection and 2) text recognition. Text detection locates the text in a given image, and text recognition then reads the characters from the detected regions. Because of these two stages, two separate models had to be trained, which increased prediction time and made such methods unsuitable for real-time applications. In contrast, FOTS solves the two-stage problem with a single, unified, end-to-end trainable network that detects and recognizes text simultaneously.
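
To make the unified design concrete, here is a heavily simplified PyTorch-style sketch of the shared-backbone idea (an illustration, not the authors' implementation): one feature extractor feeds both a detection head and a recognition head, so the whole network trains end to end.

```python
# Simplified FOTS-style unified network: shared features, two heads.
import torch
import torch.nn as nn

class FOTSStyleNet(nn.Module):
    def __init__(self, num_chars=37):  # e.g. 26 letters + 10 digits + blank
        super().__init__()
        # Shared feature extractor (stands in for the paper's full backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-pixel text score + 4 box offsets + angle.
        self.detect = nn.Conv2d(64, 6, 1)
        # Recognition head: predicts per-timestep character logits.
        self.recognize = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 32)),  # collapse height, keep width as time
            nn.Conv2d(64, num_chars, 1),
        )

    def forward(self, images):
        feats = self.backbone(images)
        det = self.detect(feats)            # detection branch output
        # In the real model, RoIRotate crops rotated text regions from `feats`;
        # here recognition simply runs on the shared features as a placeholder.
        rec = self.recognize(feats).squeeze(2)  # (N, num_chars, T) logits
        return det, rec

det, rec = FOTSStyleNet()(torch.randn(1, 3, 256, 256))
```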


Deep Learning: Convolutional Neural Networks in Python

#artificialintelligence

For Data Science, Machine Learning, and AI. Created by Lazy Programmer Inc. English [Auto], Italian [Auto].

*** NOW IN TENSORFLOW 2 and PYTHON 3 *** Learn about one of the most powerful deep learning architectures yet! The Convolutional Neural Network (CNN) has been used to obtain state-of-the-art results in computer vision tasks such as object detection, image segmentation, and generating photo-realistic images of people and things that don't exist in the real world! This course will teach you the fundamentals of convolution and why it's useful for deep learning and even NLP (natural language processing). You will learn about modern techniques such as data augmentation and batch normalization, and build modern architectures such as VGG yourself. This course will teach you:

- The basics of machine learning and neurons (just a review to get you warmed up!)
- Neural networks for classification and regression (just a review to get you warmed up!)
- How to model image data in code
- How to model text data for NLP (including preprocessing steps for text)
- How to build a CNN using TensorFlow 2
- How to use batch normalization and dropout regularization in TensorFlow 2
- How to do image classification in TensorFlow 2
- How to do data preprocessing for your own custom image dataset
- How to use embeddings in TensorFlow 2 for NLP
- How to build a text classification CNN for NLP (examples: spam detection, sentiment analysis, parts-of-speech tagging, named entity recognition)

All of the materials required for this course can be downloaded and installed for FREE.
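
For a flavor of what such a model looks like, here is a minimal TensorFlow 2 sketch of a CNN with batch normalization and dropout (an illustrative toy, not the course's actual code):

```python
# Minimal TF2 CNN with batch normalization and dropout regularization.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),        # e.g. CIFAR-10-sized images
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                     # dropout regularization
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class classifier
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```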


Applications of Artificial Intelligence for Retinopathy of Prematurity Screening

#artificialintelligence

OBJECTIVES: Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program, and whether differences in ROP severity between neonatal care units (NCUs) identified by AI are related to differences in oxygen-titrating capability. All images were assigned an ROP severity score (1-9) by the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve, as well as sensitivity and specificity, for treatment-requiring ROP. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oximeters.
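
For readers unfamiliar with these metrics, the sketch below shows how such an evaluation is typically computed with scikit-learn; the labels, scores, and cutoff are made up for illustration and are not the study's data.

```python
# Toy evaluation: ROC AUC plus sensitivity/specificity at a chosen cutoff.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])    # 1 = treatment-requiring ROP (made up)
severity = np.array([2, 3, 8, 7, 1, 9, 4, 6])  # 1-9 severity scores (made up)
auc = roc_auc_score(y_true, severity)          # area under the ROC curve

y_pred = (severity >= 6).astype(int)           # hypothetical screening cutoff
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
print(auc, sensitivity, specificity)
```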


Google's deep learning finds a critical path in AI chips

#artificialintelligence

Characteristic of many AI chips are parallel, identical processor elements ("PEs") that perform masses of simple math operations, such as the vector-matrix multiplications that are the workhorse of neural-net processing. A year ago, ZDNet spoke with Google Brain director Jeff Dean about how the company is using artificial intelligence to advance its internal development of custom chips that accelerate its software. Dean noted that deep learning forms of artificial intelligence can in some cases make better decisions than humans about how to lay out circuitry in a chip. This month, Google unveiled one of those research projects, called Apollo, in a paper posted on the arXiv preprint server, "Apollo: Transferable Architecture Exploration," and a companion blog post by lead author Amir Yazdanbakhsh. Apollo represents an intriguing development that moves past what Dean hinted at in his formal address a year ago at the International Solid-State Circuits Conference and in his remarks to ZDNet.
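
As a tiny illustration of the workload those PEs exist to accelerate: a single dense neural-net layer is essentially one vector-matrix multiplication, repeated millions of times during training and inference.

```python
# A dense layer's core computation is a vector-matrix product.
import numpy as np

x = np.random.rand(256)        # activations entering a layer
W = np.random.rand(256, 512)   # layer weights
y = x @ W                      # the vector-matrix multiply a PE array computes
print(y.shape)                 # (512,)
```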


Hot papers on arXiv from the past month – February 2021

AIHub

Abstract: Conceptual abstraction and analogy-making are key abilities underlying humans' capacity to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing AI systems with these abilities, no current AI system comes anywhere close to forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.


A High Schooler's Guide To Deep Learning And AI

#artificialintelligence

The idea of creating a virtual human that can converse seamlessly with a user seems daunting to most people who are just getting into artificial intelligence and looking into how utterly complex existing commercial systems are. And those fears aren't misplaced: larger systems, which combine a plethora of data samples with intricate network architectures to power the highest-quality home assistants, are very difficult to replicate. But creating virtual assistants on a smaller scale has already been simplified enough to let virtually anyone make their own conversational persona. Over the past decade, the University of Southern California's Institute for Creative Technologies has developed countless virtual personalities for a variety of purposes. The institute has been able to create as many virtual humans as it has because of a technology it developed called 'NPCEditor'. As the name implies, the program lets the team edit an NPC, or non-player character. Developed by research scientist Anton Leuski and lead professor of NLP David Traum, the software has been simplified enough that it is incredibly easy to create a virtual human.
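
As a toy illustration of the retrieval idea behind many scripted virtual humans (a hypothetical sketch, not NPCEditor's actual algorithm), a minimal persona can be built by matching the user's utterance against a set of scripted question-answer pairs:

```python
# Toy retrieval-based persona: answer with the scripted response whose
# linked question best matches what the user said.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("what is your name", "I'm Ada, a virtual guide."),
    ("where are you from", "I live inside this computer."),
    ("what can you do", "I can answer simple questions about myself."),
]
questions = [q for q, _ in qa_pairs]
vectorizer = TfidfVectorizer().fit(questions)

def reply(user_utterance):
    sims = cosine_similarity(vectorizer.transform([user_utterance]),
                             vectorizer.transform(questions))[0]
    return qa_pairs[sims.argmax()][1]  # best-matching scripted response

print(reply("tell me your name"))
```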


Why 0.9? Towards Better Momentum Strategies in Deep Learning.

#artificialintelligence

Momentum is a widely used strategy for accelerating the convergence of gradient-based optimization techniques. Momentum was designed to speed up learning in directions of low curvature without becoming unstable in directions of high curvature. In deep learning, most practitioners set the value of momentum to 0.9 without attempting to further tune this hyperparameter (i.e., this is the default value for momentum in many popular deep learning packages). However, there is no indication that this choice is universally well-behaved. In this post, we overview recent research indicating that decaying the value of momentum throughout training can aid the optimization process.
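
In code, the idea looks roughly like the PyTorch sketch below: standard SGD with momentum (the update is v = mu * v + g, then w = w - lr * v), plus a hand-rolled schedule that decays the momentum coefficient over training. The linear decay from 0.9 to 0.5 is an illustrative choice, not the specific rule from the research discussed.

```python
# SGD with momentum plus an illustrative decaying-momentum schedule.
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs = 100
for epoch in range(epochs):
    # Decay momentum linearly from 0.9 toward 0.5 across training (illustrative).
    for group in opt.param_groups:
        group["momentum"] = 0.9 - 0.4 * epoch / (epochs - 1)
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = ((model(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```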


NFNets Explained -- DeepMind's New State-Of-The-Art Image Classifier

#artificialintelligence

DeepMind has recently released a new family of image classifiers that achieve state-of-the-art accuracy on the ImageNet dataset. This new family, named NFNets (short for Normalizer-Free Networks), achieves accuracy comparable to EfficientNet-B7 while training a whopping 8.7x faster. This improvement in training speed was achieved in part by replacing batch normalization with other techniques. That is an important paradigm shift for image classifiers, which have relied heavily on batch normalization as a key component. First, let's understand the benefits that batch normalization brings. With that knowledge, we can then devise alternative methods that recover these benefits.
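
One of those replacement techniques in the NFNets paper is adaptive gradient clipping (AGC), which clips a parameter's gradient when its norm grows too large relative to the parameter's own norm. Below is a simplified PyTorch sketch of the idea (the paper applies it unit-wise, row by row, rather than per whole tensor as done here):

```python
# Simplified adaptive gradient clipping: limit each gradient's norm to a
# fixed fraction of the corresponding parameter's norm.
import torch

def adaptive_grad_clip(parameters, clip=0.01, eps=1e-3):
    for p in parameters:
        if p.grad is None:
            continue
        p_norm = p.detach().norm().clamp_min(eps)  # avoid division by ~0
        g_norm = p.grad.detach().norm()
        max_norm = clip * p_norm
        if g_norm > max_norm:
            p.grad.mul_(max_norm / (g_norm + 1e-6))

model = torch.nn.Linear(10, 10)
loss = model(torch.randn(4, 10)).sum()
loss.backward()
adaptive_grad_clip(model.parameters())  # call between backward() and step()
```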