Deep Learning


Review: DilatedNet -- Dilated Convolution (Semantic Segmentation)

#artificialintelligence

This time, Dilated Convolution (DilatedNet), from Princeton University and Intel Labs, is briefly reviewed. The idea of dilated convolution comes from wavelet decomposition, which shows that ideas from the past remain useful if we can adapt them to the deep learning framework. Dilated convolution was published at ICLR 2016 and had more than 1000 citations at the time I was writing this story.
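The review itself focuses on the idea rather than code, but as an illustrative aside, dilated convolution is exposed directly in PyTorch through the dilation argument of nn.Conv2d (the channel counts and input size below are arbitrary):

```python
import torch
import torch.nn as nn

# A 3x3 convolution with dilation=2 covers a 5x5 receptive field
# while keeping only 9 weights; padding=2 preserves the spatial size.
dilated = nn.Conv2d(in_channels=3, out_channels=16,
                    kernel_size=3, dilation=2, padding=2)

x = torch.randn(1, 3, 64, 64)   # dummy input image batch
y = dilated(x)
print(y.shape)                  # torch.Size([1, 16, 64, 64])
```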


A Star Wars Story by Sentient Droid

#artificialintelligence

Imagine droids arriving in the 21st century with knowledge of the future but only current technology to rewrite their Star Wars story. In this article, we will see how a droid (a machine learning model) generates its own Star Wars story using knowledge of the future (Star Wars books). The model takes an input sequence of words, and we use an LSTM to capture the context within a sentence. Since a simple RNN would suffer from the vanishing gradient problem, I am using an LSTM for text generation.
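The article's model code is not reproduced here; a rough sketch of the kind of word-level LSTM generator it describes, in PyTorch, might look like this (vocabulary size, dimensions, and class names are my assumptions):

```python
import torch
import torch.nn as nn

class StoryLSTM(nn.Module):
    """Word-level LSTM language model: predicts the next word."""
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.head(out), state   # logits over the vocabulary

model = StoryLSTM()
tokens = torch.randint(0, 5000, (1, 20))   # a batch of 20 word ids
logits, _ = model(tokens)
next_word = logits[0, -1].argmax()         # greedy next-word choice
```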


Deep Learning for COVID-19 Diagnosis

#artificialintelligence

Over the last several months, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has rapidly become a global pandemic, resulting in nearly 480,000 COVID-19 related deaths as of June 25, 2020 [6]. While the disease can manifest in a variety of ways, ranging from asymptomatic conditions or flu-like symptoms to acute respiratory distress syndrome, the most common presentation associated with morbidity and mortality is the presence of opacities and consolidation in a patient's lungs. Upon inhalation, the virus attacks and inhibits the lungs' alveoli, which are responsible for oxygen exchange. The resulting opacification is visible on computed tomography (CT) scans: due to their increased density, affected areas appear as partially opaque regions with increased attenuation, known as ground-glass opacities (GGOs).


Deep Learning at Scale with PyTorch, Azure Databricks, and Azure Machine Learning

#artificialintelligence

PyTorch is a popular open source machine learning framework, ideal for deep learning applications such as computer vision and natural language processing. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. Delta Lake is an open source storage layer that brings reliability to data lakes. Azure Databricks is the first-party Databricks service on Azure, providing massive-scale data engineering and collaborative data science.
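The article's full pipeline is not reproduced here, but the basic interplay of PyTorch training with MLflow tracking could look like the following minimal sketch (the run name, toy model, and hyperparameters are assumptions, not the article's code):

```python
import mlflow
import mlflow.pytorch
import torch
import torch.nn as nn

# Toy regression model and data, standing in for a real training job.
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
X, y = torch.randn(64, 10), torch.randn(64, 1)

with mlflow.start_run(run_name="toy-run"):
    mlflow.log_param("lr", 0.01)
    for epoch in range(5):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
        mlflow.log_metric("loss", loss.item(), step=epoch)
    mlflow.pytorch.log_model(model, "model")  # store the trained model
```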


Fujitsu Develops AI Tech for High-Dimensional Data Without Labeled Training Data

#artificialintelligence

In recent years, there has been a surge in demand for AI-driven big data analysis across various business fields. AI is also expected to help detect anomalies in data, revealing things like unauthorized attempts to access networks or abnormalities in medical data such as thyroid values or arrhythmia data. Much of the data used in business operations is high-dimensional. As the number of dimensions increases, the complexity of the calculations required to accurately characterize the data grows exponentially, a phenomenon widely known as the "Curse of Dimensionality" (1). In recent years, reducing the dimensions of input data using deep learning has emerged as a promising way to avoid this problem. However, because the number of dimensions is reduced without considering the distribution and probability of occurrence of the data after the reduction, the characteristics of the data are not accurately captured, limiting the AI's recognition accuracy and leading to misjudgments (Figure 1). Solving these problems and accurately capturing the distribution and probability of high-dimensional data remain important issues in the AI field.
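Fujitsu's method itself is not published here; for reference, the conventional deep-learning dimension reduction the article alludes to is often an autoencoder along these lines (the architecture and sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compress high-dimensional inputs to a low-dimensional code."""
    def __init__(self, in_dim=1024, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)          # reduced representation
        return self.decoder(code), code

model = Autoencoder()
x = torch.randn(32, 1024)                # batch of high-dimensional samples
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)  # plain reconstruction objective
```

A plain reconstruction loss like this ignores the distribution and probability of occurrence of the reduced codes, which is precisely the limitation the article describes.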


Approximation spaces of deep neural networks

#artificialintelligence

We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when increasing the complexity budget. Using results from classical approximation theory, we show that this class can be endowed with a (quasi)-norm that makes it a linear function space, called approximation space. We establish that allowing the networks to have certain types of "skip connections" does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, if these networks are sufficiently deep.
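For orientation (standard approximation-theory notation, not quoted from the paper): writing $\Sigma_n$ for the set of networks whose complexity budget is at most $n$ and $X$ for the ambient function space, the error of best approximation and the resulting approximation space are typically defined as follows:

```latex
% Error of best approximation by networks of complexity at most n
E_n(f)_X = \inf_{g \in \Sigma_n} \| f - g \|_X .

% Approximation space A^\alpha_q(X): functions whose best-approximation
% error decays at rate n^{-\alpha}, measured in an \ell_q sense
\| f \|_{A^\alpha_q(X)}
  = \Bigl( \sum_{n \geq 1} \bigl[ n^{\alpha} \, E_{n-1}(f)_X \bigr]^q \, \tfrac{1}{n} \Bigr)^{1/q},
\qquad
A^\alpha_q(X) = \bigl\{ f \in X : \| f \|_{A^\alpha_q(X)} < \infty \bigr\}.
```

With $0 < q < \infty$ this gives the (quasi)-norm the abstract refers to; for $q = \infty$ the sum is replaced by $\sup_{n} n^{\alpha} E_{n-1}(f)_X$.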


Supercharge Your Shallow ML Models With Hummingbird

#artificialintelligence

Since the most recent resurgence of deep learning in 2012, the lion's share of new ML libraries and frameworks have been created. The ones that have stood the test of time (PyTorch, TensorFlow, ONNX, etc.) are backed by massive corporations and likely aren't going away anytime soon. This also presents a problem, however, as the deep learning community has diverged from popular traditional ML software libraries like scikit-learn, XGBoost, and LightGBM. When it comes time for companies to bring multiple models with different software and hardware assumptions into production, things get…hairy. Using microservices in Kubernetes can solve the design-pattern issue to an extent by keeping things decoupled…if that's even what you want?
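Hummingbird's pitch, as the title suggests, is converting trained traditional models into tensor computations; a minimal sketch using its convert API (the model and data here are toys, not the article's example):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from hummingbird.ml import convert

# Train an ordinary scikit-learn model on toy data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Convert it to a PyTorch-backed model with the same predict interface.
hb_model = convert(clf, "pytorch")
print(hb_model.predict(X[:5]))

# Optionally move inference to GPU, if one is available:
# hb_model.to("cuda")
```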


Reasons to Choose PyTorch for Deep Learning

#artificialintelligence

Pyro: Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend. It is one of several frameworks and projects built on top of TensorFlow and PyTorch; you can find more on GitHub and on the official websites of TF and PyTorch. In a world of TensorFlow, PyTorch is capable of holding its own with its strong points: it is a go-to framework that lets us write code in a more Pythonic way.
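To give a flavor of the Pyro mentioned above, here is a hypothetical minimal model (not from the article): a coin-fairness model with a Beta prior over observed flips.

```python
import torch
import pyro
import pyro.distributions as dist

def coin_model(flips):
    # Latent fairness of the coin, with a Beta prior.
    fairness = pyro.sample("fairness", dist.Beta(10.0, 10.0))
    # Each observed flip is a Bernoulli draw given the fairness.
    with pyro.plate("data", len(flips)):
        pyro.sample("obs", dist.Bernoulli(fairness), obs=flips)

flips = torch.tensor([1., 1., 0., 1., 0., 1., 1.])
trace = pyro.poutine.trace(coin_model).get_trace(flips)
print(trace.log_prob_sum())   # joint log-probability of this execution
```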


Fujitsu develops AI that captures high-dimensional data characteristics - IT-Online

#artificialintelligence

Fujitsu Laboratories has developed what it believes to be the world's first AI technology that accurately captures essential features of high-dimensional data, including its distribution and probability of occurrence, in order to improve the accuracy of AI detection and judgment. High-dimensional data, which includes communications network access data, various types of medical data, and images, remains difficult to process due to its complexity, making it a challenge to capture the characteristics of the target data. Until now, this made it necessary to use techniques that reduce the dimensions of the input data using deep learning, at times causing the AI to make incorrect judgments. Fujitsu has combined deep learning with its expertise in image compression technology, cultivated over many years, to develop an AI technology that optimizes the processing of high-dimensional data and accurately extracts data features. It combines the information theory used in image compression with deep learning, optimizing both the number of dimensions to which the high-dimensional data is reduced and the distribution of the data after the reduction.


Building a Deep-Learning-Based Movie Recommender System

#artificialintelligence

With the continuous development of network technology and the ever-expanding scale of e-commerce, the number and variety of goods grow rapidly, and users must spend a lot of time finding the goods they want to buy. The recommendation system came into being to solve this problem. A recommendation system is a subset of the information filtering system and can be used in a range of areas such as movies, music, e-commerce, and feed-stream recommendations. It discovers a user's personalized needs and interests by analyzing and mining user behavior, and recommends information or products that may be of interest to the user. Unlike search engines, recommendation systems do not require users to accurately describe their needs; instead, they model users' historical behavior to proactively provide information that matches their interests and needs.
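The article's model is not shown here; a common deep-learning starting point for this kind of movie recommender is learned user and movie embeddings scored by a dot product, as in this sketch (all names and sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class MatrixFactorization(nn.Module):
    """Score a (user, movie) pair via embedding dot products."""
    def __init__(self, n_users=1000, n_movies=2000, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.movie_emb = nn.Embedding(n_movies, dim)

    def forward(self, user_ids, movie_ids):
        u = self.user_emb(user_ids)
        m = self.movie_emb(movie_ids)
        return (u * m).sum(dim=-1)   # predicted preference score

model = MatrixFactorization()
users = torch.tensor([0, 0, 1])
movies = torch.tensor([10, 42, 10])
scores = model(users, movies)        # train these against observed ratings
```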