Deep Learning


Facial Expressions Recognition using Keras Live Project - 2nd Part

#artificialintelligence

Keras with TensorFlow Course - Python Deep Learning and Neural Networks for Beginners Tutorial: How to use Keras, a neural network API written in Python and integrated with TensorFlow. We will learn how to prepare and process data for artificial neural networks, build and train artificial neural networks from scratch, build and train convolutional neural networks (CNNs), implement fine-tuning and transfer learning, and more!
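
For orientation, here is a minimal sketch of the kind of CNN built in such a course, using the tf.keras API; the layer sizes, 28x28 grayscale input, and ten-class output are illustrative assumptions, not the course's exact model.

```python
import tensorflow as tf

# A small CNN sketch: conv/pool feature extraction, then a softmax classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # ten classes, assumed
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)  # x_train / y_train are placeholders
```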


10 Ways AI Is Transforming Enterprise Software - InformationWeek

#artificialintelligence

If you are currently in the market for almost any kind of enterprise software, you will almost certainly run across at least one vendor claiming that its product includes artificial intelligence (AI) capabilities. Of course, some of these claims are no more than marketing hyperbole, or "AI washing." However, in many cases, software makers truly are integrating new capabilities related to analytics, vision, natural language, or other areas that deserve the AI label. The market researchers at IDC have gone so far as to call AI "inescapable." Similarly, Omdia Tractica predicted that worldwide revenue from AI software will climb from $10.1 billion in 2018 to $126.0 billion in 2025, led in large part by advancements in deep learning technology.


A Beginner's Guide to Face Recognition with OpenCV in Python - Sefik Ilkin Serengil

#artificialintelligence

OpenCV has become a de facto standard for image processing studies, and the library offers some legacy techniques for face recognition as well. Local binary patterns histograms (LBPH), EigenFace and FisherFace methods are covered in the package. These conventional face recognition algorithms are no longer state-of-the-art: nowadays, CNN-based deep learning approaches outperform these old-fashioned methods.
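
As a minimal sketch of the LBPH method mentioned above, assuming the opencv-contrib-python package (which provides the cv2.face module) and hypothetical image files:

```python
import cv2
import numpy as np

# Hypothetical grayscale training images and their identity labels.
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
         for p in ["alice1.jpg", "alice2.jpg", "bob1.jpg"]]
labels = np.array([0, 0, 1])  # 0 = alice, 1 = bob

# Train OpenCV's legacy LBPH recognizer on the labeled faces.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# Predict the identity of an unseen face; lower distance = closer match.
label, distance = recognizer.predict(cv2.imread("unknown.jpg", cv2.IMREAD_GRAYSCALE))
print(label, distance)
```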


Neural Networks Part 2: Building Neural Networks & Understanding Gradient Descent.

#artificialintelligence

From the previous article, we learnt how a single neuron or perceptron works: it takes the dot product of the input vector and the weights, adds a bias, and then applies a non-linear activation function to produce an output. Now let's take that information and see how these neurons build up to a neural network. Here z = w0 + Σj xj*wj denotes the bias plus the dot product of inputs and weights, and our final output y is just the activation function applied to z. Now, if we want a multi-output neural network (as in the diagram above), we can simply add another of these perceptrons, and we have two outputs, each with its own set of weights. Since all the inputs are densely connected to all the outputs, these layers are also called dense layers. To implement this layer, we can use libraries such as Keras, TensorFlow, PyTorch, etc. The TensorFlow implementation of this two-perceptron network sets units=2 to indicate that we have two outputs in this layer; we can customize the layer by adding an activation function, a bias constraint, and so on. Now let's take a step further and understand how a single-layer neural network works, where a single hidden layer feeds into the output layer. We call this a hidden layer because, unlike the input and output layers, it is not directly observable: we can probe inside the network and inspect it with tools such as Netron, but its values are learned rather than set by us.
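
A minimal sketch of that two-output dense layer in tf.keras; the three-input sample and sigmoid activation are assumptions for illustration:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=2,               # two perceptrons, so two outputs
    activation="sigmoid",  # non-linearity applied to z (assumed choice)
    use_bias=True,         # adds the w0 bias term
)
y = layer(tf.constant([[1.0, 2.0, 3.0]]))  # one sample with three inputs
print(y.shape)  # (1, 2)
```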


Review: DilatedNet -- Dilated Convolution (Semantic Segmentation)

#artificialintelligence

This time, Dilated Convolution, from Princeton University and Intel Labs, is briefly reviewed. The idea of dilated convolution comes from wavelet decomposition; thus, ideas from the past are still useful if we can bring them into the deep learning framework. Dilated convolution was published at ICLR 2016 and had more than 1000 citations when I was writing this story.
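
A minimal sketch of a dilated convolution in tf.keras; DilatedNet itself stacks such layers in a segmentation network, and the filter count and input size here are illustrative assumptions:

```python
import tensorflow as tf

x = tf.random.normal((1, 64, 64, 3))  # dummy image batch
conv = tf.keras.layers.Conv2D(
    filters=16, kernel_size=3,
    dilation_rate=2,   # inserts gaps between kernel taps, enlarging the
    padding="same",    # receptive field without adding parameters
)
print(conv(x).shape)  # (1, 64, 64, 16)
```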


A Star Wars Story by Sentient Droid

#artificialintelligence

Imagine that droids came to the 21st century with knowledge of the future, but had only current technology to rewrite their Star Wars story. In this article, we will see how a droid (a machine learning model) generates its Star Wars story using knowledge of the future (Star Wars books). The model takes an input sequence of words, and we use an LSTM to understand the context in a sentence: a simple RNN would suffer from the vanishing gradient problem, so for text generation I am using an LSTM.
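
A minimal sketch of a word-level LSTM generator of the kind described, assuming an illustrative vocabulary size and integer-encoded training sequences (not the article's exact model):

```python
import tensorflow as tf

vocab_size = 5000  # assumed vocabulary size
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128),  # word index -> dense vector
    tf.keras.layers.LSTM(256),  # keeps sentence context; avoids the simple
                                # RNN's vanishing-gradient problem
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # next-word probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(sequences, next_words, epochs=20)  # integer-encoded placeholders
```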


Deep Learning for COVID-19 Diagnosis

#artificialintelligence

Over the last several months, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has rapidly become a global pandemic, resulting in nearly 480,000 COVID-19-related deaths as of June 25, 2020 [6]. While the disease can manifest in a variety of ways--ranging from asymptomatic infection or flu-like symptoms to acute respiratory distress syndrome--the presentation most commonly associated with morbidity and mortality is the presence of opacities and consolidation in a patient's lungs. Upon inhalation, the virus attacks and inhibits the lungs' alveoli, which are responsible for oxygen exchange. This opacification is visible on computed tomography (CT) scans: due to their increased density, affected areas appear as partially opaque regions with increased attenuation, known as ground-glass opacities (GGOs).


Deep Learning at Scale with PyTorch, Azure Databricks, and Azure Machine Learning

#artificialintelligence

PyTorch is a popular open source machine learning framework. PyTorch is ideal for deep learning applications such as computer vision and natural language processing. MLflow is an open source platform for the end-to-end machine learning lifecycle. Delta Lake is an open source storage layer that brings reliability to data lakes. Azure Databricks is the first-party Databricks service on Azure that provides massive scale data engineering and collaborative data science.
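
A minimal sketch of how these pieces can fit together: training a toy PyTorch model while tracking it with MLflow (on Azure Databricks the MLflow tracking server is preconfigured; the model and data here are placeholders):

```python
import mlflow
import mlflow.pytorch
import torch

model = torch.nn.Linear(10, 1)  # toy model, assumed for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 10), torch.randn(32, 1)  # fake data

with mlflow.start_run():
    for epoch in range(5):
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        mlflow.log_metric("loss", loss.item(), step=epoch)
    mlflow.pytorch.log_model(model, "model")  # saved with the run's artifacts
```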


Fujitsu Develops AI Tech for High-Dimensional Data Without Labeled Training Data

#artificialintelligence

In recent years, there has been a surge in demand for AI-driven big data analysis in various business fields. AI is also expected to help detect anomalies in data, revealing things like unauthorized attempts to access networks or abnormalities in medical data such as thyroid values or arrhythmia readings. The data used in many business operations is high-dimensional. As the number of dimensions increases, the complexity of the calculations required to accurately characterize the data grows exponentially, a phenomenon widely known as the "Curse of Dimensionality". In recent years, reducing the dimensions of the input data using deep learning has emerged as a promising way to avoid this problem. However, when the dimensions are reduced without considering the data's distribution and probability of occurrence after the reduction, the characteristics of the data are not accurately captured, limiting the AI's recognition accuracy and allowing misjudgments to occur (Figure 1). Solving these problems and accurately capturing the distribution and probability of high-dimensional data remain important issues in the AI field.
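
For contrast with the problem described, here is a generic autoencoder-style reduction in tf.keras; this is not Fujitsu's new technique, and a plain autoencoder like this indeed ignores the data's distribution and probability of occurrence after reduction:

```python
import tensorflow as tf

# Compress assumed 100-dimensional data to an 8-dimensional code and back.
inputs = tf.keras.Input(shape=(100,))
code = tf.keras.layers.Dense(8, activation="relu")(inputs)  # reduced dims
outputs = tf.keras.layers.Dense(100)(code)                  # reconstruction
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x, x, epochs=10)  # x is placeholder high-dimensional data
encoder = tf.keras.Model(inputs, code)  # encoder(x) is the reduced data
```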


Approximation spaces of deep neural networks

#artificialintelligence

We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when increasing the complexity budget. Using results from classical approximation theory, we show that this class can be endowed with a (quasi)-norm that makes it a linear function space, called approximation space. We establish that allowing the networks to have certain types of "skip connections" does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, if these networks are sufficiently deep.
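
For readers who haven't met the notion, a sketch of the classical approximation-space construction the abstract builds on, in standard DeVore-style notation (the exact weighting conventions are an assumption and may differ from the paper's):

```latex
% E_n(f): error of best approximation of f by networks of complexity at most n
% (connections or neurons), measured in the ambient norm.
\[
  E_n(f) = \inf\bigl\{\, \|f - g\| : g \text{ realized by a network of complexity} \le n \,\bigr\}
\]
% The approximation space collects the f whose errors decay at rate alpha,
% with a (quasi-)norm built from the weighted ell_q sequence norm:
\[
  A^{\alpha}_{q} = \Bigl\{ f : \|f\|_{A^{\alpha}_{q}}
     = \bigl\| \bigl( n^{\alpha} E_n(f) \bigr)_{n \ge 1} \bigr\|_{\ell_q} < \infty \Bigr\}
\]
% Membership in A^alpha_q means E_n(f) decays roughly like n^{-alpha}
% as the complexity budget n grows.
```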