deep learning


Activation maps for deep learning models in a few lines of code

#artificialintelligence

Deep Learning (DL) models are revolutionizing the business and technology world with jaw-dropping performance in one application area after another: image classification, object detection, object tracking, pose recognition, video analytics, and synthetic image generation, to name a few. However, they are anything but classical Machine Learning (ML) algorithms. DL models use millions of parameters and build extremely complex, highly nonlinear internal representations of the images or datasets fed to them. They are therefore often called the perfect black-box ML technique. We can get highly accurate predictions from them after training on large datasets, but we have little hope of understanding the internal features and representations the model uses to classify a particular image into a category.
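
One common way to peek inside such a black box is to visualize the activation maps of intermediate layers. As a minimal sketch (assuming a trained Keras CNN saved to disk; the file name, the "conv" layer filter, and the random placeholder input are my own illustrative assumptions, not the article's code), the idea looks like this:

```python
# Minimal sketch: pulling activation maps out of a trained Keras CNN.
# "my_model.h5", the "conv" layer filter, and the random placeholder input
# are illustrative assumptions, not the article's actual code.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # hypothetical trained CNN

# Build a sub-model that exposes the output of every convolutional layer.
conv_layers = [layer for layer in model.layers if "conv" in layer.name]
activation_model = tf.keras.Model(
    inputs=model.input, outputs=[layer.output for layer in conv_layers]
)

# A single preprocessed image of shape (1, height, width, channels).
image = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder input
activations = activation_model.predict(image)

for layer, act in zip(conv_layers, activations):
    # Each activation tensor holds one 2-D feature map per filter.
    print(layer.name, act.shape)
```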


Machine Learning and Artificial Intelligence in Healthcare Market Projected to Witness Vigorous Expansion by 2019-2027 Intel, IBM, Nvidia, Microsoft, Alphabet (Google), General Electric, Enlitic, Verint Systems, General Vision, Welltok, iCarbonX – Market Expert24

#artificialintelligence

Artificial Intelligence (AI), machine learning, and deep learning are taking the healthcare industry by storm. They are no longer pie-in-the-sky technologies; they are practical tools that can help companies optimize their service provision, improve the standard of care, generate more revenue, and decrease risk. Nearly all major companies in the healthcare space have already begun to use the technology in practice; here I present some of the important highlights of these implementations and what they mean for other companies in healthcare. AI, machine learning, and deep learning are already increasing profits in the healthcare industry. For example, according to research firm Frost & Sullivan, AI systems will generate $6.7 billion in global healthcare industry revenue by 2021.


Inspur Open-Sources TF2, a Full-Stack FPGA-Based Deep Learning Inference Engine

#artificialintelligence

Inspur has announced the open-source release of TF2, an FPGA-based efficient AI computing framework. The framework's inference engine employs the world's first DNN shift-computing technology, combined with a number of the latest optimization techniques, to achieve high-performance, low-latency deployment of general deep learning models on FPGAs. It is also the world's first open-source FPGA-based AI framework offering a comprehensive solution ranging from model pruning, compression, and quantization to a general FPGA-based DNN inference computing architecture. The open-source project can be found at https://github.com/TF2-Engine/TF2. Many companies and research institutions, such as Kuaishou, Shanghai University, and MGI, are said to have joined the TF2 open-source community, which will jointly promote open-source cooperation and the development of AI technology based on customizable FPGAs, lowering the barriers to high-performance AI computing and shortening development cycles for AI users and developers.
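
The "shift computing" mentioned above generally refers to quantizing weights to signed powers of two so that multiplications can be replaced by bit shifts in hardware. The NumPy sketch below illustrates that general idea under my own assumptions; it is not Inspur's TF2 implementation:

```python
# Generic illustration of power-of-two ("shift") weight quantization: every
# weight becomes sign * 2**k, so a multiplication can be implemented as a bit
# shift in hardware. An assumption-based sketch, not Inspur's TF2 code.
import numpy as np

def quantize_to_power_of_two(weights, min_exp=-8, max_exp=0):
    """Round each weight to the nearest signed power of two (or to zero)."""
    sign = np.sign(weights)
    magnitude = np.abs(weights)
    exponent = np.clip(np.round(np.log2(np.maximum(magnitude, 1e-12))),
                       min_exp, max_exp)
    quantized = sign * np.exp2(exponent)
    quantized[magnitude < 2.0 ** (min_exp - 1)] = 0.0  # too small to represent
    return quantized

weights = np.random.randn(4, 4) * 0.5
print(np.round(weights, 3))
print(quantize_to_power_of_two(weights))  # non-zero entries are +/- 0.5, 0.25, ...
```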


Global Big Data Conference

#artificialintelligence

Asked about the biggest misconception about AI, Yoshua Bengio answered without hesitation: "AI is not magic." Winner of the 2018 Turing Award (with the other "fathers of the deep learning revolution," Geoffrey Hinton and Yann LeCun), Bengio spoke at the EmTech MIT event about the "amazing progress in AI" while stressing the importance of understanding its current limitations and recognizing that "we are still very far from human-level AI in many ways." Deep learning has moved us a step closer to human-level AI by allowing machines to acquire intuitive knowledge, according to Bengio. Classical AI was missing this "learning component," and deep learning develops intuitive knowledge "by acquiring that knowledge from data, from interacting with the environment, from learning. That's why current AI is working so much better than the old AI."


New AI program better at detecting depressive language in social media

#artificialintelligence

A new technology using artificial intelligence detects depressive language in social media posts more accurately than current systems, and it uses less data to do so. The technology, presented at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, is the first of its kind to show that small, high-quality datasets can be used with deep learning, a commonly used AI approach that is typically data-intensive, to detect depressive language more accurately. Previous psycholinguistic research has shown that the words we use in daily interaction with others are a good indicator of our mental and emotional state. Past attempts to apply deep learning techniques to detect and monitor depression in social media posts have proven tedious and expensive, explained Nawshad Farruque, a University of Alberta Ph.D. student in computing science who is leading the new study. He explained that a Twitter post saying somebody is depressed because Netflix is down isn't really expressing depression, so someone would need to "explain" this to the algorithm.
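
For illustration only, a compact deep text classifier trained on a handful of labeled posts might look like the sketch below; the tiny in-line "dataset", labels, and architecture are made-up placeholders, not the study's data or model:

```python
# Rough, assumption-based sketch of a compact deep text classifier of the kind
# such studies build. The tiny in-line "dataset", labels, and architecture are
# made-up placeholders, not the study's data or model.
import tensorflow as tf

texts = [
    "i feel empty and worthless every single day",
    "netflix is down and i am so depressed lol",      # not genuine depression
    "cannot get out of bed, nothing matters anymore",
    "great run this morning, feeling fantastic",
]
labels = [1.0, 0.0, 1.0, 0.0]  # 1 = depressive language, 0 = not (toy labels)

vectorizer = tf.keras.layers.TextVectorization(max_tokens=1000,
                                               output_sequence_length=20)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=10, verbose=0)

print(model.predict(tf.constant(["everything feels pointless lately"])))
```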


AI Accelerators and open software

#artificialintelligence

Three years ago there were maybe six or fewer AI accelerators; today there are more than two dozen, and more are coming. One of the first commercially available AI training accelerators was the GPU, and the undisputed leader of that segment was Nvidia. Nvidia was already preeminent in machine learning (ML) and deep learning (DL) applications, and adding neural-network acceleration was a logical and rather straightforward step for the company. Nvidia also brought a treasure trove of applications to its GPUs through the company's proprietary development language, CUDA. The company developed CUDA in 2006 and empowered hundreds of universities to offer courses on it.


How to Deploy Machine Learning Models on Mobile and Embedded Devices

#artificialintelligence

Thanks to libraries such as Pandas, scikit-learn, and Matplotlib, it is relatively easy to start exploring datasets and make some first predictions using simple Machine Learning (ML) algorithms in Python. However, to make these trained models useful in the real world, it is necessary to make them available for predictions on the web or on portable devices. In two of my previous articles, I explained how to create and deploy a simple Machine Learning model using Heroku/Flask and TensorFlow.js. Today, I will instead explain how to deploy Machine Learning models on smartphones and embedded devices using TensorFlow Lite. TensorFlow Lite is a platform developed by Google to run Machine Learning models on mobile, IoT (Internet of Things), and embedded devices.
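
As a minimal sketch of the typical TensorFlow Lite workflow (the model and file names are placeholders, not from the article), a trained Keras model is converted to a .tflite flat buffer and then executed with the TFLite interpreter:

```python
# Minimal sketch of the usual TensorFlow Lite workflow: convert a trained Keras
# model and run inference with the TFLite interpreter. Model/file names are
# placeholders, not from the article.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # hypothetical trained model

# 1. Convert the model to the TFLite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_bytes = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# 2. Run inference on-device with the interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
x = np.random.rand(*inp["shape"]).astype(inp["dtype"])  # placeholder input
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```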


50 Most Popular AI-influencers of North America

#artificialintelligence

It has been more than six decades since the concept of Artificial Intelligence transformed from imagination into an academic discipline. Influencers, especially those active on social media, help give direction to policymakers and academicians. They keep the general public updated on the trends and the 'what is what' of AI, Machine Learning, and associated concepts like Big Data and Blockchain. AiThority introduces you to the 50 most popular AI influencers of North America. A PhD in industrial-organizational psychology, his interests lie in Data Science, CX, Statistics, and Machine Learning.


A Deeper Understanding Is Needed To Improve Neural Networks

#artificialintelligence

The development of neural networks is not new. In fact, neural networks have been around since the 1940s, according to MIT News, but broad interest in practical applications of the technology has only emerged recently. To begin, let's define a neural network. According to Investopedia: "A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Neural networks can adapt to changing input; so, the network generates the best possible result without needing to redesign the output criteria."
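
To make that definition concrete, here is a generic toy example (not from the article): a tiny two-layer network, written from scratch in NumPy, that learns the nonlinear XOR relationship from data via gradient descent:

```python
# Toy illustration of the definition above: a tiny two-layer network that learns
# the nonlinear XOR relationship from data via gradient descent. Generic
# from-scratch example, not code from the article.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR target

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = np.tanh(X @ W1 + b1)          # hidden representation
    prob = sigmoid(hidden @ W2 + b2)       # predicted probability
    # Backpropagate the cross-entropy error and update all parameters.
    d_out = prob - y
    dW2, db2 = hidden.T @ d_out, d_out.sum(axis=0)
    d_hidden = (d_out @ W2.T) * (1.0 - hidden ** 2)
    dW1, db1 = X.T @ d_hidden, d_hidden.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(prob.ravel(), 2))  # approaches [0, 1, 1, 0]
```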