neural network


Gartner Says Global Artificial Intelligence Business Value to Reach $1.2 Trillion in 2018

#artificialintelligence

Global business value derived from artificial intelligence (AI) is projected to total $1.2 trillion in 2018, an increase of 70 percent from 2017, according to Gartner, Inc. AI-derived business value is forecast to reach $3.9 trillion in 2022. The Gartner AI-derived business value forecast assesses the total business value of AI across all the enterprise vertical sectors covered by Gartner. There are three different sources of AI business value: customer experience, new revenue, and cost reduction. "AI promises to be the most disruptive class of technologies during the next 10 years due to advances in computational power, volume, velocity and variety of data, as well as advances in deep neural networks (DNNs)," said John-David Lovelock, research vice president at Gartner. "One of the biggest aggregate sources for AI-enhanced products and services acquired by enterprises between 2017 and 2022 will be niche solutions that address one need very well."
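
As a quick back-of-the-envelope check on the figures quoted above (illustrative arithmetic only, not Gartner's forecasting model), the $1.2 trillion 2018 total and the 70 percent year-over-year growth imply a 2017 base of roughly $0.7 trillion, and reaching $3.9 trillion by 2022 implies an average annual growth rate of about 34 percent:

```python
# Back-of-the-envelope arithmetic on the forecast figures quoted above
# (illustrative only; Gartner's own model is not described in the article).
value_2018 = 1.2                      # trillions of dollars
growth_2017_to_2018 = 0.70
value_2022 = 3.9

implied_2017 = value_2018 / (1 + growth_2017_to_2018)
cagr_2018_to_2022 = (value_2022 / value_2018) ** (1 / 4) - 1

print(f"implied 2017 base: ${implied_2017:.2f} trillion")   # ~$0.71 trillion
print(f"implied 2018-2022 CAGR: {cagr_2018_to_2022:.1%}")   # ~34.3%
```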


Lagging In AI? Don't Worry, It's Still Early

#artificialintelligence

Without splitting a lot of hairs on definitions, it is safe to say that machine learning in its myriad forms is absolutely shaking up data processing. The techniques for training neural networks to chew through mountains of labeled data and make inferences against new data are set to transform every aspect of computation and automation. There is a mad dash to do something, as there always is at the beginning of every technology hype cycle. The hyperscalers are perfecting these technologies, which are changing fast, and by the time things settle out and the software stacks mature, then will be the time to act. That is the main idea that can be derived from a survey that OrionX.net conducted.


Future-proofing the public sector for AI innovation

#artificialintelligence

Editor's Note: This piece was written by Gary Newgaard, Vice President, Public Sector at Pure Storage. The opinions represented in this piece are independent of Smart Cities Dive's views. Ask average citizens about their biggest frustrations in dealing with government organizations and you're likely to conjure up at least a few stories of never-ending lines at the Department of Motor Vehicles (DMV). Bureaucracy and manual processes have, fairly or not, become synonymous with the business of government. They upset constituents, and chances are they don't help government workers get their jobs done, either.


Nvidia reveals an incredible AI that can reconstruct badly-damaged photos with remarkable accuracy

Daily Mail

Photoshop could become a thing of the past thanks to new technology that can touch up badly damaged photos. The Nvidia software uses AI and deep-learning algorithms to predict what a missing portion of a picture should look like and recreate it with incredible accuracy. As well as restoring old physical photos that have been damaged, the technique could also be used to fix corrupted pixels or bad edits made to digital files. Graphics specialist Nvidia, based in Santa Clara, California, trained its neural network using a variety of irregularly shaped holes in images. The system then determined what was missing from each and filled in the gaps.
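
The description above boils down to a masked-reconstruction training setup: punch irregular holes in training images and train a network to predict the missing pixels. The sketch below is a toy illustration of that idea in PyTorch, not Nvidia's published model; the mask generator, network, and loss are all simplified placeholders.

```python
# Illustrative sketch only: a toy inpainting setup in PyTorch, not NVIDIA's
# partial-convolution model. It shows the general idea the article describes:
# punch irregular holes in training images and train a network to fill them in.
import torch
import torch.nn as nn

def random_irregular_mask(h=64, w=64, n_strokes=8):
    """Return a {0,1} mask with irregular 'holes' (0 = missing pixels)."""
    mask = torch.ones(1, h, w)
    for _ in range(n_strokes):
        y, x = torch.randint(0, h, (1,)).item(), torch.randint(0, w, (1,)).item()
        for _ in range(torch.randint(10, 40, (1,)).item()):
            y = max(0, min(h - 1, y + torch.randint(-3, 4, (1,)).item()))
            x = max(0, min(w - 1, x + torch.randint(-3, 4, (1,)).item()))
            mask[:, max(0, y - 2):y + 3, max(0, x - 2):x + 3] = 0.0
    return mask

class TinyInpainter(nn.Module):
    """A small conv encoder-decoder that predicts the full image from a masked one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, image, mask):
        # Feed the masked image plus the mask itself as a 4th channel.
        return self.net(torch.cat([image * mask, mask], dim=1))

# One illustrative training step on a dummy batch; a real run would loop over photos.
model = TinyInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 3, 64, 64)                      # stand-in for real photos
masks = torch.stack([random_irregular_mask() for _ in range(8)])
pred = model(images, masks)
loss = ((pred - images) ** 2 * (1 - masks)).mean()     # penalize error inside the holes
opt.zero_grad()
loss.backward()
opt.step()
```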


AI, machine learning and the reasoning machine with Dr. Geoff Gordon - Microsoft Research

#artificialintelligence

Teaching computers to read, think and communicate like humans is a daunting task, but it's one that Dr. Geoff Gordon embraces with enthusiasm and optimism. Moving from an academic role at Carnegie Mellon University to a new role as Research Director of the Microsoft Research Lab in Montreal, Dr. Gordon embodies the current trend toward partnership between academia and industry as we enter what many believe will be a new era of progress in machine learning and artificial intelligence. Today, Dr. Gordon gives us a brief history of AI, including his assessment of why we might see a break in the weather pattern of AI winters, talks about how collaboration is essential to innovation in machine learning, shares his vision of the mindset it takes to tackle the biggest questions in AI, and reveals his life-long quest to make computers less… well, less computer-like. Geoff Gordon: You cannot know ahead of time exactly what's going to come out, because if you knew, it wouldn't be research. You don't expect your payoffs to be measured in months or even necessarily a couple of years. But it could be that the things you're doing now pay off ten years later. And so, Microsoft has decided that MSR is in it for the long term, and that changes the type of research that you can do, right? You can afford to make big bets. Host: You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. Geoff Gordon, thanks for coming all the way from Montreal to join us in the studio today.


Using AI For Predictive Investment Analysis

#artificialintelligence

As artificial intelligence becomes more advanced, machines will be able to comb through massive quantities of market data more quickly and effectively to identify patterns and other investment intelligence that traders can use to outperform the market. AI platform TrueRisk uses machine learning, predictive analytics and big data coupled with its proprietary artificial neural networks to look for trading signals in more than 4,000 U.S. stocks. TrueRisk's algorithms make predictions that are non-correlated with standard industry metrics and can provide traders with an alternative view of where the market is headed.
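
TrueRisk's models and data are proprietary, but the general pattern of signal prediction the article describes can be sketched generically: turn a price history into lagged-return features, train a small neural network to predict next-day direction, and evaluate it walk-forward so training data always precedes test data. The snippet below uses synthetic prices and scikit-learn purely for illustration; it is not TrueRisk's system.

```python
# A generic sketch of signal prediction from price history, not TrueRisk's
# proprietary system: lagged daily returns as features, next-day direction as label.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
prices = np.cumprod(1 + rng.normal(0, 0.01, 1000))    # synthetic price series
returns = np.diff(prices) / prices[:-1]

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)                  # 1 = next day up

# Walk-forward evaluation: each fold trains on the past and tests on the future.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model.fit(X[train_idx], y[train_idx])
    print(f"fold accuracy: {model.score(X[test_idx], y[test_idx]):.3f}")
```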


Deep learning predicts drug-drug and drug-food interactions: Development of a deep learning-based computational framework that predicts interactions for drug-drug or drug-food constituent pairs

#artificialintelligence

Drug interactions, including drug-drug interactions (DDIs) and drug-food constituent interactions (DFIs), can trigger unexpected pharmacological effects, including adverse drug events (ADEs), with causal mechanisms often unknown. However, current prediction methods do not provide sufficient detail beyond the chance of DDI occurrence, or require detailed drug information that is often unavailable for DDI prediction. To tackle this problem, Dr. Jae Yong Ryu, Assistant Professor Hyun Uk Kim and Distinguished Professor Sang Yup Lee, all from the Department of Chemical and Biomolecular Engineering at Korea Advanced Institute of Science and Technology (KAIST), developed a computational framework, named DeepDDI, that accurately predicts 86 DDI types for a given drug pair. The research results were published online in Proceedings of the National Academy of Sciences of the United States of America (PNAS) on April 16, 2018, under the title "Deep learning improves prediction of drug-drug and drug-food interactions." DeepDDI takes the structural information and names of the two drugs in a pair as input and predicts the relevant DDI types for that pair.
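
At a high level, the prediction task is pairwise classification: represent each drug's structure as a fixed-length feature vector, feed the pair to a deep network, and output one of the 86 interaction types. The sketch below illustrates only that setup; the feature length, layer sizes, and training details are assumptions for illustration, not the published DeepDDI architecture.

```python
# Illustrative sketch of the prediction task DeepDDI addresses (not the published
# architecture): encode each drug in a pair as a fixed-length structural feature
# vector, concatenate the pair, and classify into one of 86 interaction types.
import torch
import torch.nn as nn

N_FEATURES = 200     # assumed length of a per-drug structural feature vector
N_DDI_TYPES = 86     # number of interaction types reported in the paper

class PairwiseDDIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * N_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, N_DDI_TYPES),    # logits over DDI types
        )
    def forward(self, drug_a, drug_b):
        return self.mlp(torch.cat([drug_a, drug_b], dim=-1))

# Dummy batch: in practice the features would be computed from each drug's
# chemical structure (the paper's inputs are structural information plus names).
model = PairwiseDDIClassifier()
drug_a = torch.rand(32, N_FEATURES)
drug_b = torch.rand(32, N_FEATURES)
labels = torch.randint(0, N_DDI_TYPES, (32,))
loss = nn.CrossEntropyLoss()(model(drug_a, drug_b), labels)
loss.backward()
```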


Deep Learning for Traffic Signs Recognition – Becoming Human: Artificial Intelligence Magazine

#artificialintelligence

Code for this project can be found on GitHub. This article can also be found on my website here. As part of completing the second project of Udacity's Self-Driving Car Engineer online course, I had to implement and train a deep neural network to identify German traffic signs. In total, the dataset used consisted of 51,839 RGB images with dimensions 32x32, and it is publicly accessible on this website. A validation set was used to assess how well the model was performing.
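
For readers curious what such a network might look like, below is a minimal convolutional classifier for 32x32 RGB images in PyTorch. It is a generic sketch, not the architecture from the article, and the 43-class output is an assumption based on the standard German traffic sign benchmark.

```python
# A minimal CNN sketch for 32x32 RGB traffic-sign classification; the article's
# actual architecture may differ, and the 43-class output is an assumption.
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    def __init__(self, n_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x))

# Sanity check on a dummy batch of 32x32 RGB images.
model = TrafficSignNet()
logits = model(torch.rand(16, 3, 32, 32))   # -> shape (16, 43)
```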


Machine Learning Invades Embedded Applications

#artificialintelligence

Two things have moved deep-neural-network-based (DNN) machine learning (ML) from research to mainstream. The first is improved computing power, especially general-purpose GPU (GPGPU) improvements. The second is wider distribution of ML software, especially open-source software. Quite a few applications are driving adoption of ML, including advanced driver-assistance systems (ADAS) and self-driving cars, big-data analysis, surveillance, and improving processes from audio noise reduction to natural language processing. Many of these applications utilize arrays of GPGPUs and special ML hardware, especially for handling training that uses large amounts of data to create models that require significantly less processing power to perform a range of recognition and other ML-related tasks.
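
One concrete illustration of the train-heavy, run-light split described above is post-training quantization: a model trained in 32-bit floating point on GPUs can be converted to 8-bit weights for cheaper embedded inference. The snippet below shows one way to do this with PyTorch's dynamic quantization on a stand-in model; it is an example of the general technique, not a recipe tied to any product mentioned in the article.

```python
# Shrinking a trained model for cheaper inference with PyTorch dynamic
# quantization (one option among several; TensorFlow Lite, ONNX Runtime, etc.
# serve the same purpose on embedded targets).
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in for an already-trained network
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Convert the Linear layers to 8-bit dynamically quantized versions.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def param_bytes(m):
    return sum(p.numel() * p.element_size() for p in m.parameters())

print("fp32 parameter bytes:", param_bytes(model))
with torch.no_grad():
    out = quantized(torch.rand(1, 256))     # 8-bit weights, same call interface
print("quantized model output shape:", tuple(out.shape))
```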


Deep Learning: A Next-Generation Big-Data Approach for Hydrology - Eos

#artificialintelligence

In popular culture, Artificial Intelligence (AI) often refers to machines that can perform any intellectual task that humans can. Such machines are heavily romanticized and are still very far from becoming a reality. However, weak (or narrow) AIs, algorithms that are designed to perform a specific task, have shown formidable intellectual prowess that surpasses human capabilities in certain tasks. These machines must have integrative decision-making capability based on what they receive and what they predict would happen. Take, for example, AlphaGo, the AI that famously defeated world champions at the ancient game "Go."