Deep Learning


DeepMind AI can predict kidney illness 48 hours before it occurs

#artificialintelligence

DeepMind also had its mobile assistant for clinicians, known as Streams, evaluated by researchers at University College London. The results show that, through the app, specialists reviewed urgent cases within 15 minutes, as opposed to several hours. And only 3.3 percent of AKI cases were missed, compared to 12.4 percent without the app. Streams also led to health care cost savings. Combined with DeepMind's new AKI-detecting algorithm, Streams could offer improved early detection.


Using deep learning to "read your thoughts" -- with Keras and EEG

#artificialintelligence

When you say a word in your mind, your brain does not fully decouple the process of "sub-vocalizing" that word from speaking it, which can result in minor or even imperceptible movements of the mouth, tongue, larynx, or other facial muscles. Activating a muscle is not a single "command," as we might imagine in the digital world, but involves the repeated firing of multiple motor units (collections of muscle fibers and neuron terminals) at a rate of somewhere between 7–20 Hz, depending on the size and structure of the muscle. These firings provide the electrical signal we are looking for, which we can read using an EMG sensor. To read the signals I used an OpenBCI board, technically designed for EEG, which I had on hand from some previous biofeedback experiments. EEG typically requires higher sensitivity, so if anything, this should help in picking up the faint EMG signals we are looking for.
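As a rough illustration of the signal-conditioning step such a setup implies, here is a minimal sketch that band-pass filters a raw EMG channel before it would be fed to a classifier. The sample rate, band edges, and synthetic data are assumptions for illustration, not values taken from the article:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_emg(samples, fs=250.0, low=7.0, high=20.0, order=4):
    """Band-pass filter a raw EMG channel.

    fs       -- sample rate in Hz (250 Hz is typical for an OpenBCI
                Cyton board; treat it as an assumption here)
    low/high -- band edges in Hz; 7-20 Hz matches the motor-unit firing
                rates mentioned above, though surface EMG energy often
                extends well beyond this band
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, samples)  # zero-phase filtering

# Synthetic demo: 12 Hz "motor unit" activity buried in DC drift and noise
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
raw = 0.5 * np.sin(2 * np.pi * 12 * t) + 2.0 + 0.1 * np.random.randn(t.size)
clean = bandpass_emg(raw, fs=fs)  # DC offset removed, band retained
```

Windows of the filtered signal could then be fed to a small Keras model, which is the approach the article's title suggests.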


Global Artificial Intelligence in Retail Market

#artificialintelligence

The Global Artificial Intelligence in Retail Market was valued at US$993.6 Mn in 2017 and is expected to reach US$8,314 Mn by 2026, at a CAGR of 30.41% over the forecast period. The report is segmented by type, technology, solution, service, deployment mode, application, and region. By type, the market covers online and offline retail. The technology segment is sub-segmented into machine learning and deep learning, natural language processing, and others. The solution segment comprises product recommendation & planning, customer relationship management, visual search, virtual assistant, price optimization, payment services management, supply chain management & demand planning, and others, including website and content optimization, space planning, and fraud detection.


Top 10 Limitations of Artificial Intelligence and Deep Learning - Amit Ray

#artificialintelligence

Artificial Intelligence (AI) has provided remarkable capabilities and advances in image understanding, voice recognition, face recognition, pattern recognition, natural language processing, game playing, military applications, financial modeling, language translation, and search engine optimization. In medicine, deep learning is now one of the most powerful and promising tools of AI; it can enhance every stage of patient care, from research, omics data integration, combating antibiotic-resistant bacteria, and drug design and discovery to diagnosis and selection of appropriate therapy. It is also the key technology behind self-driving cars. However, the deep learning algorithms of AI have several built-in limitations. To utilize the full power of artificial intelligence, we need to know its strengths and weaknesses, and the ways to overcome those limitations in the near future.


One model to learn them all

#artificialintelligence

I recently stumbled upon a paper by the Google Brain team called "One model to learn them all". I just had to open it and take a look because of its great title (best ML paper title ever?), and I quickly discovered that it is about a fascinating idea. In this article I want to briefly summarize what I personally found most interesting. Can we create a unified deep learning model to solve tasks across multiple domains?
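The core idea, one set of shared parameters serving many tasks, can be sketched in a few lines. This is a toy illustration only, not the paper's actual MultiModel architecture (which uses modality-specific sub-networks, a shared encoder/decoder, and mixture-of-experts layers); all names and sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared "trunk" transforms inputs from any task into a common
# representation; each task only adds a small task-specific head.
shared_W = rng.normal(size=(16, 8))           # shared encoder weights

task_heads = {
    "sentiment": rng.normal(size=(8, 2)),     # 2-class head
    "topic":     rng.normal(size=(8, 5)),     # 5-class head
}

def predict(x, task):
    """Run the shared encoder, then the head for the requested task."""
    h = np.tanh(x @ shared_W)                 # shared representation
    logits = h @ task_heads[task]
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax probabilities

x = rng.normal(size=16)                       # a single input vector
p_sent = predict(x, "sentiment")              # shape (2,)
p_topic = predict(x, "topic")                 # shape (5,)
```

During training, gradients from every task would update `shared_W`, which is what lets knowledge transfer across domains.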


The best Machine & Deep Learning books

#artificialintelligence

The #1 book, which got the most votes, is "Understanding Machine Learning: From Theory to Algorithms" by Shai Shalev-Shwartz and Shai Ben-David. The book was first published in 2014 by Cambridge University Press and is aimed at students who want to learn the basics of machine learning and become familiar with all the important algorithms in the field.


MGCodesandStats

#artificialintelligence

Could you imagine a future where computers made economic decisions rather than governments and central bankers? With all of the economic mishaps we've seen over the past decade, one could say it isn't a particularly bad idea! Natural language processing could allow us to make more sense of the economy than we do currently. As it stands, investors and policymakers use index benchmarks and quantitative measures such as GDP growth to gauge economic health. One potential application of NLP, by contrast, is to analyse text data (such as major economic policy documents) and then "learn" from such texts in order to generate appropriate economic policies independently of human intervention.
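As a toy illustration of the kind of text analysis being suggested, here is a minimal sketch that scores a policy document against small hand-picked word lists. Real NLP pipelines for economic text use far richer models; the word lists and the sample sentence below are assumptions for illustration only:

```python
import re
from collections import Counter

# Hand-picked word lists -- purely illustrative, not a validated lexicon.
EXPANSIONARY = {"growth", "stimulus", "easing", "investment", "recovery"}
CONTRACTIONARY = {"tightening", "austerity", "deficit", "inflation", "recession"}

def policy_tone(text):
    """Return a crude tone score in [-1, 1] for a policy document:
    +1 means fully expansionary language, -1 fully contractionary."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    pos = sum(n for w, n in words.items() if w in EXPANSIONARY)
    neg = sum(n for w, n in words.items() if w in CONTRACTIONARY)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

sample = ("The committee favours continued monetary easing to support "
          "growth and investment while monitoring inflation.")
score = policy_tone(sample)   # 3 expansionary vs. 1 contractionary hit
```

A score like this could feed into a downstream model as one feature among many; generating policy from text, as the article speculates, would require far more than word counts.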


DeepMicroNet: Machine Learning and Microwaves for Estimating Tropical Cyclone Intensity #ExtremeWeather #Hurricane #TropicalCyclone #Microwaves #MachineLearning #ArtificialIntelligence #DeepLearning @UWCIMSS

#artificialintelligence

A deep learning convolutional neural network model is used to explore the possibilities of estimating tropical cyclone (TC) intensity from satellite images in the 37- and 85–92-GHz bands. The model, called "DeepMicroNet," has unique properties such as a probabilistic output, the ability to operate from partial scans, and resiliency to imprecise TC center fixes. The 85–92-GHz band is the more influential data source in the model, with 37 GHz adding a marginal benefit. Training the model on global best track intensities produces model estimates precise enough to replicate known best track intensity biases when compared to aircraft reconnaissance observations. Model root-mean-square error (RMSE) is 14.3 kt (1 kt = 0.5144 m s⁻¹) compared to two years of independent best track records, but this improves to an RMSE of 10.6 kt when compared to the higher-standard aircraft reconnaissance-aided best track dataset, and to 9.6 kt compared to the reconnaissance-aided best track when using the higher-resolution TRMM TMI and Aqua AMSR-E microwave observations only. A shortage of training and independent testing data for category 5 TCs leaves the results at this intensity range inconclusive. Based on this initial study, the application of deep learning to TC intensity analysis holds tremendous promise for further development with more advanced methodologies and expanded training datasets. If you would like to learn more about this work, check out the publication titled "Using Deep Learning to Estimate Tropical Cyclone Intensity from Satellite Passive Microwave Imagery". If you would like to learn more about models for predicting tropical storms, check out this presentation by NASA titled "Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks".
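To make the error figures above concrete, here is a minimal sketch of how an RMSE like the reported 14.3 kt is computed from model estimates and a best track record. The intensity values below are made up for illustration; they are not from the DeepMicroNet evaluation:

```python
import numpy as np

def rmse(estimates, truth):
    """Root-mean-square error between intensity estimates and a
    reference record, both in knots."""
    d = np.asarray(estimates, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical model estimates vs. best track intensities (kt)
model_kt = [62, 88, 101, 130, 45]
best_track_kt = [55, 95, 100, 140, 50]
err = rmse(model_kt, best_track_kt)   # RMSE in knots
```

Comparing such an RMSE against two reference datasets of different quality, as the study does with best track versus reconnaissance-aided best track, changes the apparent skill of the same model.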


HPE Accelerates Artificial Intelligence Innovation with Enterprise-Grade Solution for Managing Entire Machine Learning Lifecycle

#artificialintelligence

Hewlett Packard Enterprise (HPE) today announced a container-based software solution, HPE ML Ops, to support the entire machine learning model lifecycle for on-premises, public cloud, and hybrid cloud environments. The new solution introduces a DevOps-like process to standardize machine learning workflows and accelerate AI deployments from months to days. HPE ML Ops extends the capabilities of the BlueData EPIC container software platform, providing data science teams with on-demand access to containerized environments for distributed AI/ML and analytics. BlueData was acquired by HPE in November 2018 to bolster its AI, analytics, and container offerings, and complements HPE's Hybrid IT solutions and HPE Pointnext Services for enterprise AI deployments. Enterprise AI adoption has more than doubled in the last four years, and organizations continue to invest significant time and resources in building machine learning and deep learning models for a wide range of AI use cases such as fraud detection, personalized medicine, and predictive customer analytics.