analytic model


What We Can Learn about AI and Creating Smart Products from "The Incredibles"

#artificialintelligence

Nothing strikes terror into the hearts of humans more than the idea of an intelligent robot gone bad. The fear is that a robot could acquire the ability to learn and adapt to the point of superseding its human creators…and with evil intentions. From Gort ("The Day the Earth Stood Still") to Sonny ("I, Robot"), films provide a wide variety of potential robot scenarios. Only a few of these film robots have demonstrated artificial intelligence to the point where they have threatened humankind (like the Cyberdyne Systems series T-800 Model 101 in "Terminator" and the unnamed NS-5 robots in "I, Robot"). However, the one evil robot that demonstrated its ability to continuously learn through experimentation and failure is the Omnidroid from "The Incredibles".


Build an IoT hub for streaming, storing, and analyzing sensor data in the cloud: Connect an Android device to the IBM Cloud, build a Node-RED dashboard, and build an AI classifier

#artificialintelligence

In this tutorial, we present the high-level steps that are involved in connecting an Android device to the cloud and developing analytics models to analyze sensor data. By the end of this tutorial, you should be able to set up your own IoT hub for streaming, storing, and processing device data. The following figure shows the architecture of our sample app. This tutorial requires an Android device (smartphone), an internet connection, and an IBM Cloud account. In Step 1, you will create an account on IBM Cloud and install an application on your Android phone.
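As a taste of the device-to-cloud step, here is a minimal Python sketch, standing in for the tutorial's Android app, that publishes one sensor reading to the IBM Watson IoT Platform over MQTT. The org ID, device type, device ID, and auth token are placeholders you would replace with values from your own IBM Cloud account.

```python
# pip install paho-mqtt  (1.x client API shown)
import json
import paho.mqtt.client as mqtt

ORG = "myorg"            # placeholder: your Watson IoT organization ID
DEVICE_TYPE = "android"  # placeholder: device type registered in IBM Cloud
DEVICE_ID = "phone-001"  # placeholder: registered device ID
AUTH_TOKEN = "secret"    # placeholder: the device's auth token

# Watson IoT device clients use the client ID form d:<org>:<type>:<device>
client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.tls_set()  # TLS, paired with port 8883 below

client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 8883, keepalive=60)

# Device events go to iot-2/evt/<event-id>/fmt/json, payload wrapped in "d"
payload = {"d": {"accelX": 0.02, "accelY": 0.41, "accelZ": 9.78}}
client.publish("iot-2/evt/sensor/fmt/json", json.dumps(payload), qos=1)
client.disconnect()
```

A Node-RED flow subscribed to the same event topic can then chart, store, or classify the readings, as the tutorial goes on to describe.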


Navigate Turbulence with the Resilience of Responsible AI - InformationWeek

#artificialintelligence

The COVID-19 pandemic has caused data scientists and business leaders alike to scramble, looking for answers to urgent questions about the analytic models they rely on. Financial institutions, companies, and the customers they serve are all grappling with unprecedented conditions, and a loss of control that may seem best remedied with completely new decision strategies. If your company is contemplating a rush to crank out brand-new analytic models to guide decisions in this extraordinary environment, wait a moment. Look carefully at your existing models first. Existing models that have been built responsibly -- incorporating artificial intelligence (AI) and machine learning (ML) techniques that are robust, explainable, ethical, and efficient -- have the resilience to be leveraged and trusted in today's turbulent environment.


Leaving Money on the Table and the Economics of Composable, Reusable Analytic Modules

#artificialintelligence

When I was the Vice President of Advertiser Analytics at Yahoo, this became a key focus guiding the analytics that we were delivering to advertisers to help them optimize their spend across the Yahoo ad network. Advertisers had significant untapped advertising and marketing spend that we could not capture because we could not deliver the audience, content, and campaign insights to help them spend that money with us. That "Money on the Table" (MOTT) was huge. Now here I am again, noticing the same massive MOTT economic opportunity across all companies – orphaned analytics. Orphaned analytics are one-off analytics developed to address a specific use case but never "operationalized" or packaged for re-use across other organizational use cases.


Enhancing data analytics with machine learning and AI

#artificialintelligence

How are some of the world's largest data analytics providers utilising machine learning to enhance their offerings? Recent research has shown that companies that use analytics for decision making are 6% more profitable than those that don't. Harnessing analytics within business operations can benefit companies in a number of ways, including the capacity to be proactive and anticipate needs, mitigate risks, increase product quality and personalisation, and optimise the customer experience. As a result of these benefits, the technology industry has seen giants such as Microsoft, Amazon, and IBM ramp up their investments in Big Data, with the sector expected to reach over US$273bn in value by 2023. What is machine learning, and how can it be applied to data analytics?


Predicting Tech Trends in Education is Hard, Especially about the Future

#artificialintelligence

In the last few months, two "predictive" documents found their way into our hands. The first is the 2016 NMC[1]/CoSN[2] Horizon report for elementary and secondary education, and the second is the SURF Trend report 2016: How technological trends enable customised education. Both are very interesting and well-written reports. However, they're also a bit tricky in that they're not really underpinned by concrete evidence from the educational sciences, and their predictions are therefore, in our opinion, a bit like reading tea leaves: they're very visible, but what do they mean? Before discussing the SURF Trend report 2016, an aside frames some background.


4 Machine Learning Use Cases in the Automotive Sector - Anaconda

#artificialintelligence

From parts suppliers to vehicle manufacturers, service providers to rental car companies, the automotive and related mobility industries stand to gain significantly from implementing machine learning at scale. We see the big automakers investing in proof-of-concept projects at various stages, while disruptors in the field of autonomous driving are trying to build entirely new businesses on a foundation of artificial intelligence and machine learning. There are huge opportunities for machine learning to improve both processes and products all along the automotive value chain. But where do you focus? And how can you make sure your investments in machine learning aren't just expensive, "one-and-done" applications?


Machine Learning and Real-Time Analytics in Apache Kafka Applications

#artificialintelligence

The relationship between Apache Kafka and machine learning (ML) is an interesting one that I've written about quite a bit in How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning. This blog post addresses a specific part of building a machine learning infrastructure: the deployment of an analytic model in a Kafka application for real-time predictions. Model training and model deployment can be two separate processes. However, you can also use many of the same steps for integration and data preprocessing because you often need to perform the same integration, filtering, enrichment, and aggregation of data for model training and model inference. We will discuss and compare two different options for model deployment: model servers with remote procedure calls (RPCs), and natively embedding models into Kafka client applications.
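To make the second option concrete, here is a minimal Python sketch (not the post's own code) of a Kafka consumer that embeds a pre-trained model and scores each event locally, with no RPC to a model server. The topic names, message schema, and model file are assumptions for illustration.

```python
# pip install confluent-kafka joblib scikit-learn
import json
import joblib
from confluent_kafka import Consumer, Producer

# Load the trained model once at startup; "model.pkl" is a hypothetical
# scikit-learn model serialized with joblib during the training process.
model = joblib.load("model.pkl")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "scoring-app",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["sensor-events"])  # hypothetical input topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Assumed message schema: {"features": [f1, f2, ...]}
    features = json.loads(msg.value())["features"]
    prediction = model.predict([features])[0]
    producer.produce("predictions", json.dumps({"prediction": int(prediction)}))
    producer.poll(0)  # serve delivery callbacks; batch flushes in production
```

Embedding the model this way trades the easier model management of a model server for lower latency and no extra network hop on the scoring path.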


What is Explainable AI and Why is it Needed?

#artificialintelligence

Imagine an advanced fighter aircraft patrolling a hostile conflict area when a bogey suddenly appears on radar, accelerating aggressively toward it. The pilot, with the assistance of an artificial intelligence co-pilot, has a fraction of a second to decide what action to take – ignore, avoid, flee, bluff, or attack. The costs associated with a false positive or a false negative are substantial – a wrong decision could potentially provoke a war or lead to the death of the pilot. What is one to do…and why? None other than the Defense Advanced Research Projects Agency (DARPA) and the Department of Defense (DoD) are interested not only in applying AI to decide what to do in hostile, unstable, and rapidly devolving environments, but also in understanding why an AI model recommended a particular action.
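The "why" in that last sentence is exactly what XAI techniques try to answer. As a hedged illustration (one common technique, not DARPA's method), the sketch below uses the SHAP library to attribute a single prediction of a tree-ensemble classifier to its input features:

```python
# pip install shap scikit-learn
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in model and data; any tree ensemble works with TreeExplainer.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
# Per-feature contributions to one prediction
# (the output format varies across shap versions)
shap_values = explainer.shap_values(data.data[:1])
```

Large positive or negative SHAP values flag the features that pushed the model toward its recommendation, which is the kind of answer a pilot (or a DARPA reviewer) would want alongside the action itself.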


Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning

arXiv.org Machine Learning

Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are infeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples - often collected online in real-time - and model errors may lead to drastic damage to the system. Directly incorporating physical insight enables a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method not only outperforms previous model learning approaches in learning speed, but also exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.
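For context, the physics prior the abstract builds on is standard Lagrangian mechanics. A minimal sketch of the imposed structure (textbook mechanics; the plausibility remark at the end is our reading, not a quote from the abstract):

```latex
% Lagrangian: kinetic minus potential energy
L(q, \dot{q}) = T(q, \dot{q}) - V(q)
% Euler-Lagrange equations of motion with generalized forces \tau
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = \tau
% For rigid-body systems this reduces to the manipulator equation
H(q)\,\ddot{q} + c(q, \dot{q}) + g(q) = \tau
```

Per the abstract, DeLaN represents the unknown terms of these equations with deep networks, so the learned dynamics satisfy this structure by construction (for example, a physically plausible mass matrix H(q)).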