In the world of business and design, we have started using terms like "algorithm" and "machine learning" as magic incantations for problems we would rather gloss over. It sounds like an impressive algorithm, but the starting point was the manual, time-consuming process of reading news articles and building a list of words and word pairs that seemed to define the issues we were looking for. Over the five-year life of the company, we manually identified more than 2,000 words and word-pair terms for the ranking. This process lines up well with Google's Human-Centered Machine Learning philosophy, which asks how people might solve a problem manually before resorting to algorithms.
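A hand-curated term list like this can drive a simple scoring pass over article text. The sketch below is a minimal illustration of the idea; the terms, weights, and function names are all invented for this example, not taken from the company's actual system:

```python
# Minimal sketch of ranking articles against a hand-built term list.
# Terms and weights here are hypothetical; word pairs get higher
# weights than single words because they are more specific.
ISSUE_TERMS = {
    "data breach": 3.0,
    "breach": 1.0,
    "lawsuit": 2.0,
    "recall": 1.5,
}

def score_article(text: str) -> float:
    """Sum the weights of every curated term found in the article."""
    lowered = text.lower()
    return sum(w for term, w in ISSUE_TERMS.items() if term in lowered)

articles = [
    "Regulators announce a recall after the data breach lawsuit.",
    "Quarterly earnings beat expectations.",
]
# Rank articles by how strongly they match the curated issue terms.
ranked = sorted(articles, key=score_article, reverse=True)
```

The point of the sketch is that the "algorithm" is trivial; the real work, as the paragraph notes, was the five years of manual curation behind the term list.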
The increasing number of satellites and advances in climate models have improved weather forecasting over the years. The climate data archives of the UK Met Office and the National Weather Service contain 45 petabytes of information. Researchers have used AI systems to spot cyclones, rank climate models, and identify extreme weather events using both modeled and real climate data. Machine learning, slowly but surely, seems to be gaining ground in weather forecasting and climate change research.
They've developed a social sentiment technology, based on deep learning, that lets brands capture customer sentiment with 90% accuracy. Most vendors today take one of two main approaches: sentiment analysis based on keyword scoring, or a calculation based on predefined categories. This technology, by contrast, understands the meaning of full sentences, and for the first time can accurately determine customer attitudes and contextual reactions in tweets, posts and articles.
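The keyword-scoring baseline that most vendors use can be sketched in a few lines. The lexicon below is invented for illustration; the example exists to show the weakness the paragraph alludes to: word-level scoring cannot see sentence-level context such as negation, which is why sentence-level models are an improvement:

```python
# Naive keyword-scoring sentiment: sum per-word polarities, ignore context.
# Lexicon values are hypothetical, for illustration only.
LEXICON = {"great": 1, "love": 1, "happy": 1,
           "bad": -1, "hate": -1, "terrible": -1}

def keyword_sentiment(text: str) -> int:
    tokens = text.lower().split()
    return sum(LEXICON.get(tok.strip(".,!?"), 0) for tok in tokens)

# The classic failure case: negation flips the meaning, but
# keyword scoring still counts "great" as positive.
print(keyword_sentiment("This product is great!"))      # → 1
print(keyword_sentiment("This product is not great."))  # → 1 (negation missed)
```

A model that reads whole sentences, rather than isolated keywords, is what lets a system distinguish these two cases.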
In this post I look at the popular gradient boosting algorithm XGBoost and show how CUDA and parallel algorithms can be applied to greatly decrease training times for decision tree algorithms. XGBoost is a supervised learning algorithm: it takes a set of labelled training instances as input and builds a model that aims to correctly predict the label of each training example based on the other, non-label information we know about it (known as the features of the instance). Figure 1 shows a simple decision tree model (I'll call it "Decision Tree 0") with two decision nodes and three leaves. XGBoost also extends the loss function with penalty terms for adding new decision tree leaves to the model, with the penalty proportional to the size of the leaf weights.
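The leaf-weight penalty can be made concrete. In XGBoost's formulation, each leaf j accumulates a gradient sum G_j and a hessian sum H_j over the examples it holds; the optimal leaf weight is -G_j / (H_j + λ), and the tree's objective charges γ for every leaf plus an L2 penalty λ on the leaf weights. The sketch below (function and variable names are mine, the formulas are XGBoost's standard ones) shows how those penalties enter the score:

```python
# Sketch of XGBoost's regularized objective for a single tree.
# For each leaf j with gradient sum G_j and hessian sum H_j:
#   optimal weight:  w_j* = -G_j / (H_j + lambda)
#   tree objective:  -0.5 * sum_j G_j^2 / (H_j + lambda) + gamma * T
# gamma penalizes each added leaf; lambda shrinks large leaf weights.

def leaf_weight(G: float, H: float, lam: float) -> float:
    """Optimal weight for one leaf under L2 regularization lambda."""
    return -G / (H + lam)

def tree_objective(leaves, lam: float, gamma: float) -> float:
    """leaves: list of (G, H) pairs, one per leaf. Lower is better."""
    score = sum(-0.5 * G * G / (H + lam) for G, H in leaves)
    return score + gamma * len(leaves)
```

Because each new leaf adds γ to the objective, a candidate split is only kept when the loss reduction it brings exceeds that per-leaf cost, which is what keeps trees from growing without bound.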
Deep learning and IoT are two game-changing technologies with the potential to transform operations for oil and gas companies facing profit pressure amid the dramatic drop in oil prices. Deep learning algorithms can automatically detect the pixel signatures of cracks and leaks in drone footage that humans can miss, thereby minimizing infrastructure risk. For remote diagnostics of industrial assets, the conventional form of interaction is the traditional dashboard. With the advent of natural language processing powered by deep learning, field technicians can instead interact with asset diagnostic applications through voice, just as bots do in customer service.
By modeling what human testers do, including manual work and test automation tasks such as scripting, Appvance has developed algorithms and expert systems to take on those tasks, much as driverless vehicle software models what a human driver does. The Appvance AI technology learns from a variety of existing data sources: it maps an application fully on its own, and draws on server logs, Splunk or Sumo Logic production data, form input data, valid headers and requests, expected responses, changes in each build, and more. The resulting test execution represents real, data-driven user flows with near-100% code coverage. Built from the ground up with DevOps, agile and cloud services in mind, Appvance offers beginning-to-end data-driven functional, performance, compatibility, security and synthetic APM test automation and execution, enabling dev and QA teams to identify issues in a fraction of the time of other test automation products.
An AI might apply an algorithm, or a series of algorithms, to an artificial neural network to train itself for various tasks. Deep learning builds on neural networks and machine learning techniques by applying many-layered networks, often with unsupervised learning. Because language is so complex, computers must carefully parse vocabulary, grammar and intent while also allowing for variation in word choice, which is why programmers often combine multiple AI approaches to NLP. Cognitive computing builds on neural networks and deep learning to create systems spanning multiple disciplines, including machine learning, natural language processing, speech recognition and human-computer interaction.
Founded in 2007, Cortica has taken in $69.4 million in total funding to develop "the world's only unsupervised learning system capable of human level image understanding." Founded in 2012, Fortscale has taken in $39 million in total funding to develop User & Entity Behavioral Analytics (UEBA), which identifies "internal threats" to your business using machine learning algorithms. With $22 million in funding from investors including Qualcomm and Cisco, Prospera has developed computer vision technologies that continuously monitor and analyze plant health, development and stress. We've recently written about more than 20 medical imaging startups; one of those articles, "9 Artificial Intelligence Startups in Medical Imaging," featured Zebra Medical Vision, which has taken in $20 million in funding so far and claims to have accumulated "one of the largest anonymized databases of medical imaging and clinical data available."
MIT's motto "Mens et Manus" (Latin for "mind and hand") echoes our values here at IBM: leverage the talent we have and create real technology with impact. Together with our fellow scientists at MIT, we selected four key pillars for our collaboration: core algorithmic advances that enable learning and reasoning, broadening what AI systems can do; computational innovations tailored to AI and achieved through a mastery of physics; applications of AI to important domains like healthcare and cybersecurity; and achieving shared prosperity through AI technology. While much of the work we plan to do together is focused on fundamental scientific breakthroughs, we are also deeply committed to leading in the application of AI to crucial problems in healthcare and security. Some of the questions we'll address together include creating AI systems that can detect and mitigate human biases, building trustworthiness and explainability into AI systems, ensuring that AI complements worker skills that might be in short supply, and exploring how productivity gains will be distributed across firms, workers and consumers.
They're doing this with Mya, an intelligent chatbot that, much like a recruiter, interviews and evaluates job candidates. But because AI depends on a training set generated by a human team, it can perpetuate bias rather than eliminate it, she adds; rather than removing biases, AI HR tools might entrench them. This is why, Grayevsky explains, Mya Systems "sets controls" over the kinds of data Mya uses to learn.