

How Companies Are Using AI in the Field of Patient Data Mining

#artificialintelligence

One of the ways AI is and will continue to be helpful in healthcare is by giving medical professionals the ability to create treatment plans and discover the methods best suited to helping their patients; instead of battling the tread-wheel of bureaucracy, nurses and physicians can focus on doing their actual jobs. Since we are in the age of big data, patient information is becoming valuable, and tech giants such as IBM and Google are becoming more involved in acquiring it; as a result, companies are using AI in the field known as patient data mining in a variety of ways. Google's AI research branch recently launched a project known as Google DeepMind Health, which focuses on mining medical records with the goal of providing faster and better health services; the project can go through hundreds of thousands of medical records within minutes. Google's life sciences division is also working on a data-collecting initiative that aims to apply some of the same algorithms that power Google's search engine to analyze what makes a person healthy. This includes experimenting with disease-monitoring technologies, such as a digital contact lens that might detect blood sugar levels.


Where Artificial Intelligence Is Now and What's Just Around the Corner

#artificialintelligence

Unexpected convergent consequences…this is what happens when eight different exponential technologies all explode onto the scene at once. This post (the second of seven) is a look at artificial intelligence. Future posts will look at other tech areas. An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try to predict the future when A.I., robotics, VR, synthetic biology and computation are all doubling, morphing and recombining, and you have a very exciting (read: unpredictable) future.


Global Bigdata Conference

#artificialintelligence

Since its creation, artificial intelligence (AI) has found use in many different industries, including healthcare. The amount of medical data is astronomically huge and the problem of systematizing, storing, and, above all, using such data is of the utmost importance. People have long hoped that someday, computers will make accurate diagnoses and eliminate medical errors. But no one has created an effective AI doctor yet. The Skychain project promises to revolutionize the healthcare industry, using AI and blockchain technology.


Future of AI revenue: Top 10 use cases for next decade

#artificialintelligence

Artificial intelligence already impacts many aspects of our daily lives at work, at home, and as we move about. Over the next decade, analyst firm Tractica predicts that annual global AI enterprise software revenue will grow from $644 million in 2016 to nearly $39 billion by 2025, and services-related revenue should reach almost $150 billion. These functional areas apply to many use cases and industries, and generate benefits for both businesses and individuals. Here are the top ten use cases that will reap financial rewards for AI technology product and service companies, and a broad spectrum of benefits for everyone else. Self-driving cars and other autonomous vehicles are consistently called the "next revolution" in transportation, in technology, and, some say, in civilization in general.
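Those forecast figures imply a very steep compound annual growth rate. As a rough sanity check (an illustrative calculation, not taken from the Tractica report itself), growing from $644 million to $39 billion over the nine years from 2016 to 2025 works out to roughly a 58% CAGR:

```python
# Rough check of the compound annual growth rate (CAGR) implied by
# Tractica's AI software revenue forecast. The start/end figures come
# from the article; the calculation itself is illustrative.
start = 0.644   # 2016 revenue, in billions of dollars
end = 39.0      # 2025 forecast, in billions of dollars
years = 2025 - 2016

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 58% per year
```

A growth rate near 60% per year, sustained for a decade, is what makes forecasts like this one eye-catching and also worth treating with some caution.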


If I Only Had a Brain: How AI 'Thinks'

#artificialintelligence

Artificial intelligence has gotten pretty darn smart--at least, at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think? The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it's rare and difficult to be intelligent in all of these ways.


10 Companies Using Machine Learning in Cool Ways

#artificialintelligence

If science-fiction movies have taught us anything, it's that the future is a bleak and terrifying dystopia ruled by murderous sentient robots. Fortunately, only one of these things is true – but that could soon change, as the doomsayers are so fond of telling us. Artificial intelligence and machine learning are among the most significant technological developments in recent history. Few fields promise to "disrupt" (to borrow a favored term) life as we know it quite like machine learning, but many of the applications of machine learning technology go unseen. Want to see some real examples of machine learning in action?


AI Is Learning How To Make You Cry At The Movies

#artificialintelligence

New research can predict how plots, images, and music affect your emotions while watching a movie. Every time I think AI can't surprise me anymore, new research arrives to prove me wrong. Yesterday, scientists at the MIT Media Lab announced that they've taught a machine how to manipulate our emotions--a technology that they believe can help filmmakers create more engrossing movies and TV. In a blog post published in collaboration with strategic consulting firm McKinsey & Company, the researchers said that they used a deep neural network to watch thousands of small slices of video--movies, TV, and short online features. For each slice, the neural network guessed which elements made a moment emotionally special, constructing an emotional arc.


Teaching Self-Learning Machines to Forget

#artificialintelligence

Many tasks in which humans excel are extremely difficult for robots and computers to perform. Especially challenging are decision-making tasks that are non-deterministic and, to use human terms, are based on experience and intuition rather than on predetermined algorithmic response. A good example of a task that is difficult to formalize and encode using procedural programming is image recognition and classification. For instance, teaching a computer to recognize that the animal in a picture is a cat is difficult to accomplish using traditional programming. Artificial intelligence (AI) and, in particular, machine learning technologies, which date back to the 1950s, use a different approach.
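The contrast the article draws, between hand-coded rules and learning from labeled examples, can be sketched with a toy classifier. This is purely illustrative: it uses scikit-learn's small handwritten-digits dataset as a stand-in for the cat-recognition problem, which in practice would need far more data and a deep network.

```python
# Minimal sketch: instead of hand-coding rules for what an image
# contains, a machine learning model infers patterns from labeled
# examples. Uses scikit-learn's built-in 8x8 digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()  # 1,797 small grayscale images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No explicit rules about strokes or shapes are written anywhere;
# the classifier generalizes from the training examples alone.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the absence of any if-this-pixel-then-that logic: the same few lines would work for a different image-labeling task given different training data, which is the shift from procedural programming that the article describes.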


Tone down your AI expectations - Enterprise Irregulars

#artificialintelligence

We have had many previous hype cycles around AI. As I wrote in Silicon Collar: "Since the 1950s! That is when Alan Turing defined his famous test to measure a machine's ability to exhibit intelligent behavior equivalent to that of a human. In 1959, we got excited when Allen Newell and his colleagues coded the General Problem Solver. In 1968, Stanley Kubrick sent our minds into overdrive with HAL in his movie, 2001: A Space Odyssey.


AI Is Learning How To Make You Cry At The Movies

#artificialintelligence

In a blog post published in collaboration with strategic consulting firm McKinsey & Company, the researchers said that they used a deep neural network to watch thousands of small slices of video--movies, TV, and short online features. For each slice, the neural network guessed which elements made a moment emotionally special, constructing an emotional arc. To test its accuracy, the team had human volunteers watch the same clips, tagging their reactions and labeling which elements--from the music to the dialogue to the type of imagery shown on screen--carried the most weight in their emotional response. This information helped the researchers fine-tune the resulting model until it got really accurate at guessing what triggers human emotions. So what does a machine learning model see when it watches the opening act of Pixar's Up?