In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. One of the most influential technologies of the past decade is the artificial neural network, the core component of deep learning algorithms, the bleeding edge of artificial intelligence. You can thank neural networks for many of the applications you use every day, such as Google's translation service, Apple's Face ID iPhone lock, and Amazon's Alexa AI-powered assistant. Neural networks are also behind some of the important artificial intelligence breakthroughs in other fields, such as diagnosing skin and breast cancer, and giving eyes to self-driving cars. The concept and science behind artificial neural networks have existed for many decades.
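The basic building block of such a network is the artificial neuron: it weights its inputs, sums them, and passes the result through a nonlinear activation. Below is a minimal sketch in plain Python; the weights, biases, and inputs are made-up illustration values, not parameters from any real trained model.

```python
import math

def sigmoid(x):
    # Classic activation function: squashes any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A "network" is just layers of such neurons feeding into one another.
# Two hidden neurons read the raw inputs; one output neuron reads them.
hidden = [neuron([0.5, 0.8], w, b)
          for w, b in [([0.2, -0.4], 0.1), ([0.7, 0.3], -0.2)]]
output = neuron(hidden, [1.0, -1.0], 0.0)
```

Deep learning scales this idea up: many layers of many neurons, with the weights adjusted automatically from training data rather than set by hand.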
Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people. The next wave of IT innovation will be powered by artificial intelligence and machine learning, and companies are looking at how to take advantage of it and get started. From driving cars to translating speech, machine learning is fueling an explosion in the capabilities of artificial intelligence -- helping software make sense of the messy and unpredictable real world. But what exactly is machine learning, and what is making the current boom possible? At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data.
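That high-level idea can be sketched in a few lines: show the system example input–output pairs, and let it repeatedly nudge its parameters to reduce prediction error. This toy example, with made-up data, fits a straight line y = w·x + b by gradient descent rather than any particular library's algorithm.

```python
# Example pairs generated by the rule y = 2x + 1 (the "training data").
data = [(1, 3), (2, 5), (3, 7), (4, 9)]

w, b = 0.0, 0.0                  # start knowing nothing

for _ in range(2000):            # repeatedly nudge w and b to shrink error
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= 0.01 * err * x      # gradient step on the squared error
        b -= 0.01 * err

# After training, w and b end up close to 2 and 1 -- the system has
# "learned" the rule from examples and can predict for unseen inputs.
prediction_for_5 = w * 5 + b
```

Real machine-learning systems differ mainly in scale and model complexity; the loop of predict, measure error, adjust is the same.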
Radiology plays a major role in the diagnosis and treatment of various diseases. Deep learning, also known as hierarchical learning, is a type of machine learning based on algorithms that learn representations of data. It is increasingly used in medicine, particularly in radiology. Deep learning helps machines and computers learn by example: the system picks up classification tasks directly from raw sound, text, or image input, rather than from hand-engineered features.
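To make "learning a classification task directly from image input" concrete, here is a deliberately tiny sketch: a single-layer perceptron (a simplification of the deep networks used in practice) learns to tell 3x3 "images" of vertical bars from horizontal bars using only raw pixel values. The images and labels are toy examples, not radiology data.

```python
# Each image is a flattened 3x3 pixel grid; label 1 = vertical bar.
images = [
    ([1,0,0, 1,0,0, 1,0,0], 1),  # vertical bar, left column
    ([0,0,1, 0,0,1, 0,0,1], 1),  # vertical bar, right column
    ([1,1,1, 0,0,0, 0,0,0], 0),  # horizontal bar, top row
    ([0,0,0, 0,0,0, 1,1,1], 0),  # horizontal bar, bottom row
]

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    # Weighted sum of the raw pixels decides the class.
    s = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

for _ in range(20):              # perceptron learning rule
    for pixels, label in images:
        err = label - predict(pixels)
        weights = [w + err * p for w, p in zip(weights, pixels)]
        bias += err
```

No one told the program what a "bar" is; it inferred the distinction from labeled examples. Deep networks apply the same principle with many layers, which lets them handle far subtler patterns, such as those in medical images.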
Unlike with traditional software, we don't always have an exact idea of how AI arrives at its outputs, and in numerous cases the opacity of deep-learning algorithms has caused serious problems. In 2017, a Palestinian construction worker in Beitar Illit, a West Bank settlement near Jerusalem, posted a picture of himself on Facebook in which he was leaning against a bulldozer. Shortly after, Israeli police arrested him on suspicion that he was planning an attack, because the caption of his post read "attack them." The real caption was "good morning" in Arabic, but for reasons never explained, Facebook's artificial intelligence-powered translation service rendered the text as "hurt them" in English and "attack them" in Hebrew. The Israel Defense Forces use Facebook's automated translation to monitor the accounts of Palestinian users for possible threats.