If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Microsoft AI chief Harry Shum says the company is fulfilling Bill Gates' vision of a computer that can understand us. If you ask Google "Is Hamilton a good musical?" it will send back a link to Quora, the question-and-answer service, where people ask that same question. The next link, a story published in Slate last year, is an interview with a critic who argues why the Pulitzer-, Grammy- and Tony-winning musical isn't revolutionary (their pun, not ours). Microsoft thinks it can do better. Beginning Wednesday, the company will start giving you more nuanced answers, powered by artificial intelligence software designed to identify different viewpoints.
I recently sat down with Bob Rogers, Intel's Chief Data Scientist for Analytics and AI, to seek answers to some of the most popular questions about artificial and augmented intelligence. The entire interview was illuminating and shed light on aspects that many people wouldn't know about. So what is AI? Artificial intelligence is human-like intelligence that works in a way similar to our brains, though not quite, of course.
Where is Deep Learning applicable? This is one of the more elusive questions about Deep Learning and related A.I. technologies. It is all too easy to fall into the trap of assuming that an "Artificial Intelligence" application can solve your problem. The usual coverage of this question asks, "Do you have enough data?" Unfortunately, that framing is too vague: to answer it, you have to at least understand your problem domain.
Facebook has said that efforts to use artificial intelligence and other automated techniques to delete terrorism-related posts are "bearing fruit" but more work is needed. The firm said that 99% of the material it now removes about Al Qaeda and so-called Islamic State is first detected by itself rather than its users. But it acknowledged that it had to do more work to identify other groups. The company said at the time that it would take "many years" to fully develop the required systems. Facebook relies on a mix of human checkers and software to confirm which posts should be removed, but it said that the task was now "primarily" being carried out by its automated systems.
If you haven't lost your job to a computer yet, you probably will. Experts predict that robots will be folding laundry for us in the next five years, driving trucks in the next 10, and performing surgery in the next 40. And, they predict, they'll be doing it better than humans. This could lead to a massive shift in our economy, setting off an "era of mass joblessness and mass poverty," as Mother Jones' Kevin Drum recently reported. But what if technology being able to perform tasks better than humans also meant we'd be saving more lives?
There can be plenty of copies of the same video clip floating around as GIFs, or a clip may simply be difficult to capture and upload, but Gfycat hopes these problems can be solved at a technical level. Gfycat is now making a big push on the technical front to make GIFs look better and easier to discover as creators continue to upload content, whatever its quality or fidelity. And it's more of a video problem than an image recognition problem, CEO Richard Rabbat said. "We have scaled [through] creators through word of mouth, and they are just getting excited about Gfycat and [creating] content," Rabbat said. "In many cases, what we're building from an AI and machine learning perspective are additional tools to support their excitement."
For many years, the American public thought of artificial intelligence, or AI, as some big, incredibly complex computer program that would one day achieve sentience and turn on humanity like a James Cameron-conceived summer blockbuster. While movie franchises starring angry AI creations turning on their creators continue to do well at the box office, artificial intelligence has made its way into the modern American household. From algorithms on Amazon that suggest what we might want to read next, to Siri and Alexa answering our questions, to cars that understand traffic patterns and very soon might be driving us to work, artificial intelligence is rapidly disrupting industries. While there has been great fear for a number of years that robots will replace workers across industries, this has largely not been borne out. While robots powered by AI can often replace workers at some of the most menial, automated tasks in a factory, workers are often needed in more specialized capacities to repair and maintain those robots and to respond to the alarms they generate.
When something goes wrong with the appliances in your home, what do you do to fix them? Perhaps you dig out a manual or look one up online. If none of that helps, you might call the company that made the thing, and then spend an age on the phone trying to explain what's gone wrong. But what if you didn't have to explain, and could instead just show someone the problem and have it explained to you? That's the proposition from Israeli company TechSee, which is building a customer support platform using two of 2017's most overused buzzwords: augmented reality and artificial intelligence.
One of the toughest aspects of having epilepsy is not knowing when the next seizure will strike. A wearable warning system that detects pre-seizure brain activity and alerts people to its onset could alleviate some of that stress and make the disorder more manageable. To that end, IBM researchers say they have developed a portable chip that can do the job; they described their invention today in the Lancet's open-access journal eBioMedicine. The scientists built the system on a mountain of brainwave data collected from epilepsy patients. The dataset, reported by a separate group in 2013, included over 16 years of continuous electroencephalography (EEG) recordings of brain activity, and thousands of seizures, from patients who had had electrodes surgically implanted in their brains.
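To give a flavor of how a system like this works at its simplest level, here is a toy sketch: score short EEG windows by the ratio of high-frequency to low-frequency spectral power and raise an alert when the score crosses a threshold. The 25 Hz cutoff, the 256 Hz sampling rate, and the ratio-based feature are all illustrative assumptions for this sketch, not the features IBM's chip actually computes.

```python
import numpy as np

def band_power(window, fs, lo, hi):
    """Mean power of `window` in the [lo, hi) Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def preseizure_score(window, fs=256):
    """Toy score: ratio of high- to low-frequency EEG power (illustrative only)."""
    high = band_power(window, fs, 25.0, 100.0)
    low = band_power(window, fs, 1.0, 25.0)
    return high / (low + 1e-12)  # epsilon avoids division by zero

# Usage: flag a one-second window when the score crosses a threshold.
rng = np.random.default_rng(0)
window = rng.standard_normal(256)        # stand-in for one second of EEG at 256 Hz
alert = preseizure_score(window) > 2.0   # the threshold would be tuned per patient
```

A real system would learn its features and threshold from labeled recordings, such as the 2013 dataset mentioned above, rather than hand-picking them.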
Yitu Technology, based in Shanghai, China, has developed and deployed an artificial intelligence (A.I.) algorithm called Dragonfly Eye that uses facial recognition technology capable of identifying 2 billion people in seconds. Zhu Long, CEO of Yitu Technology, told the South China Morning Post, "Our machines can very easily recognise you among at least 2 billion people in a matter of seconds, which would have been unbelievable just three years ago." Dragonfly Eye is presently used by 150 municipal public security systems and 20 provincial public security departments across China. It was initially deployed on the Shanghai Metro in January of this year, and local police authorities credit it with aiding in the arrest of 576 suspects on the metro in the system's first three months of use.
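Identifying one face among billions typically reduces to a nearest-neighbor search: each face photo is converted to an embedding vector, and a query is matched to the gallery entry with the highest cosine similarity. The sketch below shows only that matching step with a tiny made-up gallery; the embedding model itself (how a photo becomes a vector), the names, and the 4-dimensional vectors are all hypothetical, and nothing here reflects how Dragonfly Eye is actually implemented.

```python
import numpy as np

def identify(query_emb, gallery, names):
    """Return the name whose gallery embedding is most cosine-similar to the query.

    `gallery` is an (N, d) array of face embeddings, one row per enrolled person.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q  # cosine similarity against every enrolled face at once
    return names[int(np.argmax(scores))]

# Toy gallery of three people with made-up 4-dimensional "embeddings".
gallery = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
names = ["alice", "bob", "carol"]
match = identify(np.array([0.1, 0.9, 0.05, 0.0]), gallery, names)  # → "bob"
```

At the scale quoted in the article, the brute-force matrix product here would be replaced by an approximate nearest-neighbor index, but the similarity computation is the same idea.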