These news stories are selected automatically each day by NewsFinder from over eighty sources, including major newspapers and magazines. The news can be viewed in an RSS feed, in monthly and weekly calendars, or sent to your inbox each Monday morning by signing up for the AI-Alert.
When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as "deep learning," the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.
An enormous gap exists between human abilities and machine performance when it comes to understanding the visual world from images and videos. Humans are still way out in front.
One of the best things about videogames these days is that you can play against your friends, even if they're not on the same continent as you. With the Forza racing series, Microsoft's Turn 10 Studios has taken that a step further: Gamers can race against their friends, even when their friends are offline.
In December, Amazon announced that it intended to deliver packages to customers using drones. But its "Amazon Prime Air" initiative, revealed on US current affairs show 60 Minutes, was widely ridiculed for being an over-hyped announcement with little to show for it.
Human beings have a remarkable ability to make inferences based on their surroundings. Is this area safe?
Jonathan Mosen, who has been blind since birth, spent his evening snapping photos of packages in the mail, his son's school report and labels on bottles in the fridge. In seconds, he was listening to audio of the printed words the camera captured, courtesy of a new app on his Apple Inc iPhone.
Researchers from the Korea Advanced Institute of Science and Technology (KAIST) have modified a small humanoid robot to monitor and control a simulated aircraft cockpit. The vision-guided robot is able to identify and use all of the buttons and controls in the cockpit of a standard light aircraft designed for a human pilot, according to IEEE Spectrum.
By 2020, our day-to-day lives, our relationships and even decisions about what to have for dinner could be run by digital versions of ourselves. According to futurist John Smart, within the next six years many of us could have so-called 'digital twins' that schedule our appointments and even hold conversations with others on our behalf.
The line between creativity and statistical analysis blurs the harder you look at it, and machines are looking hard: They will cross it eventually.
A Washington State University professor has figured out a dramatically easier and more cost-effective way to do research on science curricula in the classroom -- and it could include playing video games. Called "computational modeling," it involves a computer "learning" student behavior and then "thinking" as students would.
Researchers at MIT and Northeastern University have equipped a robot with a novel tactile sensor that lets it grasp a USB cable draped freely over a hook and insert it into a USB port. The sensor is an adaptation of a technology called GelSight, which was developed by the lab of Edward Adelson, the John and Dorothy Wilson Professor of Vision Science at MIT, and first described in 2009.
When I arrived at a Stanford University auditorium Tuesday night for what I figured would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place -- maybe a different event for all the new Stanford students and their parents visiting the campus. Nope.
Natural disasters and political unrest trigger torrents of tweets and posts -- chaotic snippets of what could be valuable information. Patrick Meier, director of social innovation at the Qatar Computing Research Institute, applies artificial intelligence to this crowdsourced data, organizing digital photos and messages into dynamic maps that can guide real-world relief efforts.
LOS ANGELES -- Computer-driven cars have been testing their skills on California roads for more than four years -- but until now, the Department of Motor Vehicles wasn't sure just how many were rolling around. That changed Tuesday, when the agency required self-driving cars to be registered and issued testing permits that let three companies dispatch 29 vehicles onto freeways and into neighborhoods -- with a human behind the wheel in case the onboard equipment makes a bad decision.
Face recognition software measures various parameters in a mug shot -- such as the distance between the person's eyes and the height from the lips to the top of the nose -- and then compares those metrics with photos of people in a database that have been tagged with a given name. Now, research published in the International Journal of Computational Vision and Robotics takes that a step further, aiming to recognize the emotion a face portrays.
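The matching step described above can be sketched in a few lines: each face is reduced to a vector of measured distances, and an unknown face is assigned the name of the closest tagged vector in the database. This is only an illustrative sketch -- the feature names, the two-metric vectors, and the nearest-neighbor rule are assumptions, not the method used by the paper.

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors of facial measurements."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(unknown, database):
    """Return the tagged name whose stored vector is nearest to `unknown`."""
    return min(database, key=lambda name: euclidean(unknown, database[name]))

# Illustrative database: (eye distance, lip-to-nose height) in arbitrary units.
db = {"alice": (62.0, 31.5), "bob": (58.4, 29.0)}

print(identify((61.5, 31.0), db))  # nearest stored vector -> "alice"
```

Real systems use many more measurements (or learned embeddings) per face, but the nearest-match idea is the same.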
Back in 2007, even before the iPhone was launched, giving us a powerful computer in our pockets or handbags, I started outlining a vision for Web 3.0. Tim Berners-Lee, a father of the World Wide Web, talks about the "Semantic Web," a way that computers employ the meaning of words -- not just pattern matching -- along with logical rules to connect independent nuggets of data and so create more context for information.
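The Semantic Web idea mentioned above -- logical rules connecting independent nuggets of data -- can be sketched with subject-predicate-object triples. The triples and the rule below are invented for illustration; real Semantic Web systems use RDF data and formal rule languages rather than hand-written Python.

```python
# Facts stored as (subject, predicate, object) triples.
triples = {
    ("Tim", "created", "WWW"),
    ("WWW", "runsOn", "HTTP"),
}

def infer(facts):
    """Rule: if X created Y and Y runsOn Z, conclude X dependsOn Z.

    Joining two independent triples on their shared middle term is the
    kind of logical connection the Semantic Web aims to automate.
    """
    derived = set()
    for (x, p1, y) in facts:
        for (y2, p2, z) in facts:
            if p1 == "created" and p2 == "runsOn" and y == y2:
                derived.add((x, "dependsOn", z))
    return derived

print(infer(triples))  # {("Tim", "dependsOn", "HTTP")}
```

Neither triple alone says anything about Tim and HTTP; only the rule joining them does, which is the "more context for information" the passage describes.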
Over the last few years, researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) have developed biologically inspired robots designed to fly like falcons, perch like pigeons, and swim like swordfish. The natural next step?