If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A team of researchers from the University at Albany has developed a method of combating deepfake videos, using machine learning techniques to search videos for digital "fingerprints" left behind when footage has been altered. One of the biggest concerns in the tech world over the past couple of years has been the rise of deepfakes: fake videos constructed by artificial intelligence algorithms run through deep neural networks. The results are shockingly good, sometimes difficult to tell apart from genuine video. AI researchers, ethicists, and political scientists worry that deepfake technology will eventually be used to sway political elections, disseminating misinformation in a form more convincing than a fake news story. To provide some defense against this kind of manipulation and misinformation, the University at Albany researchers have created tools to assist in the detection of fake videos.
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months (send us your events!). Let us know if you have suggestions for next week, and enjoy today's videos. With all the hype about SpotMini recently, it's a good time to take a look back at another quadruped that Boston Dynamics helped develop. This system is the first of its kind that can automatically keep a cluttered room neat and tidy at a practical level, something that has been difficult to achieve using conventional robot systems.
It was 2014, around the time when Travis Kalanick referred to Uber as his chick-magnet "Boober" in a GQ article, that I'd realized congestion in San Francisco had gone insane. Before there was Uber, getting across town took about ten minutes by car and there was nowhere to park, ever. With Boober in play, there was parking in places there never were spaces, but the streets were so jammed with empty, one-person "gig economy" cars circling, sitting in bus zones, mowing down bicyclists whilst fussing with their phones, still endlessly going nowhere, alone, that walking across the city was faster. To be fair, you wouldn't know there were 5,700 more vehicles a day on our roads if you'd just moved here. Nor if you were pouring Uber-delivered champagne over yourself in a tub of stock options while complaining about San Francisco's homeless from the comfort of your company-rental Airbnb where artists or Mexican families once lived.
The Papago GoSafe S810 camera duo has more "safety" features than you can shake a stick at, including one I'd never even considered: stop-sign recognition. It recognizes stop signs and pops the digital equivalent up on its display. Kind of fun, but as I'm wont to say: if you need this stuff, call a cab or wait for self-driving vehicles. Admonishment aside, the $170 S810 is more than just fancy features. It takes very, very good day and night video, and the rear camera, unlike some we've seen recently, actually captures enough detail to be useful.
Apple has quietly bought Spektral, a Danish machine learning startup that specializes in real-time green screen technology. The $30 million deal actually happened last year, but it was reported today by Danish newspaper Børsen. Apple has been focusing more and more on its AR capabilities lately, and this latest acquisition may be meant to boost the iPhone's AR features for Memoji or FaceTime or as a part of its plans for an augmented reality headset, which Bloomberg reported may be coming in 2020. Spektral, which previously went by the name CloudCutout, uses machine learning and computer vision techniques to "cut out" people from video backgrounds in real time on smartphones. "Combining deep neural networks and spectral graph theory with the computing power of modern GPUs, our engine can process images and video from the camera in real-time (60 fps) directly on the device," the company explained on its website.
According to the International Federation of Robotics, in February 2018 the average global robot density was 74 robot units per 10,000 employees, up from 66 in 2015. As well as increasing in popularity, robots are also performing more complex and surprising tasks. To keep you in the loop, here are three robot updates from October so far.

By Leah Elston-Thompson, senior account executive at Stone Junction

Last week, the news broke that Pepper the robot will be giving evidence in Parliament, marking the first appearance of a non-human witness. The Commons Education Select Committee has invited Pepper to answer questions about artificial intelligence (AI) in the labour market.
When I talk to people about machine learning on phones and devices I often get asked "What's the killer application?". I have a lot of different answers, everything from voice interfaces to entirely new ways of using sensor data, but the one I'm most excited about in the near term is compression. Despite being fairly well-known in the research community, this seems to surprise a lot of people, so I wanted to share some of my personal thoughts on why I see compression as so promising. I was reminded of this whole area when I came across an OSDI paper on "Neural Adaptive Content-aware Internet Video Delivery". The summary is that by using neural networks they're able to improve a quality-of-experience metric by 43% if they keep the bandwidth the same, or alternatively reduce the bandwidth by 17% while preserving the perceived quality.
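To make the trade-off concrete, here's a toy sketch of the underlying idea (this is my own illustration, not the paper's system): the server transmits a quarter-resolution frame, spending roughly a quarter of the pixels of bandwidth, and the client reconstructs full resolution on-device. A trivial nearest-neighbor upscale stands in for the learned super-resolution network; all names here are mine.

```python
import numpy as np

def upscale_2x(frame):
    """Toy stand-in for a learned super-resolution model:
    nearest-neighbor 2x upscale of an H x W grayscale frame."""
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

# Simulate the trade-off: send a quarter-resolution frame
# (a quarter of the pixels, hence far less bandwidth) and
# reconstruct full resolution on the client.
full = np.random.rand(64, 64)   # the "original" frame
low = full[::2, ::2]            # what the server actually transmits
reconstructed = upscale_2x(low) # client-side upscale back to 64x64

print(low.size / full.size)     # fraction of pixels transmitted: 0.25
print(reconstructed.shape)      # (64, 64)
```

In the real system the upscaler is a content-aware neural network trained per video, which is why it can recover far more perceived quality than a fixed interpolation filter ever could; that gap is where the 43% QoE gain (or 17% bandwidth saving) comes from.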
This article was written by a human being who click-clacked on a keyboard until she finished a draft and sent it to an editor. But more and more, computers are taking over. In fact, the Associated Press has used "automation technology" to cover college sports since 2015. The idea isn't new: humans have obsessed over artificial intelligence (AI) since at least the 18th century, when the "Mechanical Turk" hoax led many to believe that a machine could play chess against a person and win. About 250 years later, a machine can play chess against a person and win, every time.
Telepresence robots from Vecna Technologies can be hacked using a suite of five vulnerabilities. Combined, the flaws allow an attacker full control over a robot, giving an intruder the capability to alter firmware, steal chat logs and pictures, or even access live video streams. Vecna has already patched two of the five vulnerabilities and is in the process of addressing the other three. The flaws were discovered earlier this year by Dan Regalado, a security researcher with IoT cyber-security firm Zingbox. They affect the Vecna VGo Celia, a telepresence robot that can be deployed in the field but controlled from a remote location.