Walking around without being constantly identified by AI could soon be a thing of the past, legal experts have warned. The use of facial recognition software could signal the end of civil liberties if the law doesn't change as quickly as advancements in technology, they say. Software already being trialled around the world could soon be adopted by companies and governments to constantly track you wherever you go. Shop owners are already using facial recognition to track shoplifters and could soon be sharing that information across a broad network of databases, potentially globally. Previous research has found that the technology isn't always accurate, misidentifying women and people with darker skin tones at higher rates than other groups.
It's cheaper and easier than ever to add some smarts to your home, and today you can make two of your dumb devices smarter for the lowest price we've ever seen. B&H Photo Video is selling the TP-Link HS107 Wi-Fi dual-outlet smart plug for $25 with the on-page coupon today, down from a list price of $35. The coupon applies $10 off for each plug that you buy, so you can get multiple at this price. This smart plug comes with two individually controlled outlets, so you'll be able to plug in and control two devices at once. Using the connected Kasa app, you'll be able to turn devices off and on or create schedules from anywhere using your mobile device.
WASHINGTON - The top U.S. military officer plans to meet with Google representatives next week amid growing concerns that American companies doing business in China are helping its military gain ground on the U.S. Gen. Joseph Dunford says efforts like Google's artificial intelligence venture in China allow the Chinese military to access and take advantage of U.S.-developed technology. He told an audience at the Atlantic Council on Thursday that it's not in America's national security interest for U.S. companies to help the Chinese military make technological advances. Last week acting Defense Secretary Patrick Shanahan expressed similar concerns and noted that Google is stepping away from some Pentagon contracts. Google has said it would not renew a defense contract involving the use of artificial intelligence to analyze drone video.
The second generation of AirPods has finally arrived, with some new and improved features. Following a hardware update cycle that saw new iPads on Monday and refreshed iMacs on Tuesday, Apple released the long-awaited update to the AirPods on Wednesday. Keeping the same name and largely the same design as the original AirPods, first released in 2016, the new earbuds start at $199 and come with a new wireless charging case, the ability to summon Siri hands-free and Apple's new H1 chip, which promises an extra hour of talk time. The wireless charging case, which has a small light on the front to indicate charging status, uses the same Qi wireless charging standard found in recent iPhones and Android devices.
For most people who talk to their technology -- whether it's Amazon's Alexa, Apple's Siri or the Google Assistant -- the voice that talks back sounds female. Some people do choose to hear a male voice. Now, researchers have unveiled a new gender-neutral option: Q. "One of our big goals with Q was to contribute to a global conversation about gender, and about gender and technology and ethics, and how to be inclusive for people that identify in all sorts of different ways," says Julie Carpenter, an expert in human behavior and emerging technologies who worked on developing Project Q. The voice of Q was developed by a team of researchers, sound designers and linguists in conjunction with the organizers of Copenhagen Pride week, technology leaders in an initiative called Equal AI and others. They first recorded the voices of dozens of people -- those who identify as male, female, transgender or nonbinary.
Fei-Fei Li heard the crackle of a cat's brain cells a couple of decades ago and has never forgotten it. Researchers had inserted electrodes into the animal's brain and connected them to a loudspeaker, filling a lab at Princeton with the eerie sound of firing neurons. "They played the symphony of a mammalian visual system," she told an audience Monday at Stanford, where she is now a professor. The music of the brain helped convince Li to dedicate herself to studying intelligence -- a path that led the physics undergraduate to specialize in artificial intelligence and to help catalyze the recent flourishing of AI technology and use cases like self-driving cars. These days, though, Li is concerned that the technology she helped bring to prominence may not always make the world better.
Folded and sealed with a dollop of red wax, the will of Catharuçia Savonario Rivoalti lay in Venice's State Archives, unread, for more than six and a half centuries. Scholars don't know why the document, written in 1351, was never opened. But to physicist Fauzia Albertin, the three-page document--six pages, folded--was the perfect thickness for an experiment. Albertin, who now works at the Enrico Fermi Research Center in Italy, wanted to read the will without unsealing it. In a 2017 demonstration, Albertin and her team beamed X-rays at the document to photograph the text inside.
The internet is full of lies. That maxim has become an operating assumption for any remotely skeptical person interacting anywhere online, from Facebook and Twitter to phishing-plagued inboxes to spammy comment sections to online dating and disinformation-plagued media. Now one group of researchers has suggested the first hint of a solution: They claim to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But what they've actually demonstrated, according to a few machine learning academics, is the inherent danger of overblown machine learning claims. In last month's issue of the journal Computers in Human Behavior, Florida State University and Stanford researchers proposed a system that uses automated algorithms to separate truths from lies, which they describe as the first step toward "an online polygraph system--or a prototype detection system for computer-mediated deception when face-to-face interaction is not available."
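To make the claim concrete: "detecting deception from text alone" typically means training an ordinary text classifier on messages labeled truthful or deceptive. The sketch below is a minimal bag-of-words Naive Bayes classifier in pure Python; the four training messages and their labels are invented purely for illustration, and the study's actual data, features and model are not reproduced here.

```python
from collections import Counter
import math

# Toy labeled data, invented for illustration only -- NOT the study's dataset.
# Label 1 = truthful, 0 = deceptive.
TRAIN = [
    ("i promise this is exactly what happened", 1),
    ("you can check the records yourself", 1),
    ("honestly i was there the whole time", 0),
    ("trust me i would never do that", 0),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words counts, with add-one smoothing."""

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing

    def fit(self, examples):
        self.counts = {0: Counter(), 1: Counter()}  # per-class word counts
        self.doc_counts = Counter()                 # per-class document counts
        self.vocab = set()
        for text, label in examples:
            self.doc_counts[label] += 1
            for tok in tokenize(text):
                self.counts[label][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in (0, 1):
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(self.doc_counts[label] / total_docs)
            class_total = sum(self.counts[label].values())
            for tok in tokenize(text):
                p = (self.counts[label][tok] + self.smoothing) / (
                    class_total + self.smoothing * len(self.vocab))
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes()
clf.fit(TRAIN)
print(clf.predict("trust me honestly i was there"))  # -> 0 (deceptive)
```

Any such classifier only learns surface word statistics from its training set, which is exactly the critics' point: high accuracy on a small, artificial lab dataset says little about how the system would fare against real-world deception.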
Facebook rushed to pull footage of the New Zealand mass shooting from its platform, but it didn't start doing so until after the live broadcast had ended. In a new post, Facebook VP of Integrity Guy Rosen discussed the company's successes and shortcomings in addressing the situation, as well as its plans to prevent such videos from spreading on the social network in the future. He explained that while the platform's AI can quickly detect videos containing suicidal or harmful acts, the shooter's stream didn't trigger it. To train the matching AI to detect that specific type of content, the platform needs large volumes of training data. As Facebook explains, such data is difficult to obtain because "these events are thankfully rare."
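For context on what "matching" re-uploads involves: one common industry building block (Facebook's production pipeline is not public in detail, so the code below is an illustrative assumption, not its actual system) is a perceptual hash, which maps visually similar frames to nearby bit strings so re-encoded or slightly altered copies of a known video can still be flagged. A minimal average-hash sketch over toy grayscale "frames":

```python
def average_hash(pixels):
    """Hash a downscaled grayscale frame (flat list of pixel values):
    each bit records whether that pixel is brighter than the frame's mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches(h1, h2, threshold=5):
    """Treat frames as copies if their hashes are within a few bits."""
    return hamming(h1, h2) <= threshold

# Hypothetical 4x4 frames: an original, a uniformly brightened re-upload
# (as a re-encode might produce), and an unrelated frame.
original = [10, 200, 30, 180, 15, 190, 25, 170,
            12, 185, 28, 175, 14, 195, 22, 165]
reupload = [p + 8 for p in original]
unrelated = [100] * 8 + [101] * 8

print(matches(average_hash(original), average_hash(reupload)))   # True
print(matches(average_hash(original), average_hash(unrelated)))  # False
```

The limitation Rosen describes sits upstream of this step: hashing can catch copies of an already-known bad video, but deciding that a brand-new live stream contains violence requires a trained classifier, and that is where the scarcity of training data bites.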