Results


Google won't develop AI weapons, announces new ethical strategy

Internet of Business

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".


Future Tense Newsletter: Amazon Isn't Just Tracking What's in Your Shopping Cart

Slate

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Amazon's object and facial recognition software, the company claims, offers real-time detection across tens of millions of mugs, including "up to 100 faces in challenging crowded photos." After its launch in late 2016, Amazon Web Services started marketing the visual surveillance tool (which it dubbed "Rekognition") to law enforcement agencies around the country--including partnering directly with the police department in Orlando and a sheriff's department in Oregon. But now, as April Glaser reports, civil rights groups are pushing back. Last week, a coalition including the ACLU, Human Rights Watch, and the Council on American-Islamic Relations sent an open letter expressing their "profound concerns" that governments could easily abuse the technology to target communities of color, undocumented immigrants, and political protesters.
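
For a concrete sense of how developers call this tool, here is a minimal sketch that sends an image to Rekognition's face-detection endpoint through the boto3 SDK. The region, the "crowd.jpg" file, and locally configured AWS credentials are illustrative assumptions, not details from the newsletter.

    # Minimal sketch: detect faces in an image with Amazon Rekognition via boto3.
    # Assumes AWS credentials are configured locally and crowd.jpg exists.
    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")  # region is an assumption

    with open("crowd.jpg", "rb") as f:
        image_bytes = f.read()

    response = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # also returns estimated age range, emotions, pose, etc.
    )

    for i, face in enumerate(response["FaceDetails"]):
        box = face["BoundingBox"]  # coordinates are fractions of image width/height
        print(f"Face {i}: confidence {face['Confidence']:.1f}%, "
              f"box {box['Width']:.2f}x{box['Height']:.2f} at ({box['Left']:.2f}, {box['Top']:.2f})")

A single detect_faces call returns at most the 100 largest faces in an image, which is where the "up to 100 faces" marketing claim quoted above comes from.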


Zuckerberg Admits He's Developing Artificial Intelligence to Censor Content

#artificialintelligence

This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook's handling of user data. Besides highlighting the fact that most United States senators -- and most people, for that matter -- do not understand Facebook's business model or the user agreement they have already consented to as Facebook users, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform. Over the two days of testimony, the plan to use algorithmic AI for potential censorship practices was discussed multiple times, with the stated aims of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform.


If you jaywalk in China, facial recognition means you'll walk away with a fine

#artificialintelligence

Residents of Shenzhen don't dare jaywalk. Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city. If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is linking the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught.
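
The reported pipeline has three stages: recognise the offender's face, look them up in an identity registry, and push out both the shaming display and the text-message fine. The sketch below is purely conceptual - every name, embedding, threshold, and phone number is a hypothetical placeholder, since the report gives no implementation details.

    # Conceptual sketch of the enforcement pipeline described above.
    # All data, names, and the 0.9 threshold are hypothetical placeholders.
    import math

    # Hypothetical registry: face embedding -> (name, partial government ID, phone).
    ID_REGISTRY = [
        ([0.12, 0.85, 0.43], "Zhang San", "4403**********12", "+86 138 0000 0000"),
    ]

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm

    def display_on_led(name, partial_id):
        print(f"LED screen: {name} ({partial_id}) crossed against the light")

    def send_sms_fine(phone):
        print(f"SMS to {phone}: automated jaywalking fine issued")

    def handle_jaywalking_event(face_embedding, threshold=0.9):
        # Match the captured face against the registry; act only on a confident match.
        best = max(ID_REGISTRY, key=lambda rec: cosine_similarity(face_embedding, rec[0]))
        embedding, name, partial_id, phone = best
        if cosine_similarity(face_embedding, embedding) >= threshold:
            display_on_led(name, partial_id)  # the intersection screen
            send_sms_fine(phone)              # the mobile-carrier step

    handle_jaywalking_event([0.11, 0.86, 0.44])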


iPhone X News: Privacy Experts Concerned About Face ID Before Release Date

International Business Times

As Apple fans worldwide line up outside stores to purchase the new iPhone X, the device's Face ID feature is being scrutinized by advocacy groups. The American Civil Liberties Union and the Center for Democracy and Technology told Reuters they are concerned about whether Apple can enforce privacy rules for the iPhone X's facial recognition technology. Face ID is used to unlock the device, confirm Apple Pay payments, animate Animoji, and more. It will also work with third-party apps. Face ID runs on the iPhone X's TrueDepth camera system, which maps the user's face with 30,000 infrared dots.


Apple iPhone X's FaceID Technology: What It Could Mean For Civil Liberties

International Business Times

Apple's new facial recognition software for unlocking its new iPhone X has raised questions about privacy and the technology's susceptibility to hacking attacks. The iPhone X is set to go on sale on Nov. 3.


Nowhere to hide

BBC News

Helen of Troy may have had a "face that launch'd a thousand ships", according to Christopher Marlowe, but these days her visage could launch a lot more besides. She could open her bank account with it, authorise online payments, pass through airport security, or set off alarm bells as a potential troublemaker when entering a city (Troy perhaps?). This is because facial recognition technology has evolved at breakneck speed, with consequences that could be benign or altogether more sinister, depending on your point of view. High-definition cameras and clever software capable of measuring scores of "nodal points" on our faces - the distance between the eyes, or the length and width of the nose, for example - are now being combined with machine learning that makes the most of ever-growing image databases. Applications of the tech are popping up all round the world.
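
Those "nodal point" measurements are what landmark-based face analysis computes. As a minimal sketch, the snippet below uses the open-source face_recognition library (an assumption - the article names no particular software) to locate facial landmarks in an image and measure one classic nodal-point distance, the gap between the eyes. The file helen.jpg is a hypothetical input.

    # Minimal sketch of landmark-based "nodal point" measurement with the
    # open-source face_recognition library; helen.jpg is a hypothetical input.
    import math

    import face_recognition

    image = face_recognition.load_image_file("helen.jpg")

    for landmarks in face_recognition.face_landmarks(image):
        left_eye = landmarks["left_eye"]    # list of (x, y) pixel coordinates
        right_eye = landmarks["right_eye"]
        # Take each eye's centroid as its position.
        lx = sum(p[0] for p in left_eye) / len(left_eye)
        ly = sum(p[1] for p in left_eye) / len(left_eye)
        rx = sum(p[0] for p in right_eye) / len(right_eye)
        ry = sum(p[1] for p in right_eye) / len(right_eye)
        print(f"inter-eye distance: {math.hypot(lx - rx, ly - ry):.1f} pixels")

Modern systems go a step further, reducing each face to a learned embedding and matching it against those ever-growing image databases - the machine-learning step the article mentions.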