Photos from a summer camp are posted to the camp's website so parents can view them. Venture capital-backed Waldo Photos has been selling a service that identifies specific children in the flood of photos provided daily to parents by many sleep-away camps. Camps working with the Austin, Texas-based company give parents a private code to sign up. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photos to the child's parents.
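Waldo's actual pipeline is proprietary, but the match-and-notify workflow described above can be sketched in a few lines. This is a hypothetical illustration only: it assumes each face has already been reduced to a numeric embedding by some face-recognition model, and the function names, toy vectors, and threshold are all invented for demonstration.

```python
from math import sqrt

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_photos(parent_refs, camp_photos, threshold=0.3):
    """Return {parent_id: [photo_id, ...]} for uploaded photos whose
    embedding falls within `threshold` of that parent's headshot.
    (Threshold is illustrative; real systems tune it carefully.)"""
    matches = {}
    for parent_id, ref in parent_refs.items():
        hits = [pid for pid, emb in camp_photos.items()
                if distance(ref, emb) < threshold]
        if hits:
            matches[parent_id] = hits  # in production: text these to the parent
    return matches

# Toy embeddings standing in for model output.
refs = {"parent_a": [0.1, 0.9], "parent_b": [0.8, 0.2]}
photos = {"img1": [0.12, 0.88], "img2": [0.5, 0.5], "img3": [0.79, 0.22]}
print(match_photos(refs, photos))  # {'parent_a': ['img1'], 'parent_b': ['img3']}
```

The notification step would then hand each match list to an SMS gateway; the hard part in practice is the embedding model and threshold tuning, not the matching loop.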
It has been, to be quite honest, a fairly bad week, as far as weeks go. But despite the sustained downbeat news, a few good things managed to happen as well. California has passed the strongest digital privacy law in the United States, for starters, which as of 2020 will give customers the right to know what data companies use, and to disallow those companies from selling it. It's just the latest in a string of uncommonly good bits of privacy news, which included last week's landmark Supreme Court decision in Carpenter v. US. That ruling will require law enforcement to get a warrant before accessing cell tower location data.
Orlando, Florida, has stopped testing Amazon's facial recognition program after rights groups raised concerns that the service could be used in ways that violate civil liberties. The city ended a pilot program last week after its contract with Amazon.com Inc to use its Rekognition service expired. 'Partnering with innovative companies to test new technology - while also ensuring we uphold privacy laws and in no way violate the rights of others - is critical to us as we work to further keep our community safe,' the city and the Orlando Police Department said in a joint statement Monday. Orlando was one of several U.S. jurisdictions Amazon has pitched the service to since unveiling it in late 2016 as a way to detect offensive content and bolster public safety.
Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Amazon sells object and facial recognition software that, the company claims, offers real-time detection across tens of millions of faces, including "up to 100 faces in challenging crowded photos." After its launch in late 2016, Amazon Web Services began marketing the visual surveillance tool (which it dubbed "Rekognition") to law enforcement agencies around the country--including partnering directly with the police department in Orlando and a sheriff's department in Oregon. But now, as April Glaser reports, civil rights groups are pushing back. Last week, a coalition including the ACLU, Human Rights Watch, and the Council on American-Islamic Relations sent an open letter expressing "profound concerns" that governments could easily abuse the technology to target communities of color, undocumented immigrants, and political protesters.
This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook's handling of user data. Besides highlighting the fact that most United States senators -- and most people, for that matter -- do not understand Facebook's business model or the user agreement they've already consented to while using Facebook, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform. Over the two days of testimony, the plan for using algorithmic AI for potential censorship practices was discussed multiple times in the context of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform.
Residents of Shenzhen don't dare jaywalk. Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city. If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is linking the system to mobile carriers, so that offenders receive a text message with a fine as soon as they are caught.
As Apple fans worldwide line up outside stores to purchase the new iPhone X, the device's Face ID feature is being scrutinized by advocacy groups. The American Civil Liberties Union and the Center for Democracy and Technology told Reuters of their concerns about whether Apple can enforce privacy rules for the iPhone X's facial recognition technology. The Face ID feature unlocks the device, confirms Apple Pay payments, animates Animoji, and more; it will also work with third-party apps. Face ID runs through the iPhone X's TrueDepth camera system, which maps the user's face with 30,000 infrared dots.
Helen of Troy may have had a "face that launch'd a thousand ships", according to Christopher Marlowe, but these days her visage could launch a lot more besides. She could open her bank account with it, authorise online payments, pass through airport security, or raise alarm bells as a potential troublemaker when entering a city (Troy perhaps?). This is because facial recognition technology has evolved at breakneck speed, with consequences that could be benign or altogether more sinister, depending on your point of view. High-definition cameras combined with clever software capable of measuring the scores of "nodal points" on our faces - the distance between the eyes, the length and width of the nose, for example - are now being combined with machine learning that makes the most of ever-enlarging image databases. Applications of the tech are popping up all round the world.
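The nodal-point idea above can be made concrete with a toy sketch. Real systems measure dozens of points and feed them into learned models; the landmark names, coordinates, and the simple difference score below are invented purely for illustration.

```python
from math import dist  # Python 3.8+: Euclidean distance between two points

def feature_vector(landmarks):
    """Reduce a face to a few nodal-point measurements (illustrative only)."""
    return (
        dist(landmarks["left_eye"], landmarks["right_eye"]),    # distance between the eyes
        dist(landmarks["nose_top"], landmarks["nose_tip"]),     # nose length
        dist(landmarks["nose_left"], landmarks["nose_right"]),  # nose width
    )

def difference_score(face_a, face_b):
    """Sum of absolute differences between two feature vectors;
    smaller means the faces measure more alike."""
    return sum(abs(a - b)
               for a, b in zip(feature_vector(face_a), feature_vector(face_b)))

# Toy landmark coordinates (pixels) for two photos of a similar face.
face1 = {"left_eye": (30, 40), "right_eye": (70, 40),
         "nose_top": (50, 45), "nose_tip": (50, 70),
         "nose_left": (42, 68), "nose_right": (58, 68)}
face2 = {"left_eye": (31, 41), "right_eye": (69, 40),
         "nose_top": (50, 46), "nose_tip": (50, 71),
         "nose_left": (43, 68), "nose_right": (57, 68)}
print(difference_score(face1, face2))  # small value: likely the same person
```

The machine-learning step mentioned in the article replaces this hand-written score with an embedding learned from ever-enlarging image databases, but the underlying intuition, turning geometry into comparable numbers, is the same.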