Executives from Amazon, Apple, AT&T, Charter Communications, Google, and Twitter are heading to Washington Wednesday to testify before the Senate Commerce Committee on the topic of privacy. As ever, the main question will be: Are these companies doing enough to protect consumer privacy, and if not, what should Congress do about it? It has been the backdrop to just about every hearing with tech leaders over the last year, and there have been many. And yet, the threat of regulation carries new weight this time around. Over the summer, California passed the country's first data privacy bill, giving residents unprecedented control over their data.
It has been, to be quite honest, a fairly bad week, as far as weeks go. But despite the sustained downbeat news, a few good things managed to happen as well. For starters, California has passed the strongest digital privacy law in the United States, which as of 2020 will give consumers the right to know what data companies collect about them and to bar those companies from selling it. It's just the latest in a string of uncommonly good bits of privacy news, which included last week's landmark Supreme Court decision in Carpenter v. US. That ruling will require law enforcement to get a warrant before accessing cell tower location data.
Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".
Do you have a right to know if you're talking to a bot? Does it have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots, online accounts that appear to be controlled by a human but are actually powered by AI, are now prevalent all across the internet, especially on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so would violate the bots' right to free speech.
This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook's handling of user data. Besides highlighting the fact that most United States senators, and most people, for that matter, do not understand Facebook's business model or the user agreement they've already consented to by using Facebook, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform. Over the two days of testimony, the plan for using algorithmic AI for potential censorship practices was discussed multiple times, in the name of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform.
Nick Monaco is a research associate at Jigsaw, Alphabet's human-rights-focused think tank and technology incubator. He is also a research associate on the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. Samuel Woolley is the Director of Research of the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. A troubling trend is sweeping Silicon Valley: big tech acquiescing to digital authoritarianism to gain access to the Chinese market. In July, Apple removed VPNs from its Chinese app store and announced plans to build a data center in Guizhou to comply with China's new draconian cybersecurity laws.
A viral app that added Asian, Black, Caucasian and Indian filters to people's selfies has removed them after being accused of racism. The update, which launched yesterday, was met with backlash, with many people criticising it for propagating racial stereotypes. The filters drew comparisons with 'blackface' and 'yellowface' - when white people wear makeup to appear to be from a different ethnic group. The app uses artificial intelligence to transform faces.
The South by Southwest Conference promises to have a very different tone than last year, when then-President Obama was warmly welcomed for a keynote presentation on civic engagement in the 21st century. For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. Instead of undermining the value for marketers eager to enlist technology in their work, however, the dynamic might highlight connections that are increasingly important to recognize. "Rather than a piece of technology or launch of a new app, this year's conference will really be about the way all the things happening in politics are being threaded through what everyone does," said David Grant, president of PopSugar Studios, the video unit at publisher PopSugar. "While in the past typically the focus is on a few new toys to play with, this year it is about how do these new toys affect journalism access and the ability to distinguish between real and fake news?"
In fact, in many countries, the internet, the very thing that was supposed to smash down the walls of authoritarianism like a sledgehammer of liberty, has instead been co-opted by those very regimes to push their own agendas while crushing dissent and opposition. And with the emergence of conversational AI, the technology at the heart of services like Google's Allo and Jigsaw or Intel's Hack Harassment initiative, these governments could have a new tool to further censor their citizens. Turkey, Brazil, Egypt, India and Uganda have all shut off internet access when politically beneficial to their ruling parties. Nations like Singapore, Russia and China all exert outsize control over the structure and function of their national networks, often relying on a mix of political, technical and social schemes to control the flow of information within their digital borders. The effects of these policies are self-evident.