When I was a master's student at MIT, I worked on a number of different art projects that used facial analysis technology. One in particular, called the Aspire Mirror, would detect my face in a mirror and then display a reflection of something different, based on what inspired me or what I wanted to empathize with. As I was working on it, I realized that the software I was using had a hard time detecting my face. But after I made one adjustment, the software no longer struggled: I put on a white mask. This disheartening moment brought to mind Frantz Fanon's book Black Skin, White Masks, which interrogates the complexities of changing oneself, of putting on a mask to fit the norms or expectations of a dominant culture.
It has been, to be quite honest, a fairly bad week, as far as weeks go. But despite the sustained downbeat news, a few good things managed to happen as well. For starters, California has passed the strongest digital privacy law in the United States, which as of 2020 will give consumers the right to know what data companies collect about them and to bar those companies from selling it. It's just the latest in a string of uncommonly good bits of privacy news, which included last week's landmark Supreme Court decision in Carpenter v. United States. That ruling will require law enforcement to get a warrant before accessing cell tower location data.
Artificial intelligence may put an end to a long-running industry: human trafficking. The average age at which a minor enters the sex trade in the U.S. is 12 to 14 years old, and many of the victims are runaway girls who were sexually abused. Thankfully, attorneys general in the U.S. and Mexico are planning to implement a new system that will help locate victims of human trafficking. Trust Stamp, an Atlanta-based startup, will provide the 'meat and potatoes' of the life-saving technology. According to the company website, "[Trust Stamp] creates proprietary artificial intelligence solutions; researching and leveraging facial biometric science and wide-scale data mining to deliver insightful identity & trust predictions while identifying and defending against fraudulent identity attacks."
Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".
Do you have a right to know if you're talking to a bot? Does it have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots -- online accounts that appear to be controlled by a human but are actually powered by AI -- are now prevalent across the internet, particularly on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so violates the bot's right to free speech.
This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook's handling of user data. Besides highlighting the fact that most United States senators -- and most people, for that matter -- do not understand Facebook's business model or the user agreement they've already consented to while using Facebook, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform. Over the two days of testimony, the plan to use algorithmic AI for potential censorship practices was discussed multiple times, framed as a means of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform.