Google won't develop AI weapons, announces new ethical strategy

Internet of Business

#artificialintelligence

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".


Future Tense Newsletter: Amazon Isn't Just Tracking What's in Your Shopping Cart

Slate

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Amazon's object and facial recognition software, the company claims, offers real-time detection across tens of millions of faces, including "up to 100 faces in challenging crowded photos." After its launch in late 2016, Amazon Web Services began marketing the visual surveillance tool (dubbed "Rekognition") to law enforcement agencies around the country, including partnering directly with the police department in Orlando and a sheriff's department in Oregon. But now, as April Glaser reports, civil rights groups are pushing back. Last week, a coalition including the ACLU, Human Rights Watch, and the Council on American-Islamic Relations sent an open letter expressing "profound concerns" that governments could easily abuse the technology to target communities of color, undocumented immigrants, and political protesters.


Zuckerberg Admits He's Developing Artificial Intelligence to Censor Content

#artificialintelligence

This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook's handling of user data. Besides highlighting the fact that most United States senators -- and most people, for that matter -- do not understand Facebook's business model or the user agreement they've already consented to by using Facebook, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform. Over the two days of testimony, the plan to use algorithmic AI for potential censorship practices was discussed multiple times, with the stated aims of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform.


Australia Probes if Facebook Data Leaks Broke Privacy Law

U.S. News

Australian authorities say they are investigating whether Facebook breached the country's privacy law when personal information of more than 300,000 Australian users was obtained by Cambridge Analytica, a Trump-linked political consulting firm, without their authorization.


The Importance of Decoding Unconscious Bias in AI

Big Cloud Recruitment

#artificialintelligence

Despite its widespread adoption, artificial intelligence still has a long way to go in terms of diversity and inclusion. It's a subject close to our hearts as a company, and AI's positive impact is, quite frankly, something that should be celebrated and shouted about given all the doom and gloom we're so often bombarded with in today's media. From healthcare and sustainable cities to climate change and industry, investment in AI is making an impact in many areas. Applications of machine learning and deep learning help shape the trajectories of our daily lives, so much so that we are barely even aware of them. All this do-gooding aside, however, one of the biggest obstacles in AI development is the inherent bias embedded within it.


If you jaywalk in China, facial recognition means you'll walk away with a fine

#artificialintelligence

Residents of Shenzhen don't dare jaywalk. Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city. If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering with mobile carriers to link the system to offenders' phones, so that jaywalkers receive a text message with a fine as soon as they are caught.


You weren't supposed to actually implement it, Google

#artificialintelligence

Last month, I wrote a blog post warning that if you follow popular trends in NLP, you can easily and accidentally build a classifier that is pretty racist. To demonstrate this, I included some very simple code as a "cautionary tutorial".
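The mechanism the post warns about can be sketched with a toy example. This is hypothetical illustration code, not the post's actual tutorial, and the two-dimensional "embeddings" are fabricated to stand in for real pretrained vectors (such as GloVe), where names absorb sentiment from the web text they co-occur with:

```python
# Sketch: score "sentiment" from word embeddings, then apply it to names
# the classifier was never meant to judge. Toy data, hypothetical values.
import numpy as np

# Fake 2-D embeddings. In real pretrained vectors, a name's position
# reflects the (often prejudiced) contexts it appeared in online.
embeddings = {
    "excellent": np.array([1.0, 0.9]),
    "wonderful": np.array([0.9, 1.0]),
    "terrible":  np.array([-1.0, -0.9]),
    "awful":     np.array([-0.9, -1.0]),
    # Neutral names, placed wherever the corpus happened to put them.
    "emily":     np.array([0.8, 0.7]),
    "shaniqua":  np.array([-0.7, -0.8]),
}

pos_words = ["excellent", "wonderful"]
neg_words = ["terrible", "awful"]

# Nearest-centroid classifier: sentiment = similarity to the positive
# centroid minus similarity to the negative centroid.
pos_c = np.mean([embeddings[w] for w in pos_words], axis=0)
neg_c = np.mean([embeddings[w] for w in neg_words], axis=0)

def sentiment(word):
    v = embeddings[word]
    return float(v @ pos_c - v @ neg_c)

# Names that carry no sentiment at all get confidently scored anyway,
# purely because of where the embedding space placed them.
print(sentiment("emily"))     # > 0: scored "positive"
print(sentiment("shaniqua"))  # < 0: scored "negative"
```

The point of the sketch is that no step here is malicious: the bias rides in on the pretrained vectors, and any downstream classifier built on them inherits it silently.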


The Google Arts and Culture app has a race problem

Mashable

The Google Arts and Culture app (available on iOS and Android) has been around for two years, but this weekend, it shot to the top of both major app stores because of a small, quietly added update.


'Least Desirable'? How Racial Discrimination Plays Out In Online Dating

NPR

In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable.