Results


The Importance of Decoding Unconscious Bias in AI - Big Cloud Recruitment

#artificialintelligence

Despite its widespread adoption, Artificial Intelligence still has a long way to go in terms of diversity and inclusion. It's a subject close to our hearts as a company, and quite frankly, something that should be celebrated and shouted about given all the doom and gloom we're so often bombarded with in today's media. From healthcare and sustainable cities to climate change and industry, investment in AI is making an impact in many areas. Applications of machine learning and deep learning help shape the trajectories of our daily lives, so much so that we are barely even aware of it. However, all of this do-gooding aside, one of the biggest obstacles in AI programming is the inherent bias that exists within it.


You weren't supposed to actually implement it, Google

#artificialintelligence

Last month, I wrote a blog post warning that, if you follow popular trends in NLP, you can easily and accidentally make a classifier that is pretty racist. To demonstrate this, I included some very simple code as a "cautionary tutorial".
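The pattern the post warns about is easy to reproduce: train a sentiment model on top of pretrained word embeddings, and the embeddings' learned associations come along for free. The sketch below is not the author's code; the tiny word lists, the GloVe model name, and the example sentences are illustrative assumptions.

```python
# Minimal sketch of the "accidental bias" pattern: a sentiment classifier
# built on pretrained word embeddings inherits whatever associations the
# embeddings picked up from web text.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("glove-wiki-gigaword-50")  # pretrained GloVe vectors

# Tiny illustrative sentiment lexicon; a real tutorial would use a full one.
positive = ["good", "great", "excellent", "happy", "love", "wonderful"]
negative = ["bad", "terrible", "awful", "sad", "hate", "horrible"]

X = np.array([vectors[w] for w in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))
model = LogisticRegression().fit(X, y)

def sentence_sentiment(sentence):
    """Average the embeddings of known words, then score the average."""
    words = [w for w in sentence.lower().split() if w in vectors]
    vec = np.mean([vectors[w] for w in words], axis=0)
    return model.predict_proba([vec])[0, 1]  # probability of "positive"

# Sentences that differ only in a named cuisine can get visibly different
# scores; the difference comes from the embeddings, not the lexicon.
print(sentence_sentiment("let us go get italian food"))
print(sentence_sentiment("let us go get mexican food"))
```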


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. They wanted to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, and phrases like "I am a man" as the least toxic. The API, called Perspective, is made by Jigsaw, an incubator inside Google's parent company Alphabet.
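For reference, Perspective is exposed as a REST API that returns a toxicity score between 0 and 1. A rough sketch of a call is below; it assumes you have an API key for the Comment Analyzer API, and the exact request shape may have shifted since the article was written.

```python
# Rough sketch of querying the Perspective API for a toxicity score.
# Assumes the Comment Analyzer API is enabled and you hold an API key;
# the request/response shape below reflects the v1alpha1 API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]  # 0.0 to 1.0

print(toxicity("I am a gay black woman"))  # the phrase the article flags
print(toxicity("I am a man"))
```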


Instagram CEO Kevin Systrom on Free Speech, Artificial Intelligence, and Internet Addiction.

WIRED

I sat down with Kevin Systrom, the CEO of Instagram, in June to interview him for my feature story, "Instagram's CEO Wants to Clean Up the Internet," and for "Is Instagram Going Too Far to Protect Our Feelings?", a special that ran on CBS this week. It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. He also discusses free speech, the possibility of Instagram becoming too bland, and whether the platform can be considered addictive. Our conversation occurred shortly before Instagram introduced the AI to the public. A transcript of the conversation follows.

NT: So what I want to do in this story is I want to get into the specifics of the new product launch and the new things you're doing and the stuff that's coming out right now and the machine learning. But I also want to tie it to a broader story about Instagram, and how you decided to prioritize niceness and how it became such a big thing for you and how you reoriented the whole company. So I'm gonna ask you some questions about the specific products and then some bigger questions.

NT: All right, so let's start at the beginning. I know that from the very beginning you cared a lot about comments. You cared a lot about niceness and, in fact, you and your co-founder Mike Krieger would go in early on and delete comments yourself.


A collection of 13,500 insults lobbed by Wikipedia editors is helping researchers understand and fight trolls

#artificialintelligence

Misogyny, racism, profanity--a collection of more than 13,500 online personal attacks has it all. The nastygrams came from the discussion pages of Wikipedia. The collection, along with over 100,000 more benign posts, has been released by researchers from Alphabet and the Wikimedia Foundation, the nonprofit behind Wikipedia. They say the data will boost efforts to train software to understand and police online harassment. "Our goal is to see how can we help people discuss the most controversial and important topics in a productive way all across the Internet," says Lucas Dixon, chief research scientist at Jigsaw, a group inside Alphabet that builds technology in service of causes such as free speech and fighting corruption (see "If Only AI Could Save Us From Ourselves").
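To make concrete how a corpus like this gets used, here is a minimal sketch of training a personal-attack classifier on labeled comments. The file name and the "comment"/"is_attack" column names are hypothetical placeholders, not the released dataset's actual schema.

```python
# Minimal sketch: train a personal-attack classifier on a labeled corpus of
# comments like the Wikipedia one described above. The CSV path and the
# "comment"/"is_attack" column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

data = pd.read_csv("wikipedia_comments.csv")  # hypothetical export
X_train, X_test, y_train, y_test = train_test_split(
    data["comment"], data["is_attack"], test_size=0.2, random_state=0)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print(model.predict(["thanks for fixing the citation", "you are an idiot"]))
```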


Nowhere to hide

BBC News

Helen of Troy may have had a "face that launch'd a thousand ships", according to Christopher Marlowe, but these days her visage could launch a lot more besides. She could open her bank account with it, authorise online payments, pass through airport security, or raise alarm bells as a potential troublemaker when entering a city (Troy, perhaps?). This is because facial recognition technology has evolved at breakneck speed, with consequences that could be benign or altogether more sinister, depending on your point of view. High-definition cameras, combined with clever software capable of measuring scores of "nodal points" on our faces - the distance between the eyes, the length and width of the nose, for example - are now being paired with machine learning that makes the most of ever-enlarging image databases. Applications of the tech are popping up all round the world.
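In practice, modern systems tend to skip hand-measured distances and compare learned face embeddings instead. The sketch below uses the open-source face_recognition library to illustrate the idea; the image file names are placeholders, and the 0.6 threshold is simply the library's conventional default.

```python
# Sketch of embedding-based face matching: each detected face is reduced to a
# 128-dimensional vector, and identity is decided by vector distance.
# Uses the open-source face_recognition library; image paths are placeholders.
import face_recognition

known_image = face_recognition.load_image_file("helen_passport.jpg")
candidate_image = face_recognition.load_image_file("airport_camera.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
candidate_encodings = face_recognition.face_encodings(candidate_image)

for encoding in candidate_encodings:
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    # Smaller distance means more similar; ~0.6 is the library's usual cutoff.
    print("distance:", round(float(distance), 3), "match:", distance < 0.6)
```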


Can off the shelf AI Vision systems detect and censor art nude photographs? - DIY Photography

#artificialintelligence

Question: can AI vision systems from Microsoft and Google, which are available for free to anybody, identify NSFW (not safe for work, nudity) images? Can this identification be used to automatically censor images by blacking out or blurring NSFW areas of the image? Method: I spent a few hours over the weekend knocking together some very rough code in Microsoft Office to find files on my computer and send them to Google Vision and Microsoft Vision so they could be analysed. Result: yes, they did reasonably well at (a) identifying images that could need censoring and (b) identifying where on the image things should be blocked out.
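The author's code was rough Office/VBA; as a rough equivalent for the Google half of the experiment, here is a Python sketch against Cloud Vision's SafeSearch feature. The image paths are placeholders, and the client calls assume a recent google-cloud-vision release.

```python
# Sketch of the Google Vision half of the experiment: ask SafeSearch how
# likely an image is to be adult/racy and flag it for censoring.
# Requires google-cloud-vision and application credentials; paths are placeholders.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def needs_censoring(path):
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    # Likelihood is an enum: UNKNOWN, VERY_UNLIKELY ... VERY_LIKELY.
    return max(annotation.adult, annotation.racy) >= vision.Likelihood.LIKELY

for path in ["studio_portrait.jpg", "art_nude_01.jpg"]:
    print(path, "->", "censor" if needs_censoring(path) else "ok")
```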