"You good?" a man asked two narcotics detectives late in the summer of 2015. The detectives had just finished an undercover drug deal in Brentwood, a predominately black neighborhood in Jacksonville, Florida, that is among the poorest in the country, when the man unexpectedly approached them. One of the detectives responded that he was looking for $50 worth of "hard"– slang for crack cocaine. The man disappeared into a nearby apartment and came back out to fulfill the detective's request, swapping the drugs for money. "You see me around, my name is Midnight," the dealer said as he left.
Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after they launched. The app, which initially became famous for features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects". One user tweeted: "Wow... FaceApp really setting the bar for racist AR with its awful new update that includes Black, Indian and Asian 'race filters'." It is not even the first time the app has waded into such a controversy.
Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. When I was a computer science undergraduate, I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you ever wonder whether, had those admissions decisions been made by algorithms, you might not have ended up where you are?
The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white," one user wrote. Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, attributing it to the app's neural network: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."
An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
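The bias the research describes is, at bottom, a geometric fact about learned word representations: words that co-occur in similar contexts end up with nearby vectors, so occupational words can drift towards gendered pronouns. A minimal sketch of that idea, using invented three-number "vectors" rather than real learned embeddings (all values below are hypothetical, chosen only to illustrate the measurement):

```python
import math

# Toy, hand-made "word vectors" for illustration only. Real embeddings
# such as GloVe or word2vec are learned from billions of words of online
# text -- which is exactly where the ingrained biases creep in.
vectors = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.2, 0.8, 0.3],
    "he":     [0.8, 0.2, 0.1],
    "she":    [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: approaches 1.0 as two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In a biased corpus, "doctor" co-occurs more often with "he" than with
# "she", so its learned vector lands closer to "he" -- the statistical
# residue of how the words were used in the training text.
print(cosine(vectors["doctor"], vectors["he"]))   # higher similarity
print(cosine(vectors["doctor"], vectors["she"]))  # lower similarity
```

Researchers probe real embeddings with exactly this kind of similarity comparison, averaged over many word pairs, to quantify how strongly gendered or racial associations have been absorbed from the training data.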
The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot to the task of helping refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. Those in the UK are told they need to apply in person, and the bot helps fill out an ASF1 form for asylum support.
Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. Records obtained by the ACLU revealed that a fusion center in southern California had access to Dataminr's "geospatial analysis application", which allowed the government to do location-based tracking as well as searches tied to keywords. In October, the ACLU obtained government records revealing that Twitter, Facebook and Instagram had provided users' data to Geofeedia, a software company that aids police surveillance programs and has targeted protesters of color.
The proposal, part of the digital economy bill, would force internet service providers to block sites hosting content that would not be certified for commercial DVD sale by the British Board of Film Classification (BBFC). It is contained within provisions of the bill designed to enforce strict age verification checks to stop children accessing adult websites. Pictures and videos that show spanking, whipping or caning that leaves marks, and sex acts involving urination, female ejaculation or menstruation as well as sex in public are likely to be caught by the ban – in effect turning back the clock on Britain's censorship regime to the pre-internet era. A spokeswoman for the BBFC said it would also check whether sites host "pornographic content that we would refuse to classify".
In a promotional video for Amazon's Echo virtual assistant device, a young girl no older than 12 asks excitedly: "Is it for me?" "This is part of the initial wave of marketing to children using the internet of things," says Jeff Chester, executive director of the Center for Digital Democracy, a privacy advocacy group that helped write the Children's Online Privacy Protection Act (COPPA). Khaliah Barnes, associate director of the Electronic Privacy Information Center (EPIC), believes that by showing pre-teenage children using voice-activated AI devices, Amazon, Google and Apple are admitting their services are aimed at youngsters. "Online devices have replaced TV as the babysitter, and companies will know there's a child there by the ... interaction." However, COPPA forbids a company from storing a child's personal information, including recordings of their voice, without the explicit, verifiable consent of their parents.
The company's chief executive, Satya Nadella, took to the stage at Microsoft's Build developer conference to announce a new Bot Framework, which will allow developers to build bots that respond to chat messages sent via Skype, Slack, Telegram, GroupMe, email and text message. The announcement came on the same day that the company had to pull its chatbot experiment Tay from Twitter after it tweeted about taking drugs and started spamming users. Nadella said: "As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence." For Microsoft, the move is about injecting itself into the lives of those who are not Microsoft service users.