Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after it launched them. The app, which initially became famous for its features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. They allowed a user to edit their image to fit one of four categories: Caucasian, Asian, Indian or Black. Users rapidly pointed out that the feature wasn't particularly sensitively handled: technology site The Verge described it as "tantamount to a sort of digital blackface, 'dressing up' as different ethnicities", while TechCrunch said the app "seems to be getting a little too focused on races rather than faces". The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects".
Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and is also a Fulbright fellow, an Astronaut scholar and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination. How did you become interested in that area? When I was a computer science undergraduate I was working on social robotics – robots that use computer vision to detect the humans they socialise with.
The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. FaceApp is touted as an app which uses "neural networks" to change facial characteristics, adding smiles or making users look older or younger. But users noticed that one of the options, initially labelled "hot", made people look whiter. One user wrote: "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white." Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, which he said was a side-effect of the "neural network".
An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
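Research of this kind typically measures bias by comparing the similarity of word vectors – for example, whether an occupation word sits closer to "he" than to "she" in the embedding space. A toy sketch of the idea, using hand-made vectors purely for illustration (real studies use embeddings trained on large corpora, such as GloVe or word2vec):

```python
import math

# Hand-made toy vectors for illustration only; trained embeddings
# would be learned from large amounts of online text.
vectors = {
    "programmer": [0.9, 0.1, 0.3],
    "nurse":      [0.1, 0.9, 0.4],
    "he":         [0.8, 0.2, 0.3],
    "she":        [0.2, 0.8, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def gender_association(word):
    """Positive -> closer to 'he'; negative -> closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_association("programmer"))  # positive in this toy data
print(gender_association("nurse"))       # negative in this toy data
```

If the training text associates an occupation more often with one gender, the trained vectors end up encoding that association – which is how the "deeply ingrained biases" described above surface in downstream systems.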
The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot to helping refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. For those in the UK, it helps them apply for asylum support. The London-born developer worked with lawyers in each country, as well as speaking to asylum seekers whose applications have been successful.
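A chat interface like this typically works by asking a fixed sequence of questions and using the answers to populate the fields of a legal form. A minimal sketch of that question-and-answer pattern (the questions and field names here are hypothetical, not DoNotPay's actual logic):

```python
# Hypothetical intake flow: ask each question in turn and collect
# the answers into a form-like dict. Not DoNotPay's actual code.
QUESTIONS = [
    ("full_name", "What is your full name?"),
    ("country", "Which country are you applying in?"),
    ("arrival_date", "When did you arrive? (YYYY-MM-DD)"),
]

def run_intake(answer_fn):
    """Ask each question via answer_fn and return the filled form."""
    form = {}
    for field, prompt in QUESTIONS:
        form[field] = answer_fn(prompt)
    return form

# Simulated conversation standing in for a messaging channel:
scripted = iter(["Jane Doe", "US", "2017-01-15"])
form = run_intake(lambda prompt: next(scripted))
print(form)
```

In a real deployment the `answer_fn` callback would be wired to a messaging channel such as Facebook Messenger, and the resulting form mapped onto the official application document.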
Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. The ACLU's records prompted the companies to announce that Dataminr had terminated access for all fusion centers and would no longer provide social media surveillance tools to any local, state or federal government entities. The government centers are partnerships between agencies that work to collect vast amounts of information purportedly to analyze "threats". The spy centers, according to the ACLU, target protesters, journalists and others protected by free speech rights while also racially profiling people deemed "suspicious" by law enforcement.
Web users in the UK will be banned from accessing websites portraying a range of non-conventional sexual acts, under a little-discussed clause to a government bill currently going through parliament. The proposal, part of the digital economy bill, would force internet service providers to block sites hosting content that would not be certified for commercial DVD sale by the British Board of Film Classification (BBFC). It is contained within provisions of the bill designed to enforce strict age verification checks to stop children accessing adult websites. After pressure from MPs, the culture secretary, Karen Bradley, announced on Saturday that the government would amend the bill to include powers to block non-compliant websites. In order to comply with the censorship rules, many mainstream adult websites would have to render whole sections inaccessible to UK audiences.
Microsoft has released open source tools for people to build their own chatbots, as it set out its view of the immediate future of artificial intelligence as conversational aids similar to its backfiring Tay experiment. The company's chief executive Satya Nadella took to the stage at Microsoft's Build developer conference to announce a new BotFramework, which will allow developers to build bots that respond to chat messages sent via Skype, Slack, Telegram, GroupMe, emails and text messages. "Bots are the new apps," Nadella said. The announcement came on the same day that the company had had to pull its chatbot experiment Tay from Twitter after it tweeted about taking drugs and started spamming users. It had only been active again for a few hours after previously being deactivated for making racist and sexist comments and denying that the Holocaust happened.
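Frameworks of this kind generally wrap a single receive-message/send-reply handler so that the same bot logic can serve every channel, with each connector translating its wire format to plain text and back. A toy illustration of that pattern (the rules below are hypothetical, not the actual BotFramework API):

```python
# Toy message handler showing the receive/reply pattern that bot
# frameworks wrap around channels such as Skype, Slack or SMS.
# Hypothetical rules for illustration; not Microsoft's API.
def handle_message(text: str) -> str:
    text = text.lower().strip()
    if "hello" in text:
        return "Hello! How can I help?"
    if "weather" in text:
        return "I can't check the weather yet."
    return "Sorry, I didn't understand that."

# One handler, any channel: a connector only has to deliver the
# incoming text and relay the returned reply.
print(handle_message("Hello bot"))  # -> "Hello! How can I help?"
```

Tay's failure mode is worth noting against this sketch: a purely rule-based handler like the one above cannot be "taught" offensive replies by users, whereas a bot that learns from incoming messages can be.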