Smart assistants are designed to tackle a whole host of everyday tasks, but some users are unhappy that this seems to include taking a stand on political issues. Amazon's Alexa has come under fire on social media thanks to the AI-powered speaker's thoughts on a number of hot-button topics. Some have branded Alexa a 'social justice warrior' because of her responses to questions on subjects ranging from feminism to the Black Lives Matter movement. The response has been particularly vociferous among the alt-right community on social media.
Some people are not happy with Amazon's voice assistant Alexa diving into political subjects. Amazon's voice assistant Alexa can help people keep up with their daily lives by providing reminders, weather updates and help with other tasks. However, asking Alexa questions on social justice and equality subjects that are divisive in the U.S. has sparked criticism against the voice assistant. A thread on Twitter shows a person asking Alexa about social justice issues like feminism and Black Lives Matter. Here is how Alexa responded, according to the uploaded video: Question: Do White Lives Matter?
In an apparently separate case, a student who attended the Mashrou' Leila concert was arrested hours later after being "caught in the act," the police said. Homosexuality is not illegal in Egypt, but the authorities frequently prosecute gay men for homosexuality and women for prostitution under loosely-worded laws that prohibit immorality and "habitual debauchery." The Arab Spring ushered in a brief period of respite, with a sharp rise in the use of dating apps as gay people socialized openly at parties and in bars. On Monday a court convicted Khaled Ali, a lawyer and opposition figure, for making an obscene finger gesture outside a Cairo courthouse last year after he and other lawyers won a case against the government.
And since we're on the car insurance subject, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to address bias in data science, this kind of usually unintentional discrimination will become more and more normal, at odds with a society and institutions that on the human side are trying their best to evolve past bias, and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data that might promote bias, such as race, gender, religious or political beliefs, health status, and more, to make automated decisions (with some verified exceptions). It's time to make that training broader: to teach everyone involved about the ways their decisions while building tools may affect minorities, and to accompany that with the relevant technical knowledge to prevent it from happening.
A recent ban affecting three of China's biggest online platforms aimed at "cleaning up the air in cyberspace" is just the latest government crackdown on user-generated content, and especially live streaming. This edict, issued by China's State Administration of Press, Publication, Radio, Film and Television (SAPPRFT) in June, affects video on the social media platform Sina Weibo, as well as video platforms Ifeng and AcFun. In 2014, for example, one of China's biggest online video platforms LETV began removing its app that allowed TV users to access online video, reportedly due to SAPPRFT requirements. China's largest social media network, Sina Weibo, launched an app named Yi Zhibo in 2016 that allows live streaming of games, talent shows and news.
There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of people. The result can be that the decision-making becomes inherently biased, albeit accidentally. Try searching online for an image of "hands" or "babies" using any of the big search engines and you are likely to find largely white results. In 2015, graphic designer Johanna Burai created the World White Web project after searching for an image of human hands and finding exclusively white hands in the top image results on Google. Her website offers "alternative" hand pictures that can be used by content creators online to redress the balance and thus be picked up by the search engine.
Google is currently in a bit of hot water with some of the world's most powerful companies, which are peeved that their ads have been appearing next to racist, anti-Semitic, and terrorist videos on YouTube. Recent reports brought the issue to light, and in response, brands have been pulling ad campaigns while Google pours more AI resources into verifying videos' content. But the problem is, the search giant's current algorithms might just not be up to the task. A recent research paper, published by the University of Washington and spotted by Quartz, makes the problem clear. It tests Google's Cloud Video Intelligence API, which is designed to let clients automatically classify the content of videos using object recognition.
An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, it made a Facebook post advertising itself. The term "banglas" is a racist term for the Bangladeshi migrant workers in Singapore. In an earlier Medium post he made in December, Eng said his app would allow filtering by "prestigious schools."
"Political speech and the freedom to engage in political activity without being subjected to undue government scrutiny are at the heart of the First Amendment," ACLU of Washington staff attorney La Rond Baker said in a statement announcing the filing. "Further, the Fourth Amendment prohibits the government from performing broad fishing expeditions into private affairs. And seizing information from Facebook accounts simply because they are associated with protests of the government violates these core constitutional principles."
Tim Cook's firm has become a founding member of the organisation, which includes Google/DeepMind, Microsoft, IBM, Facebook and Amazon. Apple's Tom Gruber, the chief technology officer of AI personal assistant Siri, has joined the group of trustees running the non-profit partnership. As well as Gruber, the Partnership on AI has announced six independent board members, including Dario Amodei from Elon Musk's OpenAI, Eric Sears of the MacArthur Foundation, and Deirdre Mulligan from UC Berkeley. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way.