Grindr has removed its controversial ethnicity filters

Mashable

The killing of George Floyd by police officers has spurred not only protests across the United States, but also -- often embarrassing -- responses from brands. The queer dating app Grindr offered its own statement on Twitter and Instagram on Monday, coinciding with the first day of Pride Month. Declaring "We will not be silent," the company pledged to take action, including donating to both Black Lives Matter and the Marsha P. Johnson Institute and removing its ethnicity filters in its next app release. "We will continue to fight racism on Grindr," the statement said, "both through dialogue with our community and a zero-tolerance policy for racism and hate speech on our platform." A Grindr spokesperson told Mashable that racism has no place in its community.


Grindr dating app removes ethnicity filter to support Black Lives Matter

The Guardian

Grindr is removing an "ethnicity filter" from its dating app as part of its support for the Black Lives Matter movement, the company announced on Monday. The controversial feature, limited to those who stump up £12.99 a month for the premium version of the app, allows users to sort search results based on reported ethnicity, height, weight and other characteristics. In a statement posted to Instagram, the company said: "We stand in solidarity with the #BlackLivesMatter movement and the hundreds of thousands of queer people of color who log in to our app every day. "We will continue to fight racism on Grindr, both through dialogue with our community and a zero-tolerance policy for racism and hate speech on our platform. As part of this commitment, and based on your feedback, we have decided to remove the ethnicity filter from our next release."


Dating app Grindr removes 'ethnicity filter' allowing users to search for potential partners by race

Daily Mail - Science & tech

Dating app Grindr has said it will remove its 'ethnicity filter' that allows users to search potential matches by race. Singletons prepared to pay £12.99 a month for the 'premium' service are currently able to sort users based on their ethnicity, weight, height, and other characteristics. But less than 24 hours after its tweet supporting 'Black Lives Matter' received widespread condemnation over the filter, the company has said it will delete it. Protests have rocked the US for six days following the death of George Floyd, who was filmed gasping 'I can't breathe' as an officer knelt on his neck in Minneapolis, Minnesota. Writing on Twitter, the app said: 'As part of our commitment to (Black Lives Matter), we have decided to remove the ethnicity filter from our next release.'


Android 11: Google postpones release of beta version of major new phone software update, saying 'now is not the time'

The Independent - Tech

Google will postpone the unveiling of its Android 11 update after declaring that "now is not the time". The announcement came as protests continued across US cities, following the death of George Floyd in police custody. It is one of a range of measures taken by the company to show its support for the Americans protesting against racial inequality. It also added a message to its search page that reads: "We stand in support of racial equality, and all those who search for it". YouTube, which is owned by the same company, has committed $1 million to the non-profit Center for Policing Equity in a move it said demonstrated "solidarity against racism and violence".


France's New Online Hate Speech Law Is Fundamentally Flawed

Slate

The solution to online hate speech seems so simple: Delete harmful content, rinse, repeat. But David Kaye, a law professor at the University of California, Irvine, and the U.N. special rapporteur on freedom of expression, says that while laws to regulate hate speech might seem promising, they often aren't that effective--and, perhaps worse, they can set dangerous precedents. This is why France's new social media law, which follows in Germany's footsteps, is controversial across the political spectrum there and abroad. On May 13, France passed "Lutte contre la haine sur internet" ("Fighting hate on the internet"), a law that requires social media platforms to rapidly take down hateful content. Comments that are discriminatory--based on race, gender, disability, sexual orientation, and religion--or sexually abusive have to be removed within 24 hours of being flagged by users.
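
In practice, the law's core requirement is a 24-hour service-level deadline attached to each user flag. The snippet below is a purely illustrative sketch of how a moderation queue might track flagged comments against such a deadline; it is not based on any platform's actual system, and the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Deadline the French law attaches to flagged hateful content
TAKEDOWN_WINDOW = timedelta(hours=24)

@dataclass
class FlaggedComment:
    comment_id: str
    flagged_at: datetime                   # when a user reported the comment
    removed_at: Optional[datetime] = None  # None while the comment is still online

    def deadline(self) -> datetime:
        return self.flagged_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # Still online past the 24-hour window means the platform is out of compliance
        return self.removed_at is None and now > self.deadline()

# Usage: find the flags a moderation team would have to act on first
now = datetime.now(timezone.utc)
queue = [
    FlaggedComment("c1", flagged_at=now - timedelta(hours=30)),
    FlaggedComment("c2", flagged_at=now - timedelta(hours=2)),
]
print([c.comment_id for c in queue if c.is_overdue(now)])  # ['c1']
```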


How China uses its massive surveillance apparatus to track its citizens, keep them in line

FOX News

China has amassed a vast collection of information about its people in recent years as the Chinese Communist Party continues to deploy its surveillance apparatus to exercise control over its 1.4 billion inhabitants at the expense of privacy. It has spent billions to purchase facial recognition, artificial intelligence and other digital technologies to add to its network of monitoring systems.


Human Rights Must be Front and Center for Artificial Intelligence, IoT, and Privacy - Cisco Blogs

#artificialintelligence

Artificial intelligence and the internet of things have the potential to drive unprecedented productivity, reduce pollution, and improve human health. With the rollout of advanced, high-speed wireless connectivity known as 5G in the next few years, the number of connected devices will explode: 28.5 billion devices are expected to be networked by 2022, growing to 300 billion connected devices and things by 2030. Examples abound: the city of Barcelona saves $58 million a year using IoT sensors for connected water management; the World Food Programme's Mobile Vulnerability Analysis and Mapping tool is piloting AI-enabled chatbots to enhance data collection, reach beneficiaries in new ways, and become more effective as a global hunger organization; and my own Apple Watch warns me when my heart is beating abnormally fast (to date, fortunately, only when I'm intentionally exercising). To help networks themselves work better, Cisco recently launched Encrypted Traffic Analytics, an AI-driven solution that uses machine learning to analyze encrypted network traffic and automatically identify and eliminate malware threats, allowing the information to remain encrypted and helping protect privacy and data security simultaneously. Technological transformation must be managed to protect and enhance the fundamental societal values of human freedom, autonomy and privacy.
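
Cisco has not published the internals of Encrypted Traffic Analytics here, but the underlying idea, classifying traffic from metadata that stays visible even when payloads are encrypted, can be sketched with a generic classifier. The feature choices and synthetic data below are assumptions for illustration only, not Cisco's actual model.

```python
# Illustrative sketch: label network flows as benign or malicious from flow
# metadata (packet sizes, timing, byte counts) without decrypting payloads.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [mean packet size, inter-arrival jitter (ms), total bytes, flow duration (s)]
benign = rng.normal([800, 20, 5e5, 30], [200, 5, 1e5, 10], size=(500, 4))
malicious = rng.normal([300, 80, 5e4, 5], [100, 20, 2e4, 2], size=(500, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```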


Facial recognition is in London. So how should we regulate it?

#artificialintelligence

As the first step on the road to a powerful, high-tech surveillance apparatus, it was a little underwhelming: a blue van topped by almost comically intrusive cameras, a few police officers staring intently but ineffectually at their smartphones, and a lot of bemused shoppers. As unimpressive as the moment may have been, however, the decision by London's Metropolitan Police to expand its use of live facial recognition (LFR) marks a significant shift in the debate over privacy, security and surveillance in public spaces. Despite dismal accuracy results in earlier trials, the Metropolitan Police Service (MPS) has announced that it is pushing ahead with the roll-out of LFR at locations across London. The MPS says that cameras will be focused on a small targeted area "where intelligence suggests [they] are most likely to locate serious offenders," and will match faces against a database of individuals wanted by police. The cameras will be accompanied by clear signposting and officers handing out leaflets (it is unclear why the MPS thinks that serious offenders would choose to walk through an area full of police officers handing out leaflets to passersby).
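
At its core, watchlist matching of this kind compares a face embedding captured from the camera feed against embeddings of people on a watchlist and raises an alert when the similarity clears a threshold; that threshold is where the accuracy problems noted above bite, since lowering it catches more offenders but also flags more innocent passers-by. The sketch below is a generic illustration with made-up embeddings, not the MPS system.

```python
# Generic sketch of watchlist matching with face embeddings (illustrative only).
# Real systems compute embeddings with a trained face-recognition network;
# here random unit vectors stand in for them.
import numpy as np

rng = np.random.default_rng(42)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

watchlist = normalize(rng.normal(size=(1000, 128)))  # embeddings of wanted individuals
probe = normalize(rng.normal(size=128))              # embedding from a camera frame

similarities = watchlist @ probe                     # cosine similarity (unit-length vectors)
best = int(np.argmax(similarities))

THRESHOLD = 0.6  # arbitrary: lower values raise hit rates but also false positives
if similarities[best] >= THRESHOLD:
    print(f"possible match: watchlist entry {best} (score {similarities[best]:.2f})")
else:
    print("no match above threshold")
```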


Taming State Surveillance: Reconciling Camera Surveillance Technology with Human Rights Obligations - HillNotes

#artificialintelligence

Centralized state camera surveillance is but one component of a burgeoning practice of personal data collection paired with artificial intelligence (AI). Camera surveillance is not inherently unlawful and has long been used at border-crossings, airports, and other high-security areas. However, recent technological advances have contributed to the spread of a more intrusive form of video surveillance that includes powerful, if imperfect, facial recognition abilities and AI decision making. While the technology offers states the ability to, among other things, identify lost children, identify criminals, and monitor threats, the new capacity also raises significant human rights issues. The use of camera surveillance has grown with leaps in technology, including the introduction of videocassette recorders in the 1970s and the internet in the 1990s.


Exploring Gender Imbalance in AI: Numbers, Trends, and Discussions

#artificialintelligence

March is Women's History Month in the US, the UK and Australia, a time to honour women's sometimes underrated contributions to society. According to the US National Women's History Museum, Women's History Month started in 1978 as a local "Women's History Week" celebration in California, with organizers selecting the week to correspond with International Women's Day on March 8. In 1987, the US Congress passed Public Law 100-9 designating March as Women's History Month. The past few decades have seen a steady increase in the number of women studying and excelling in STEM fields. But this is not so in computer science -- the number of women studying or pursuing a career in the field has been decreasing since around 1990.