A growing backlash against face recognition suggests the technology has reached a crucial tipping point, as battles over its use erupt on numerous fronts. Face-tracking cameras have been trialled in public by at least three UK police forces in the last four years. A court case against one force, South Wales Police, began earlier this week, backed by the human rights group Liberty. Ed Bridges, an office worker from Cardiff whose image was captured during a test in 2017, says the technology is an unlawful violation of privacy, an accusation the force denies. Avoiding the cameras' gaze has got others into trouble.
Today global history was made, as the first intergovernmental standard on artificial intelligence (AI) was adopted by the OECD, a geopolitical milestone. There is a worldwide investment rush underway in AI technology. Both public and private funding are pouring into AI, as nations and corporations seek to gain economic benefits and competitive advantages through automation. IDC estimates that global spending on cognitive and AI systems will reach $57.6 billion by 2021. Last year the UK government announced plans to invest £300 million in AI.
Welcome to EURACTIV's Digital Brief, your weekly update on all things digital in the EU. You can subscribe to the newsletter here. With the Brits and the Dutch heading to the polls today, the big news of the week is that Facebook has removed around 80 pages spreading fake news or using tactics aimed at unfairly influencing the polls. The takedowns followed a discovery by the human rights group Avaaz, which uncovered far-right disinformation networks in France, the UK, Germany, Spain, Italy and Poland posting content that was viewed an estimated 533 million times over the past three months. EURACTIV Digital went to investigate further and paid Avaaz a visit at its recently opened 'Citizens' War Room' in Brussels.
Disinformation and fake news are hardly new, but the reach of both is increasing exponentially because of the power of social media. Websites like Twitter and Facebook serve up information, images and events based on what they know about our likes, dislikes and desires, thereby reinforcing our prejudices and undermining open and tolerant debate. But fake news is yesterday's news. Deep fakes are where fake news might be moving next, and they could have a bigger impact while being even harder to spot, address or undermine. A deep fake uses deep learning, a branch of machine learning or artificial intelligence, to marry digital images with fake or forged audio.
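To make the deep-learning idea concrete: the classic face-swap deep fake trains one shared encoder on footage of two people, plus a separate decoder per identity; swapping decoders at generation time re-renders person A's expression with person B's appearance. Below is a minimal NumPy sketch of that shared-encoder/two-decoder setup, using random vectors as stand-ins for image frames (the data, dimensions and learning rate are all illustrative assumptions, not taken from any real deep-fake system).

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_code = 16, 4                      # toy "frame" size and latent code size
faces_a = rng.normal(size=(100, d_in))    # stand-in frames of person A
faces_b = rng.normal(size=(100, d_in))    # stand-in frames of person B

W_enc = rng.normal(scale=0.1, size=(d_in, d_code))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(d_code, d_in))  # decoder for identity A
W_dec_b = rng.normal(scale=0.1, size=(d_code, d_in))  # decoder for identity B

def recon_err(X, W_e, W_d):
    """Mean squared reconstruction error of X through encoder + decoder."""
    return float(np.mean((X @ W_e @ W_d - X) ** 2))

err_before = recon_err(faces_a, W_enc, W_dec_a)

lr = 0.01
for _ in range(300):
    for X, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        Z = X @ W_enc            # encode with the shared encoder
        X_hat = Z @ W_dec        # decode with the identity-specific decoder
        err = X_hat - X
        # plain gradient descent on the squared reconstruction error
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec   # in-place update keeps W_dec_a / W_dec_b current
        W_enc -= lr * grad_enc

err_after = recon_err(faces_a, W_enc, W_dec_a)

# The "deep fake" step: encode a frame of A, but decode it with B's decoder.
fake = faces_a[:1] @ W_enc @ W_dec_b
```

Real systems use deep convolutional networks and real video frames rather than single linear layers and random vectors, but the swap-the-decoder trick is the same.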
Hundreds of millions of people across Europe are set to cast their vote over the coming days in the European elections, which run from Thursday to Sunday. Many of them will use an online "questionnaire", such as YourVoteMatters in the UK and several others elsewhere, to navigate through the complex world of European politics and find the party most closely aligned with their views. But what if there was an artificial intelligence that could tell you how to vote instead? "AI will be used when making lots of different decisions, whether it is dating or in business. Why would we not use it in elections?"
Beginning as early as next year, many people are expected to have more conversations with digital voice assistants than with their spouse. Presently, the vast majority of these assistants, from Amazon's Alexa to Microsoft's Cortana, are projected as female in name, voice and 'personality'. 'I'd blush if I could', a new UNESCO publication produced in collaboration with Germany and the EQUALS Skills Coalition, holds a critical lens to this growing global practice. The title of the publication borrows its name from the response that Siri, Apple's female-gendered voice assistant used by nearly half a billion people, would give when a human user told 'her', "Hey Siri, you're a bi***." Siri's submissiveness in the face of gender abuse, and the servility expressed by so many other digital assistants projected as young women, provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education. According to Saniye Gülser Corat, UNESCO's Director for Gender Equality, "The world needs to pay much closer attention to how, when and whether AI technologies are gendered and, crucially, who is gendering them."
For those who like their dessert first: here's the finished model, and here's the colab for this example. A rather empty user interface should appear on your screen. In the sidebar, click the Library dropdown and select TensorFlow; the code for our model will now use TensorFlow instead of PyTorch. Next, click the Theme dropdown and select "orange".