While technological progress has always been a driving factor for societies, AI-based technologies stand out as a game changer. Offering vast opportunities for the benefit of people, they have the power to significantly influence the exercise of human rights and to disrupt the functioning of democratic institutions. Their effects are transversal and evident in all spheres, as AI gadgets become part of daily routines, gradually able to predict, reinforce and possibly control human behaviours. The Council of Europe, the continent's leading human rights organisation, has identified AI as a subject deserving its closest attention and is addressing its impacts on human rights, democracy and the rule of law.
A new series of last year's TV hit The Rap of China has kicked off, and with it comes the show's first female judge, joining the likes of Kris Wu and MC Hotdog. The selection of Hong Kong singer G.E.M., real name Gloria Tang Tsz-Kei, has raised some eyebrows among critics due to the use of artificial intelligence in her selection, as well as her hip-hop credentials. "G.E.M. joining The Rap of China is a bit of an embarrassment as her previous image falls short in terms of rap elements," one music critic said after learning of the singer's inclusion on the talent show's judging panel. Although the news has drawn widespread criticism from a host of music critics, the move was not unexpected. With G.E.M.'s involvement, the show hit a new peak in viewing figures in its first week.
A couple of years ago, as Brian Brackeen was preparing to pitch his facial recognition software to a potential customer as a convenient, secure alternative to passwords, the software stopped working. Panicked, he tried adjusting the room's lighting, then the Wi-Fi connection, before he realized the problem was his face. Brackeen is black, but like most facial recognition developers, he'd trained his algorithms with a set of mostly white faces. He got a white, blond colleague to pose for the demo, and they closed the deal. It was a Pyrrhic victory, he says: "It was like having your own child not recognize you."
When Google Translate converts news articles written in Spanish into English, phrases referring to women often become 'he said' or 'he wrote'. Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asians as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant. These are just a few of the many examples uncovered so far of artificial intelligence (AI) applications systematically discriminating against specific populations. Biased decision-making is hardly unique to AI, but as many researchers have noted [1], the growing scope of AI makes it particularly important to address.
Microsoft President Brad Smith speaks at the 2017 annual Microsoft shareholders meeting in Bellevue, WA. (AP Photo/Elaine Thompson)
This morning Microsoft President Brad Smith posted an essay on the company's blog that raises important questions about the human rights challenges related to facial recognition technology. Microsoft, and Smith in particular, have led the tech industry in addressing human rights issues that inevitably grow from the spreading use of emerging technologies. As Smith points out, these new technological capacities are often a force for good, but they are also subject to manipulation and can cause great harm. What is clear is that these new technologies are now part of our lives and will play an ever-greater role in the future. Smith rightly focuses on vexing challenges relating to the governance of facial recognition technologies, a rapidly evolving area that requires new models in which both governments and companies assume greater responsibilities.
Photos from a summer camp are posted to the camp's website so parents can view them. Venture capital-backed Waldo Photos has been selling a service to identify specific children in the flood of photos provided daily to parents by many sleep-away camps. Camps working with the Austin, Texas-based company give parents a private code to sign up. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photos to the child's parents.
Facial recognition tech is becoming more sophisticated, with some firms claiming it can even read our emotions and detect suspicious behaviour. But what implications does this have for privacy and civil liberties? Facial recognition tech has been around for decades, but it has progressed in leaps and bounds in recent years due to advances in computer vision and artificial intelligence (AI), tech experts say. It is now being used to identify people at borders, unlock smartphones, spot criminals, and authenticate banking transactions. But some tech firms claim it can also assess our emotional state.
Microsoft has called on the U.S. government to step up and regulate facial recognition technology. In a blog post, Microsoft President Brad Smith called for "thoughtful government regulation" and "the development of norms" around using facial recognition technology. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime," Smith wrote. Smith also said Microsoft, which has supplied facial recognition to some businesses, has already rejected some customers' requests to deploy the technology in situations involving "human rights risks."
Technology might produce a spike in slavery, and it's not related to your smartphone addiction. The Human Rights Outlook 2018 report, released on Thursday by risk consultancy Verisk Maplecroft, highlights how the rise in automation and robot manufacturing could force millions of people in Southeast Asia out of their jobs, with women disproportionately affected in the garment, textile and footwear industry. In both Vietnam and Cambodia, for example, over 85% of jobs in those sectors are at high risk of automation, and over 76% of those jobs are held by women, the study says. Automation might lead to a downward spiral, making exploited workers even more vulnerable to labor abuses and easy prey for human traffickers and slaveholders as they compete for a diminishing supply of low-skilled and low-paid jobs. "Without concrete measures from governments to adapt and educate future generations to function alongside machines, it could be a race to the bottom for many workers," Alexandra Channer, the consultancy's head of human rights, said in a statement.