US Senators Roy Blunt and Brian Schatz want to protect people's facial recognition data and make it much harder to sell now that such information is treated as currency. The lawmakers have introduced the bipartisan Commercial Facial Recognition Privacy Act of 2019, which prohibits companies from collecting and resharing face data for identifying or tracking purposes without people's consent. The Senators drafted the bill because, while facial recognition has been used for security and surveillance for decades, it's "now being developed at increasing rates for commercial applications." They argue that many people aren't aware that the technology is being used in public spaces and that companies can collect identifiable info to share or sell to third parties -- similar to how carriers have been selling location data to bounty hunters for years. In addition to prohibiting companies from redistributing or disseminating data, the bill would also require them to notify customers whenever facial recognition is in use.
Facial recognition can log you into your iPhone, track criminals through crowds and identify loyal customers in stores. The technology -- which is imperfect but improving rapidly -- is based on algorithms that learn how to recognize human faces and the hundreds of ways in which each one is unique. To do this well, the algorithms must be fed hundreds of thousands of images of a diverse array of faces. Increasingly, those photos are coming from the internet, where they're swept up by the millions without the knowledge of the people who posted them, categorized by age, gender, skin tone and dozens of other metrics, and shared with researchers at universities and companies. As the algorithms get more advanced -- meaning they are better able to identify women and people of color, a task they have historically struggled with -- legal experts and civil rights advocates are sounding the alarm on researchers' use of photos of ordinary people.
Is the age of intelligent machines bringing gender equality nearer or turning back the clock? Gemma Lloyd, co-founder of Work180, an Australia-based international jobs network for women, is proud of her engineering team, in which women outnumber men. She just wishes there were more female engineers generally. "If there aren't enough women in the mix, the products won't be as good as they could be, and they certainly won't be what society wants -- because women are 50 per cent of society," she says. The lack of female technologists -- only 22 per cent of artificial intelligence professionals globally are female, for instance -- is a frustration for many gender equality advocates.
Every sibling relationship has its clichés. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her younger sister Zo, your teenage BFF with #friendgoals, are downright Shakespearean. When Microsoft released Tay on Twitter in 2016, an organized trolling effort took advantage of her social-learning abilities and immediately flooded the bot with alt-right slurs and slogans. Tay copied their messages and spewed them back out, forcing Microsoft to take her offline after only 16 hours and apologize. A few months after Tay's disastrous debut, Microsoft quietly released Zo, a second English-language chatbot, on Messenger, Kik, Skype, Twitter, and GroupMe.
Too often we're told that if Australia is to compete globally in developing AI products, Australian researchers and companies must not be fettered by human rights concerns, because other countries certainly aren't. China, for example, is investing heavily in AI technology such as facial recognition to support its "social credit score" system, which involves conducting precise and determinative surveillance of its citizens. In the context of a global AI arms race, it is argued, Australia can't compete with one arm tied behind its back.
As artificial intelligence gives nations high-speed facial recognition capabilities, surveillance societies are rising around the world, blurring the lines between privacy and security. This is due to the enormous scale of change enabled by significant advancements in technology: digital imaging, high-speed processing, skin texture analysis, thermal cameras, machine learning, 3D sensors, speech recognition, mood recognition and more. These advancements break technical barriers, allowing the extensive collection, recording, storage, analysis and application of digital data and information. Moreover, the explosion in processing power allows powerful computers using artificial intelligence and machine learning to perform facial recognition with very high accuracy. Today, facial recognition technology can be used not only to identify individuals but also to uncover additional personally identifiable information (PII), such as photos, blog posts, social networking profiles and internet behavior, through facial features alone.
Artificial Intelligence (AI) researchers may look back on 2018 as the year that human rights became crucial to advancing the technology. Over the last six months of the year, a slew of reports focused on "artificial intelligence and human rights" were published by a variety of well-respected entities, including the most recent report of the UN Special Rapporteur on freedom of opinion and expression, Berkman Klein's report on "Artificial Intelligence & Human Rights: Opportunities & Risks", Access Now's "Human Rights in the Age of Artificial Intelligence" report, the Council of Europe's Draft Recommendation of the Committee of Ministers to member States on human rights impacts of algorithmic systems, and Business for Social Responsibility's "Artificial Intelligence: A Rights-Based Blueprint for Business" series. Earlier in the year, I was asked to help kick off a workshop organized by Data & Society on the same topic, and I wrote this post based on the remarks I prepared for that conference, supplemented by a few takeaways from recent reports. I come to this issue as a trained lawyer who spent the last decade working on human rights, with a special focus on the issues of "business and human rights" and human rights online. I have seen how the international human rights (IHR) framework can enable better understanding and contestation of human rights norms, monitor and mitigate the risk of human rights abuses, generate input and output legitimacy, and facilitate trust and coalition-building.
BOGOTA, Colombia – Human Rights Watch is denouncing Colombia's government for appointing at least nine officers to key army positions despite credible evidence implicating them in serious human rights violations during the country's long civil conflict. The human rights organization released a report Wednesday condemning the government of President Ivan Duque for promoting Gen. Nicasio de Jesus Martinez Espinel as army chief and promoting eight other officers linked to abuses. The men are "credibly implicated" in what is known as the "false positive" scandal, in which security forces killed several thousand civilians during the height of the military's offensive against leftist guerrillas and counted them as rebels to inflate combat deaths to obtain coveted bonuses, the group said. "The Colombian government should be investigating officers credibly linked to extrajudicial executions, not appointing them to the army's top command positions," said Jose Miguel Vivanco, Americas director for Human Rights Watch. He said their appointments send a troubling message to troops: "That engaging in these abuses may not be an obstacle for career success."
Every once in a while, I'll open something up that hasn't seen the light of day for a while. It always yields discoveries, forgotten memories and much more. Sometimes I'll open something up because it needs to be cleaned or fixed, as was the case recently with my father's 1955 Seeburg jukebox, long sitting idle in the basement. As with anything that is aging and has moving parts, it needed some care, my father having long since left this life and his jukebox behind. A rare quiet hour with a piece of your childhood can reveal much.
As reported by the New York Times, new tests of facial recognition technology suggest that Amazon's system has more difficulty identifying the gender of female and darker-skinned faces than similar facial recognition services provided by IBM and Microsoft. Amazon's Rekognition is a software application that sets out to identify specific facial features by comparing similarities in a large volume of photographs. The study is significant, given that Amazon has been marketing its facial recognition technology to police departments and federal agencies, presenting the technology as an additional tool to help law enforcement identify suspects more rapidly. This practice has been challenged by the American Civil Liberties Union (see: "Orlando begins testing Amazon's facial recognition in public"). The new study comes from Inioluwa Deborah Raji (University of Toronto) and Joy Buolamwini (Massachusetts Institute of Technology) and is titled "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products."