Deepfakes have made their way onto the radar of much of the First World. As with many technology phenomena, deepfakes have their origins in pornography (the Reddit page that originally popularized deepfakes was banned in early 2018). In April of this year, I was asked by UNICRI (the crime and justice wing of the UN) to present the risks and opportunities of deepfakes and programmatically generated content at United Nations headquarters for a convening titled "Artificial Intelligence and Robotics: Reshaping the Future of Crime, Terrorism, and Security." Rather than merely speaking about the topic, we decided it would be better to showcase the technology to the UN, IGO, and law enforcement leaders attending the event. So we took a video of UNICRI Director Ms. Bettina Tucci Bartsiotas and created a deepfake, altering her words and statements by mapping a model of her face onto another person.
On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial inequality found in audits of facial recognition software from companies like Amazon and Microsoft, as well as the dystopian surveillance happening now in China. At the core of the debate over regulating facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be permanently banned. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."
San Francisco recently passed, in an 8-to-1 vote, a ban on the use of facial recognition technologies by local agencies. The move is unlikely to be a one-off, either. Other local governments are exploring similar prohibitions to address the Orwellian risk that the technology poses to people's privacy. "In the mad dash towards AI and analytics, we often turn a blind eye to their long-range societal implications, which can lead to startling conclusions," said Kon Leong, CEO of ZL Technologies. Yet some tech companies are getting proactive.
The U.S. Department of Defense (DoD) visited Silicon Valley Thursday to ask for ethical guidance on how the military should develop or acquire autonomous systems. The public comment meeting was held as part of a Defense Innovation Board effort to create AI ethics guidelines and recommendations for the DoD. A draft copy of the report is due out this summer. Microsoft director of ethics and society Mira Lane posed a series of questions at the event, which was held at Stanford University. She argued that AI doesn't need to be implemented the way Hollywood has envisioned it and said it is imperative to consider the impact of AI on soldiers' lives, responsible use of the technology, and the consequences of an international AI arms race.
DUBAI, UNITED ARAB EMIRATES - Saudi Arabia does not want war but will not hesitate to defend itself against Iran, a top Saudi diplomat said Sunday, after the kingdom's energy sector was targeted this past week amid heightened tensions in the Persian Gulf. On Sunday night, a rocket crashed in the Iraqi capital's heavily fortified Green Zone, landing less than a mile from the U.S. Embassy, further stoking tensions. No casualties were reported in the apparent attack. Adel al-Jubeir, the minister of state for foreign affairs, spoke a week after four oil tankers, two of them Saudi, were targeted in an alleged act of sabotage off the coast of the United Arab Emirates, and days after Iran-allied Yemeni rebels claimed a drone attack on a Saudi oil pipeline. "The kingdom of Saudi Arabia does not want war in the region and does not strive for that … but at the same time, if the other side chooses war, the kingdom will fight this with all force and determination and it will defend itself, its citizens and its interests," al-Jubeir told reporters.
Our first article (What is AI?) highlighted that artificial intelligence already has a huge impact on our lives. People are concerned, with good reason, about AI replacing jobs or being misused. So here we take a broad look at the ethics of AI. AI is software: it is no more intrinsically good or bad than a database or a website. But because AI has great power, the way we apply it is critically important.
Here's what you need to know in business news. San Francisco's Board of Supervisors voted on Tuesday to prohibit the use of facial recognition technology within city limits. It's a somewhat symbolic move: the police there don't currently use the technology, and the places where it is in use, seaports and airports, are under federal jurisdiction and therefore unaffected by the new regulation. Meanwhile, the major television networks tried to sell their fall advertising slots in an annual pageant known as the upfronts. In a week of star-studded presentations, skits and boozy mingling, representatives of major advertisers flocked to New York to see what the networks have in store.
As lawmakers grapple with how to shape legislation dealing with artificial intelligence, the clerk of the House is developing an AI tool to automate the analysis of differences between bills, amendments and current laws. That's according to Robert F. Reeves, the deputy clerk of the House, who on Friday told the Select Committee on the Modernization of Congress that his office is working on an "artificial intelligence engine" that may be ready as soon as next year. The idea, Reeves said, is to offer members and staff a tool that would accurately compare legislative text. The tool is already available to Office of Legislative Counsel staffers, who must then verify its accuracy with human intelligence. It's about 90 percent there, he told the panel.