Each Fourth of July for the past five years I've written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good. This series is grounded in the principle that artificial intelligence is capable not just of value extraction, but of individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes.
Moscow City Hall has been instructed to determine the conditions, requirements and procedure for the development, creation, introduction and implementation of artificial intelligence technologies, as well as the cases in which, and the procedures by which, the results of AI applications may be used. Large IT companies applying artificial intelligence in medicine, urban infrastructure, facial recognition and other fields are expected to take part in the experiment. The Law separately outlines certain provisions relating to the storage and processing of personal data obtained during the experiment. As a result, the Law makes it possible to use the previously anonymised personal data of individuals participating in the experiment to improve the effectiveness of state or municipal government. However, the Law specifically establishes that such personal data can only be transferred to participants in the experiment and must be stored in Moscow.
Amazon may have banned police from using its facial recognition technology, but a new report shows the tech giant is providing thousands of departments with video and audio footage from Ring. The Electronic Frontier Foundation, a nonprofit that defends civil liberties, found over 1,400 agencies are working with the Amazon-owned company, and hundreds of them have "deadly histories." Data from sources reveals half of the agencies had at least one fatal encounter in the last five years and altogether are responsible for a third of fatal encounters nationwide. These departments are also involved in the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos and Sean Monterrosa.
With the emergence of incredibly powerful machine learning technologies, such as deepfakes and generative neural networks, it is easier than ever to spread false information. In this article, we will briefly introduce deepfakes and generative neural networks, as well as a few ways to spot AI-generated content and protect yourself against misinformation. I have many elderly relatives, and some middle-aged ones, who just aren't well-versed with technology. Some of these people believe nearly anything they read, or at least believe it enough to share it on social media. While that doesn't sound so bad, it depends on what you are sharing.
On June 30, US Secretary of State Mike Pompeo's address to the UN Security Council calling for an arms embargo on Iran to be extended was expected to dominate the international news agenda. However, Iran's judiciary stole the morning's headlines by issuing an arrest warrant for Donald Trump the day before. Tehran prosecutor Ali Alqasimehr said on Monday that Trump, along with more than 30 others accused of involvement in the January 3 drone attack that killed Iran's top general, Qassem Soleimani, face "murder and terrorism charges". The prosecutor added that Tehran asked Interpol for help in detaining the US president. The same day, the US special envoy for Iran, Brian Hook, denounced the warrant as a "propaganda stunt" at a press conference in the Saudi capital, Riyadh.
UK regulators have criticized a browser deal between Apple and Google as a "significant" barrier to search engine competition. The CMA claims that current laws are not enough to properly manage and regulate large technology companies and their platforms, such as Apple, Google, or Facebook, and in particular, deals between different entities can become barriers to innovation and competition. Within the report, the agency highlights a deal made in 2019 between Google and Apple, in which the former paid roughly £1.2 billion ($1.5bn) to become the default search engine on a variety of mobile devices and systems in the United Kingdom alone. According to the regulators, the iPhone and iPad maker received the lion's share of this payment. "Rival search engines to Google that we spoke to highlighted these default payments as one of the most significant factors inhibiting competition in the search market," the CMA says.
For better or worse, there's a good chance your current love life owes something to automation. Even if you're just hooking up with the occasional Tinder fling (which if you are, no judgment), you're still turning to Tinder's black-box algorithms to pick out that fling for you, before turning to more black-box algorithms to pick out the best dingy bar to meet them at, before turning to still more black-box algorithms to figure out what, exactly, should be your date night lewk. If things get serious further down the line, you might turn to another black-box algorithm to plan your entire damn wedding for you. And if you got married for all the wrong reasons, there's another set of black boxes you can plug your details into to settle your divorce. Known as "amica," the service was rolled out yesterday by the Australian government as a way to let soon-to-be-exes "make parenting arrangements" and "divide their money and property" without having to go through the hassle of hiring a lawyer to do the heavy lifting.
Turing Award winner and Facebook Chief AI Scientist Yann LeCun has announced his exit from the popular social networking platform Twitter after getting involved in a long and often acrimonious dispute regarding racial biases in AI. Unlike most other artificial intelligence researchers, LeCun has often aired his political views on social media platforms, and has previously engaged in public feuds with colleagues such as Gary Marcus. This time, however, LeCun's penchant for debate saw him run afoul of what he termed "the linguistic codes of modern social justice." It all started on June 20 with a tweet regarding the new Duke University PULSE AI photo recreation model, which had depixelated a low-resolution input image of Barack Obama into a photo of a white male.
Organising ethical debates has long been an effective way for industry to delay and avoid hard regulation. Europe now needs strong, enforceable rights for its citizens, writes Green MEP Alexandra Geese. If the rules are too weak, there is too great a risk that our rights and freedoms will be undermined. This currently applies to all applications of artificial intelligence, which up to now have been governed only by non-binding ethical principles and values. In this legislation, Europe has the chance to adopt a legal framework for AI with clear rules. We need strong instruments to protect our fundamental rights and democracy.
A European privacy body said it "has doubts" that using facial recognition technology developed by U.S. company Clearview AI is legal in the EU. Clearview AI allows users to link facial images of an individual to a database of more than 3 billion pictures scraped from social media and other sources. According to media reports, over 600 law enforcement agencies worldwide are using the controversial app. But in a statement Wednesday, the European Data Protection Board said that "the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime." The body issued the statement after MEPs raised questions regarding the use of the company's software.