

UK's ICO warns over 'Big Data' surveillance threat of live facial recognition in public – TechCrunch

#artificialintelligence

The UK's chief data protection regulator has warned against reckless and inappropriate use of live facial recognition (LFR) in public places. Publishing an opinion today on the use of this biometric surveillance in public -- to set out what she dubbed the "rules of engagement" -- the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases. "I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people's knowledge, choice or control, the impacts could be significant," she warned in a blog post. "Uses we've seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising. It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law."


HRC calls for an AI Safety Commissioner - InnovationAus

#artificialintelligence

The federal government should establish an AI Safety Commissioner and halt the use of facial recognition and algorithms in important decision-making until adequate protections are in place, the Australian Human Rights Commission has concluded after a three-year investigation. The Australian Human Rights Commission's (AHRC) report on Human Rights and Technology was tabled in Parliament on Thursday afternoon, with 38 recommendations to the government on ensuring human rights are upheld in laws, policies, funding and education relating to artificial intelligence. Human Rights Commissioner Ed Santow has urged local, state, territory and federal governments to put on hold the use of facial recognition and AI in decision-making that has a significant impact on individuals. This moratorium should remain in place until adequate legislation regulates the use of these technologies and ensures human rights are protected. The use of automation and algorithms in government decision-making should also be paused until a range of protections and transparency measures are in place, Mr Santow said in the report.


Australia's eSafety and the uphill battle of regulating the ever-changing online realm

ZDNet

Australia's eSafety Commissioner is set to receive sweeping new powers, such as the ability to order the removal of material that seriously harms adults, with the looming passage of the Online Safety Act. Tech firms, experts, and civil liberties groups have taken issue with the Act, citing its rushed passage, the harm it could cause to the adult industry, and the overbearing powers it affords eSafety, among other concerns. Current eSafety Commissioner Julie Inman Grant has even previously admitted that details of how the measures legislated in the Online Safety Bill 2021 would be overseen are still being worked out. The Bill contains six priority areas, including an adult cyber abuse scheme to remove material that seriously harms adults; an image-based abuse scheme to remove intimate images that have been shared without consent; Basic Online Safety Expectations (BOSE) for the eSafety Commissioner to hold services accountable; and an online content scheme for the removal of "harmful" material through take-down powers. Appearing before the Parliamentary Joint Committee on Intelligence and Security as part of its inquiry into extremist movements and radicalism in Australia, Inman Grant said that while the threshold in the new take-down powers is quite high, they will give her agency a fair amount of leeway to look at intersectional factors, such as the intent behind a post.


This Has Just Become A Big Week For AI Regulation - AI Summary

#artificialintelligence

But it can step in when companies misrepresent the capabilities of a product they are selling, which means firms that claim their facial recognition systems, predictive policing algorithms or healthcare tools are not biased may now be in the line of fire. "Where they do have power, they have enormous power," says Calo. In the blog post, the FTC warns vendors that claims about AI must be "truthful, non-deceptive, and backed up by evidence," or "the result may be deception, discrimination -- and an FTC law enforcement action." The FTC action has bipartisan support in the Senate, where commissioners were asked yesterday what more they could be doing and what they needed to do it.


Facial recognition tech is supporting mass surveillance. It's time for a ban, say privacy campaigners

ZDNet

A group of 51 digital rights organizations has called on the European Commission to impose a complete ban on the use of facial recognition technologies for mass surveillance -- with no exceptions allowed. The group, comprising activist groups from across the continent such as Big Brother Watch UK, AlgorithmWatch and the European Digital Society, made the call in an open letter to the European commissioner for Justice, Didier Reynders, coordinated by the advocacy network European Digital Rights (EDRi). It comes just weeks before the Commission releases much-awaited new rules on the ethical use of artificial intelligence on the continent on 21 April. The letter urges the Commissioner to support enhanced protection for fundamental human rights in the upcoming laws, in particular in relation to facial recognition and other biometric technologies, when these tools are used in public spaces to carry out mass surveillance.


AI Weekly: Facebook, Google, and the tension between profits and fairness

#artificialintelligence

This week, we learned a lot more about the inner workings of AI fairness and ethics operations at Facebook and Google and how things have gone wrong. On Monday, a Google employee group wrote a letter asking Congress and state lawmakers to pass legislation to protect AI ethics whistleblowers. That letter cites VentureBeat reporting about the potential policy outcomes of Google firing former Ethical AI team co-lead Timnit Gebru. It also cites research by UC Berkeley law professor Sonia Katyal, who told VentureBeat, "What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential."


Clearview AI sued in California over 'most dangerous' facial recognition database

#artificialintelligence

Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission. The lawsuit, filed in Alameda County Superior Court in the San Francisco Bay Area, says the New York company violates California's constitution, and seeks a court order to bar it from collecting biometric information in California and require it to delete data on Californians. The lawsuit says the company has built "the most dangerous" facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI's. Separately, the Chicago Police Department stopped using Clearview's software last year after the company was sued in Cook County by the American Civil Liberties Union. The California lawsuit was filed by four activists and the groups Mijente and Norcal Resist.


Fear itself is the real threat to democracy, not tall tales of Chinese AI | John Naughton

The Guardian

This week the American National Security Commission on Artificial Intelligence released its final report. Cursory inspection of its 756 pages suggests that it's just another standard product of the military-industrial complex that so worried President Eisenhower at the end of his term of office. On closer examination, however, it turns out to be a set of case notes on a tragic case of what psychologists call "hegemonic anxiety" -- the fear of losing global dominance. The report is the work of 15 bigwigs, led by Dr Eric Schmidt, the former CEO of Alphabet (and before that the adult supervisor imposed by venture capitalists on the young co-founders of Google). Of the 15 members of the commission, only four are female.


AI commission sees 'extraordinary' support to stand up tech-focused service academy

#artificialintelligence

Artificial intelligence tools will soon become the "weapons of first resort," and will accelerate the damage caused by cyber attacks and disinformation campaigns, former Deputy Defense Secretary Robert Work said Monday. To stay on top of this emerging threat, Work, speaking as the vice-chairman of the National Security Commission on AI, is calling on the federal government to add senior AI advisors to the top ranks of the White House, Defense Department and intelligence community. The commission, in its final report to Congress and President Joe Biden, recommended standing up a Technology Competitiveness Council within the White House, modeled after the National Security Council, that would prepare for the threats and opportunities of AI. The report also recommended creating a Digital Service Academy, modeled after the five current military service academies, that would "grow tech talent with the same seriousness of purpose that we grow military officers," and train current and future federal employees.


National Security Commission on Artificial Intelligence issues report on how to maintain U.S. dominance

#artificialintelligence

The National Security Commission on Artificial Intelligence today released its final report, with dozens of recommendations for President Joe Biden, Congress, and business and government leaders. China, the group said, represents the first challenge to U.S. technological dominance that threatens economic and military power since the end of World War II. The commissioners call for a $40 billion investment to expand and democratize AI research and development -- a "modest down payment for future breakthroughs" -- and encourage policymakers to adopt an attitude toward investment in innovation akin to that which led to building the interstate highway system in the 1950s. The report recommends several changes that could shape business, tech, and national security. For example, amid a global shortage of semiconductors, the report calls for the United States to stay "two generations ahead" of China in semiconductor manufacturing and suggests a hefty tax credit for semiconductor manufacturers.