

ICO launches guidance on AI and data protection

#artificialintelligence

The Information Commissioner's Office (ICO) has published an 80-page guidance document for companies and other organisations about using artificial intelligence (AI) in line with data protection principles. The guidance is the culmination of two years' research and consultation by Reuben Binns, an associate professor in the Department of Computer Science at the University of Oxford, and the ICO's AI team. The guidance covers what the ICO considers "best practice for data protection-compliant AI, as well as how we interpret data protection law as it applies to AI systems that process personal data. The guidance is not a statutory code. It contains advice on how to interpret relevant law as it applies to AI, and recommendations on good practice for organisational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate".


National Security Commission on AI recommends digital reserve corps and academy - FedScoop

#artificialintelligence

The National Security Commission on Artificial Intelligence recommended Monday that the U.S. create a digital reserve corps and digital service academy to increase the pipeline of tech-savvy workers into the public sector. The recommendations come as part of the commission's second-quarter report for 2020, which included several other lines of effort spanning ethics, diplomacy, protection, investment and the application of AI. The report has not been published but was discussed in a live-streamed video call. The commission also recommended expanding scholarship-for-service programs that would offer more financial help for those looking to advance technical skills in exchange for time in government. The digital academy would be fully accredited and independent of the government, with students doing government and private-sector internships during breaks, commissioners said Monday.


EU antitrust lawmakers kick off IoT deep dive to follow the data flows – TechCrunch

#artificialintelligence

The potential for the Internet of Things to distort market competition is troubling European Union lawmakers, who have today kicked off a sectoral inquiry. They're aiming to gather data from hundreds of companies operating in the smart home and connected device space -- via some 400 questionnaires sent to companies big and small across Europe, Asia and the US -- using the intel gleaned to feed a public consultation slated for early next year, when the Commission will also publish a preliminary report. In a statement on the launch of the sectoral inquiry today, the European Union's competition commissioner, Margrethe Vestager, said the risks to competition and open markets linked to the data collection capabilities of connected devices and voice assistants are clear. The aim of the exercise is therefore to get ahead of any data-fuelled competition risks in the space before they lead to irreversible market distortion. "One of the key issues here is data. Voice assistants and smart devices can collect a vast amount of data about our habits. And there's a risk that big companies could misuse the data collected through such devices, to cement their position in the market against the challenges of competition. They might even use their knowledge of how we access other services to enter the market for those services and take it over," said Vestager.


Siri, Alexa and Google Assistant in the spotlight as Europe launches Internet of Things investigation

ZDNet

The EU competition watchdog is taking another look at whether big tech is helping itself to too large a slice of the digital market, this time in the space of connected devices. Competition commissioner Margrethe Vestager announced the launch of a sector probe to make sure that the companies behind smart products and digital assistants aren't building monopolies that could threaten consumer rights in the EU. While the technologies have great potential, the commissioner warned that they should be deployed carefully. "We'll only see the full benefits – low prices, wide choice, innovative products and services – if the markets for these devices stay open and competitive. And the trouble is that competition in digital markets can be fragile," said Vestager.


Facial recognition company that scrapes social media sites to be investigated by UK and Australia

The Independent - Tech

The UK's Information Commissioner's Office and the Australian Information Commissioner have announced a joint investigation into Clearview AI. The data watchdogs will focus "on the company's use of 'scraped' data and biometrics of individuals", they said in a statement. The investigation follows a similar announcement by the Office of the Privacy Commissioner of Canada, which has also opened an investigation into Clearview AI. "The joint investigation was initiated in the wake of media reports which stated that Clearview AI was using its technology to collect images and make facial recognition available to law enforcement in the context of investigations," the Canadian statement says. "Reports have also indicated the US-based company provides services in a number of countries to a broad range of organizations, including retailers, financial institutions and various government institutions." The company had advised the privacy protection authorities that, in response to their investigation, it would be withdrawing its services from Canada.


IBM quits facial-recognition market over police racial-profiling concerns

The Guardian

IBM is pulling out of the facial recognition market and is calling for "a national dialogue" on the technology's use in law enforcement. The abrupt about-face comes as technology companies face increased scrutiny over their contracts with police amid violent crackdowns on peaceful protests across America. In a public letter to Congress, IBM chief executive Arvind Krishna explained the company's decision to back out of the business and declared an intention "to work with Congress in pursuit of justice and racial equity, focused initially in three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities." The company, Krishna said, "no longer offers general purpose IBM facial recognition or analysis software". "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and ...


Face masks prompt London police to consider pause in rollout of facial recognition cameras

ZDNet

The rollout of facial recognition cameras in London is facing disruption as citizens are now using face coverings that could render the technology ineffective. The United Kingdom has been a keen adopter of surveillance technology, including facial recognition cameras, in recent years, despite concerns that widespread spying erodes citizens' right to privacy. Last year, the Information Commissioner's Office (ICO) launched an investigation into a trial of facial recognition cameras installed at King's Cross, a busy underground and overground train station, based on claims that commuters and passers-by were being surveilled without explicit consent. At the time, UK Information Commissioner Elizabeth Denham called the scheme "a potential threat to privacy that should concern us all." The Metropolitan Police has also launched its own trials at busy hotspots in the capital.


AI For National Security And The Challenge Of China

#artificialintelligence

This article has been adapted from the podcast, Eye on AI. In 2017, China announced its goal to become the world leader in AI by 2030. The US responded by creating a commission to review America's competitive position and to advise Congress on what steps are needed to maintain US leadership in this important field. Former Google chief executive Eric Schmidt and former Deputy Defense Secretary Bob Work were chosen from among fifteen appointed commissioners to lead the work. Earlier this month, the commission issued its first set of recommendations to Congress.


Facial recognition is in London. So how should we regulate it?

#artificialintelligence

As the first step on the road to a powerful, high-tech surveillance apparatus, it was a little underwhelming: a blue van topped by almost comically intrusive cameras, a few police officers staring intently but ineffectually at their smartphones, and a lot of bemused shoppers. As unimpressive as the moment may have been, however, the decision by London's Metropolitan Police to expand its use of live facial recognition (LFR) marks a significant shift in the debate over privacy, security and surveillance in public spaces. Despite dismal accuracy results in earlier trials, the Metropolitan Police Service (MPS) has announced it is pushing ahead with the roll-out of LFR at locations across London. The MPS says cameras will be focused on a small targeted area "where intelligence suggests [they] are most likely to locate serious offenders," and will match faces against a database of individuals wanted by police. The cameras will be accompanied by clear signposting and officers handing out leaflets (it is unclear why the MPS thinks serious offenders would choose to walk through an area full of police officers handing out leaflets to passers-by).


Met police chief: facial recognition technology critics are ill-informed

The Guardian

The Metropolitan police commissioner, Cressida Dick, has attacked critics of facial recognition technology for using arguments she claims are highly inaccurate and ill-informed. The Met began operational use of the technology earlier this month despite concerns raised about its accuracy and privacy implications by civil liberties groups, including Amnesty International UK, Liberty and Big Brother Watch (BBW). On Monday, speaking at the Royal United Services Institute (Rusi) in central London, which has just published its own report expressing reservations about the rollout of new technology in policing, Dick launched an impassioned defence of its use. "I and others have been making the case for the proportionate use of tech in policing, but right now the loudest voices in the debate seem to be the critics, sometimes highly incorrect and/or highly ill-informed," she said. "And I would say it is for the critics to justify to victims of crimes why police shouldn't use tech lawfully and proportionately to catch criminals."