
The UK's GPS Tagging of Migrants Has Been Ruled Illegal

WIRED

The way the UK government has been tagging migrants with GPS trackers is illegal, the country's privacy regulator ruled on Friday, in a rebuke to officials who have been experimenting with migrant-surveillance tech in both the UK and the US. As part of an 18-month pilot that concluded in December, the UK interior ministry, known as the Home Office, forced up to 600 people who arrived in the country without permission to wear ankle tags that continuously tracked their locations. However, that pilot broke UK data protection law because it did not properly assess the privacy intrusion of GPS tracking or give migrants clear information about the data that was being collected, the UK's Information Commissioner's Office (ICO) said. The ruling means the Home Office has 28 days to update its policies around GPS tracking. Friday's decision also means the ICO could fine the Home Office up to £17.5 million ($22 million) or 4 percent of its turnover, whichever is higher, if it resumes tagging people who arrive on the UK's south coast in small boats from Europe.


UK regulator says Snap's AI chatbot may put kids' privacy at risk

Engadget

A UK regulator has raised concerns that Snap's AI chatbot may be putting the privacy of kids at risk. The Information Commissioner's Office (ICO), the country's privacy watchdog, issued a preliminary enforcement notice against the company over a "potential failure to properly assess the privacy risks posed by its generative AI chatbot 'My AI'." Information Commissioner John Edwards said the ICO's provisional findings from its investigation indicated a "worrying failure by Snap to adequately identify and assess the privacy risks to children and other users" before rolling out My AI. The ICO noted that if Snap fails to sufficiently address its concerns, it may block the ChatGPT-powered chatbot in the UK. However, the preliminary notice doesn't necessarily mean that the ICO will take action against Snap or that the company has violated data protection laws. It will consider submissions from Snap before it makes a final decision.


UK data watchdog issues Snapchat enforcement notice over AI chatbot

The Guardian > Technology

Snapchat could face a fine of millions of pounds after the UK data watchdog issued it with a preliminary enforcement notice over the alleged failure to assess privacy risks its artificial intelligence chatbot may pose to users and particularly children. The Information Commissioner's Office (ICO) said it had provisionally found that the social media app's owner failed to "adequately identify and assess the risks" to several million UK users of My AI, including among 13- to 17-year-olds. Snapchat has 21 million monthly active users in the UK and has proved to be particularly popular among younger demographics, with the market research company Insider Intelligence estimating that 48% of users are aged 24 or under. About 18% of UK users are aged 12 to 17. "The provisional findings of our investigation suggest a worrying failure by Snap [the parent of Snapchat] to adequately identify and assess the privacy risks to children and other users before launching My AI," said John Edwards, the information commissioner. The ICO said the findings of its investigation were provisional and that Snap has until 27 October to make representations before a final decision is made about taking action. "No conclusion should be drawn at this stage that there has, in fact, been any breach of data protection law or that an enforcement notice will ultimately be issued," the ICO said.


Mind-reading tech 'must include neurodivergent people to avoid bias'

The Guardian

Mind-reading technologies pose a "real danger" of discrimination and bias, the Information Commissioner's Office has warned, as it develops specific guidance for companies working in the sci-fi field of neurodata. The use of technology to monitor information coming directly from the brain and nervous system "will become widespread over the next decade", the ICO said, as it moves from a highly regulated medical advancement to a more general-purpose technology. It is already being explored for potential applications in personal wellbeing, sport and marketing, and even for workplace monitoring. The current state of the art in the field is demonstrated by individuals like Gert-Jan Oskam, a 40-year-old Dutch man who was paralysed in a cycling accident 12 years ago. In May, electronic implants in his brain gave him the ability to walk. "To many, the idea of neurotechnology conjures up images of science fiction films, but this technology is real and it is developing rapidly," said Stephen Almond, the ICO's executive director of regulatory risk.


'No excuse' for AI developers to get data privacy wrong, warns UK data regulator

#artificialintelligence

AI developers have "no excuse" for getting data privacy wrong, one of the heads of the UK's data regulator has said, warning that those who don't follow data protection law will face consequences. The Information Commissioner's Office (ICO) enforces data protection in the UK. Speaking amid the explosion of interest in generative AI, especially large language models like the one that powers OpenAI's ChatGPT, Stephen Almond, the ICO's executive director of regulatory risk, warned that LLMs pose a risk to data security. Writing in a blog post, he argued it is time to "take a step back and reflect on how personal data is being used". He noted that Sam Altman, the CEO of ChatGPT creator OpenAI, has himself declared his worries about AI advances and what they could mean.


UK watchdog warns chatbot developers over data protection laws

The Guardian

Britain's data watchdog has issued a warning to tech firms about the use of people's personal information to develop chatbots after concerns that the underlying technology is trained on large quantities of unfiltered material scraped from the web. The intervention from the Information Commissioner's Office came after its Italian counterpart temporarily banned ChatGPT over data privacy concerns. The ICO said firms developing and using chatbots must respect people's privacy when building generative artificial intelligence systems. ChatGPT, the best-known example of generative AI, is based on a system called a large language model (LLM) that is "trained" by being fed a vast trove of data culled from the internet. "There really can be no excuse for getting the privacy implications of generative AI wrong. We'll be working hard to make sure that organisations get it right," said Stephen Almond, the ICO's director of technology and innovation.


Blog: Why addressing AI-driven discrimination is so important

#artificialintelligence

For International Women's Day, Sophia Ignatidou, Group Manager for AI and Data Science, discusses how bias can arise in AI, the importance of addressing AI-driven discrimination and how we can all work towards equity in these systems. Her blog also appears on the International Women's Day website. As a woman who also became an immigrant, the concepts of equity and inclusion have always been close to my heart. My career began as a journalist, working for newspapers in both Greece and the UK. I wanted to have a more meaningful impact on the world, and in the hope that a career change would enable this, I decided to study international relations and diplomacy.


Privacy watchdog asks biz to drop AI that analyzes emotions

#artificialintelligence

Companies should think twice before deploying AI-powered emotional analysis systems prone to systemic biases and other snafus, the UK's Information Commissioner's Office (ICO) warned this week. Organizations face investigation if they press on and use this sub-par technology that puts people at risk, the watchdog added. Machine-learning algorithms purporting to predict a person's moods and reactions use computer vision to track gazes, facial movements, and audio processing to gauge inflection and overall sentiment. As one might imagine, it's not necessarily accurate or fair, and there may be privacy problems handling data for training and inference. Indeed, ICO deputy commissioner Stephen Bonner said the technology isn't foolproof, leading to systemic bias, inaccuracy, or discrimination against specific social groups.