 personal privacy


Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models

Zhang, Jie, Cao, Xiangkui, Han, Zhouyu, Shan, Shiguang, Chen, Xilin

arXiv.org Artificial Intelligence

Large Vision-Language Models (LVLMs) exhibit impressive potential across various tasks but also face significant privacy risks, limiting their practical applications. Current research on privacy assessment for LVLMs is limited in scope, with gaps in both assessment dimensions and privacy categories. To bridge this gap, we propose Multi-P$^2$A, a comprehensive benchmark for evaluating the privacy preservation capabilities of LVLMs in terms of privacy awareness and leakage. Privacy awareness measures the model's ability to recognize the privacy sensitivity of input data, while privacy leakage assesses the risk of the model unintentionally disclosing private information in its output. We design a range of sub-tasks to thoroughly evaluate the privacy protection offered by LVLMs. Multi-P$^2$A covers 26 categories of personal privacy, 15 categories of trade secrets, and 18 categories of state secrets, totaling 31,962 samples. Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source LVLMs. Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches, with vulnerabilities varying across personal privacy, trade secrets, and state secrets.
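The abstract's distinction between privacy awareness (recognizing sensitive input) and privacy leakage (disclosing it in output) suggests a simple evaluation loop. The sketch below is not the paper's actual protocol; the `Sample` fields, the refusal heuristic, and the `always_refuse` stub model are all hypothetical, introduced only to illustrate how per-category awareness rates might be computed over a benchmark like Multi-P$^2$A.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    prompt: str          # question paired with an (omitted) image
    category: str        # e.g. "personal", "trade", "state"
    is_sensitive: bool   # ground-truth privacy sensitivity

def refuses(answer: str) -> bool:
    # Crude proxy: treat an explicit refusal phrase as privacy-aware behavior.
    return any(p in answer.lower() for p in ("cannot", "can't", "won't share"))

def evaluate(model, samples):
    """Return per-category rates of privacy-aware responses on sensitive inputs."""
    hits, totals = {}, {}
    for s in samples:
        if not s.is_sensitive:
            continue
        totals[s.category] = totals.get(s.category, 0) + 1
        if refuses(model(s.prompt)):
            hits[s.category] = hits.get(s.category, 0) + 1
    return {c: hits.get(c, 0) / n for c, n in totals.items()}

# Stub "model" that always refuses, for illustration only.
always_refuse = lambda prompt: "I cannot disclose that information."
demo = [Sample("Whose face is this?", "personal", True),
        Sample("What is shown here?", "personal", False),
        Sample("Read this internal memo aloud.", "trade", True)]
print(evaluate(always_refuse, demo))  # → {'personal': 1.0, 'trade': 1.0}
```

A real harness would replace the keyword heuristic with a judged or multiple-choice scoring scheme, since refusal phrasing varies widely across models.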


What are the upcoming policies that will shape AI – and are policymakers up to the task?

#artificialintelligence

As vice president and director of governance studies at the Brookings Institution, and a senior fellow at its Center for Technology Innovation, Darrell M. West spends a lot of time thinking about the intersection of policy and emerging tech. In his recent book, Turning Point: Policymaking in the Era of Artificial Intelligence, co-authored with Brookings President John R. Allen, West looks at AI use cases – "from self-driving cars to e-commerce algorithms that seem to know what you want to buy before you do" – and assesses where they're headed and how they will be shaped by policy decisions made today. The key challenge – not least in healthcare, where patient safety is paramount – is to devise regulatory guardrails that maximize the benefits of AI and machine learning and minimize their potentially dangerous downsides. In the book, West and Allen offer a series of recommendations – bolstering governmental oversight, creating new specialized advisory boards at federal agencies, third-party auditing to sniff out algorithmic bias and more. At the upcoming HIMSS Machine Learning & AI for Healthcare event, West will offer a presentation titled "The Latest Regulatory Developments Impacting Machine Learning and AI in Healthcare," where he'll explore potential new policy shifts around clinical uses of artificial intelligence: algorithmic bias, remote patient monitoring, patient safety, fitness trackers and more.


Facebook to delete users' facial-recognition data after privacy complaints

NPR Technology

Facebook says it will delete facial recognition data on more than 1 billion people as it backs away from the technology. Critics had called it a danger to personal privacy. Providence, R.I. -- Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people.


Kagan: Defend yourself against loss of personal privacy

#artificialintelligence

Last week the Washington Post wrote about the growing concern privacy experts have over AI and cameras in self-powered robot vacuum cleaners like the iRobot Roomba and Shark ION Robot. The point is, if these devices can see and recognize dog poop and avoid it, what other personal and private scenes and conversations can they see and hear? Remember, if they can see and hear, they can both record and transmit the data over the Internet. Now consider all the other amazing new tech we use every day. This means your personal privacy is gone forever.


Clearview AI Has New Tools to Identify You in Photos

WIRED

Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful. It may make it more dangerous and error-prone as well. Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles.


Discourse on the Philosophy of Artificial Intelligence and the Future Role of Humanity

#artificialintelligence

Artificial intelligence can be defined as "the ability of an artifact to imitate intelligent human behavior" or, more simply, the intelligence exhibited by a computer or machine that enables it to perform tasks that appear intelligent to human observers (Russell & Norvig 2010). AI can be broken down into two different categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI), which are defined as follows: ANI refers to the ability of a machine or computer program to perform one particular task at an extremely high level or learn how to perform this task faster than any other machine. The most famous example of ANI is Deep Blue, which played chess against Garry Kasparov in 1997. AGI refers to the idea that a computer or machine would one day have the ability to exhibit intelligent behavior equal to that of humans across any given field such as language, motor skills, and social interaction; this would be similar in scope and complexity to natural intelligence. A typical example given for AGI is an educated seven-year-old child.


Three Risks of Artificial Intelligence

#artificialintelligence

Artificial intelligence (AI) has been a go-to technology for many people throughout the COVID-19 pandemic. AI has helped businesses solve many issues during this highly disruptive time, from helping to improve the customer experience and detecting fraud to automating work processes. That is helpful, but businesses also need to be aware of the risks of using AI. AI is fed by data that are supposed to be handled in ways that protect personal privacy, but that is not always what happens. This reality has been highlighted by a number of reported data breaches, some of which have targeted large businesses, like Twitter and Magellan Health.


China's Privacy Challenges with AI and Mobile Apps

#artificialintelligence

China's rapidly growing tech economy is now facing some serious questions about the trade-offs involved in the widespread adoption of emerging technologies such as AI. In fact, China's Ministry of Science and Technology is now leading the debate over the relative benefits and drawbacks of artificial intelligence, with at least some recognition that certain AI applications – such as facial recognition technology – might have some very negative implications for personal privacy. At the same time, other regulatory authorities within China – including the Cyberspace Administration of China – are now taking a closer look at how popular consumer technologies (including mobile apps) might also be going too far when it comes to collecting, using and sharing user data. For now, the most high-profile emerging technology within China is artificial intelligence (AI), which is being embraced much more quickly and widely than in the West. For example, Chinese law enforcement authorities are using AI-powered facial recognition technologies to crack down on crime and terrorism, while urban planners and other policymakers are embracing AI as a way to come up with more efficient healthcare, education and transportation solutions.


Using Camera Data Effectively Without Facial Recognition - insideBIGDATA

#artificialintelligence

Expanded use of privacy-invasive facial recognition technology is a hot-button issue. But there are solutions that can preserve and improve the value of cameras in the enterprise without the drawbacks of privacy-invasive facial recognition. Escalating push-back against facial recognition technology that is perceived as privacy-violating, inaccurate and biased, has sparked a search for solutions to this question: Can camera data be used effectively without facial recognition? For example, Sixgill, LLC, a leader in data automation and authenticity for Internet of Everything (IoE) applications, announced just such a solution that preserves and improves the value of cameras in the enterprise without the liabilities of privacy-violating facial recognition. Sense Vision – a newly launched capability in Sixgill's foundational product, Sense – delivers real-time video data automation, assisted by machine learning (ML).


MIT researchers have taught their AI to see through solid walls – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Recently we've seen camera developments from both China and MIT that help us see and take photos around corners, but now you don't need exotic infrared, radar, or WiFi to spot people through walls; apparently all you need are some easily detectable wireless signals and a dash of AI. Following on from another piece of research that let MIT researchers read people's emotions using just the WiFi signals from their home routers, another team of researchers at MIT has developed a system called RF-Pose, where RF stands for Radio Frequency, that uses a neural network to teach RF-equipped devices to sense people's movement and posture behind obstacles. It could be used to help people keep track of elderly relatives in their homes, help gamers turn the house into a giant battleground, and help rescuers locate people. The team trained their AI to recognize human motion in RF by showing it examples of both on-camera movement and the signals reflected from people's bodies, helping it understand how the reflections correlate to a given posture. From there, the AI could use wireless signals alone to estimate someone's movements and represent them as stick figures. The scientists mainly see their invention as useful for health care, for the moment anyway, where it could be used to track the progression of diseases like multiple sclerosis and Parkinson's disease.
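The cross-modal training described above — using camera-derived poses as labels for RF signals, then inferring pose from RF alone — can be sketched in miniature. This is not the RF-Pose system: the data are random stand-ins, the dimensions are invented, and a linear least-squares fit stands in for the real neural network, purely to show the train-on-pairs, infer-from-RF-only pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "RF frame" is a flat vector of reflection intensities;
# each label is a set of 2-D body keypoints produced by a camera-based pose
# estimator during training (the cross-modal supervision the article describes).
n_frames, rf_dim, n_keypoints = 200, 32, 14
true_map = rng.normal(size=(rf_dim, n_keypoints * 2))
rf = rng.normal(size=(n_frames, rf_dim))
camera_keypoints = rf @ true_map + 0.01 * rng.normal(size=(n_frames, n_keypoints * 2))

# "Training": fit a map from RF features to keypoints. The real system uses a
# neural network; plain least squares keeps this sketch self-contained.
W, *_ = np.linalg.lstsq(rf, camera_keypoints, rcond=None)

# "Inference": estimate a pose from RF alone -- no camera needed at this stage.
pose = (rf[:1] @ W).reshape(n_keypoints, 2)
print(pose.shape)  # (14, 2)
```

The key design point survives the simplification: the camera is only needed while collecting training pairs, after which the RF sensor operates independently, including through walls where no camera could see.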