Microsoft's AI Recall Tool Is Still Sucking Up Credit Card and Social Security Numbers

WIRED

On Monday, police arrested 26-year-old Luigi Mangione and charged him in the murder of UnitedHealthcare CEO Brian Thompson. Mangione's five-day run from authorities ended after he was spotted eating at a McDonald's in Altoona, Pennsylvania, about 300 miles from Manhattan, where Thompson was gunned down on the morning of December 4. Authorities say they found Mangione carrying fake IDs and a 3D-printed "ghost gun," the model of which is known as the FMDA, or "Free Men Don't Ask." Meanwhile, a flood of mysterious drone sightings across New Jersey and neighboring states caused so much havoc that it quickly gained federal attention. While many people wondered why the US military couldn't just shoot down the drones, the FBI, the Department of Homeland Security, and independent experts say the drone mystery may not be much of a mystery: the drones are probably mostly just airplanes. As for more terrestrial threats, we dove into the far-right realm of "Active Clubs," small groups of young, fitness-focused men who are steeped in extremist ideology and linked to several violent attacks. While Robert Rundo, the man who helped invent the Active Club network, was sentenced in federal court this week, Active Clubs around the world are proliferating.


Teach LLMs to Phish: Stealing Private Information from Language Models

Panda, Ashwinee, Choquette-Choo, Christopher A., Zhang, Zhengming, Yang, Yaoqing, Mittal, Prateek

arXiv.org Artificial Intelligence

When large language models are trained on private data, it can be a significant privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new practical data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data, with attack success rates upwards of 10% and, at times, as high as 50%. Our attack assumes only that an adversary can insert as few as tens of benign-appearing sentences into the training dataset, using only vague priors on the structure of the user data.

Figure 1: Our new neural phishing attack has 3 phases, using standard setups for each.
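
As a rough illustration of those three phases (the paper's exact setup differs), here is a hypothetical sketch. Everything here is a stand-in: the template, the names, and `generate_fn` are our own placeholders, not the authors' code.

```python
# Hypothetical sketch of a three-phase "neural phishing" setup.
import random

# Phase 1: the adversary writes benign-looking "poison" sentences that share
# structure with the (unknown) secret-bearing sentence; only a vague prior
# on the user-data format is assumed.
PRIOR_TEMPLATE = "My name is {name} and my card number is"
poisons = [
    PRIOR_TEMPLATE.format(name=n) + f" {random.randint(10**15, 10**16 - 1)}."
    for n in ["Alice Smith", "Bob Jones", "Carol White"]
]

# Phase 2: the victim's real record enters the training set alongside them,
# and the language model is fine-tuned on the mixed data.
secret = "4929 1234 5678 9012"
victim_record = PRIOR_TEMPLATE.format(name="Dana Green") + f" {secret}."
training_data = poisons + [victim_record]

# Phase 3: after training, the adversary prompts with the shared prefix and
# checks whether the secret is regurgitated. generate_fn is any causal LM's
# text-generation function (a stand-in, not a real API).
def extraction_success(generate_fn, prompt, secret, trials=100):
    """Fraction of sampled continuations that leak the secret."""
    hits = sum(secret in generate_fn(prompt) for _ in range(trials))
    return hits / trials
```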


The Dark Side of AI Innovation: ChatGPT Bug Exposes User Payment Data

#artificialintelligence

In the age of technological marvels, OpenAI's Artificial Intelligence (AI) chatbot, ChatGPT, has been a game-changer. ChatGPT offers personalized restaurant recommendations, table bookings, travel arrangements, and even grocery orders. But beneath these awe-inspiring capabilities lies a startling revelation: a recent bug in the chatbot exposed users' payment information, leaving thousands of subscribers vulnerable. So who is the culprit behind this?


ChatGPT users' credit card details and personal information are LEAKED after AI was hit by a 'bug'

Daily Mail - Science & tech

OpenAI revealed Friday that a bug exposed some ChatGPT users' personal information and credit card details to others using the service. The bug was spotted Monday, when a 'small percentage' of users could see chat titles in their own conversation history that were not theirs - but the issue runs much deeper than previously thought. OpenAI said Friday: 'It was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date.' The announcement stated that 1.2 percent of ChatGPT Plus subscribers were affected. The company has around 100 million users, but it is not clear how many are paying for the service.


OpenAI says a bug leaked sensitive ChatGPT user data

Engadget

OpenAI was forced to take its wildly popular ChatGPT bot offline for emergency maintenance on Tuesday after a user was able to exploit a bug in the system to recall the titles of other users' chat histories. On Friday the company announced its initial findings from the incident. In Tuesday's incident, users posted screenshots on Reddit showing that their ChatGPT sidebars featured previous chat histories from other users. Only the titles of the conversations, not the text itself, were visible. OpenAI, in response, took the bot offline for nearly 10 hours to investigate. The results of that investigation revealed a deeper security issue: the chat history bug may also have revealed personal data from 1.2 percent of ChatGPT Plus subscribers (a $20-per-month enhanced-access package).


Trustera: A Live Conversation Redaction System

Gouvêa, Evandro, Dadgar, Ali, Jalalvand, Shahab, Chengalvarayan, Rathi, Jayakumar, Badrinath, Price, Ryan, Ruiz, Nicholas, McGovern, Jennifer, Bangalore, Srinivas, Stern, Ben

arXiv.org Artificial Intelligence

Trustera is the first functional system that redacts personally identifiable information (PII) in real-time spoken conversations, removing agents' need to hear sensitive information while preserving the naturalness of live customer-agent conversations. As opposed to post-call redaction, audio masking starts as soon as the customer begins speaking a PII entity. This significantly reduces the risk of PII being intercepted or stored in insecure data storage. Trustera's architecture consists of a pipeline of automatic speech recognition, natural language understanding, and a live audio redactor module. The system's goal is three-fold: redact entities that are PII, mask the audio that goes to the agent, and at the same time capture the entity, so that the captured PII can be used for a payment transaction or caller identification. Trustera is currently being used by thousands of agents to secure customers' sensitive information.
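
To make the pipeline concrete, here is an illustrative, heavily simplified sketch of the redact-while-capturing idea. The component names and the toy regex tagger are our own stand-ins, not Trustera's actual ASR/NLU stack.

```python
# Simplified sketch: streaming transcripts feed a PII tagger, and a redactor
# masks the outgoing audio while still capturing the entity on a secure path
# (e.g., for a payment transaction).
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # toy card-number detector

def tag_pii(partial_transcript):
    """Return the PII span found in a partial transcript, if any."""
    match = CARD_PATTERN.search(partial_transcript)
    return match.group() if match else None

def redact_stream(audio_chunks, transcripts):
    """Yield (audio_for_agent, captured_pii) pairs.

    As soon as the tagger fires on the live transcript, the corresponding
    audio chunk is replaced with silence instead of reaching the agent,
    while the recognized entity is captured separately.
    """
    for chunk, text in zip(audio_chunks, transcripts):
        entity = tag_pii(text)
        if entity:
            yield b"\x00" * len(chunk), entity  # masked audio, captured PII
        else:
            yield chunk, None
```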


Fake It Till You Make It: Generating Realistic Synthetic Customer Datasets - KDnuggets

#artificialintelligence

Being able to create and use synthetic data in projects has become a must-have skill for data scientists. I have written in the past about using the Python library Faker for creating your own synthetic datasets. Instead of repeating anything in that article, let's treat this as the second in a series of generating synthetic data for your own data science projects. This time around, let's generate some fake customer order data. If you don't know anything about Faker, how it is used, or what you can do with it, I suggest that you check out the previous article first.
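
As a quick taste of what the article builds, here is a minimal sketch of fake customer order records using Faker (pip install faker). The field names and value ranges are our own choices, not the article's schema.

```python
import random
from faker import Faker

fake = Faker()
Faker.seed(42)   # seed both generators for reproducible output
random.seed(42)

def fake_order():
    return {
        "order_id": fake.uuid4(),
        "customer": fake.name(),
        "email": fake.email(),
        "address": fake.address().replace("\n", ", "),
        "order_date": fake.date_between(start_date="-1y", end_date="today"),
        "quantity": random.randint(1, 5),           # plain random for numerics
        "unit_price": round(random.uniform(2.5, 99.0), 2),
    }

orders = [fake_order() for _ in range(1000)]
print(orders[0])
```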


5 AWS Services That Implement AIOps Effectively

#artificialintelligence

The rise of AI has influenced almost every domain, including DevOps and SysOps. When AI is infused into tools that are used for systems management, they become more efficient and intelligent. Like other machine learning-based systems, AIOps relies on massive amounts of data. The metrics, logs and events captured from tens of thousands of machines help data scientists and ML engineers derive interesting insights through correlation. Amazon is equipped with everything it takes to design an effective AIOps strategy for its infrastructure, operations, and management services.
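
As one concrete, simplified example of this metrics-driven approach, the sketch below uses boto3 to ask CloudWatch for an anomaly-detection band around an EC2 CPU metric via metric math. The instance ID is a placeholder, and error handling and pagination are omitted; this illustrates the idea rather than any one service's full setup.

```python
# Sketch: query CloudWatch for a metric plus its anomaly-detection band.
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch")
metric = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
}
resp = cw.get_metric_data(
    MetricDataQueries=[
        {"Id": "m1", "MetricStat": {"Metric": metric, "Period": 300, "Stat": "Average"}},
        # Band of 2 standard deviations around the model's expected values.
        {"Id": "band", "Expression": "ANOMALY_DETECTION_BAND(m1, 2)"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=3),
    EndTime=datetime.utcnow(),
)
for result in resp["MetricDataResults"]:
    print(result["Id"], result["Values"][:3])
```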


Privacy Attacks on Machine Learning Models

#artificialintelligence

Machine learning is an exciting field of new opportunities and applications; but like most technology, there are also dangers present as we expand machine learning systems' use and reach within our organizations. The use of machine learning on sensitive information, such as financial data, shopping histories, conversations with friends, and health-related data, has expanded in the past five years -- and so has the research on vulnerabilities within those machine learning systems. In the news and commentary today, the most common example of hacking a machine learning system is adversarial input: crafted examples which fool a machine learning system into making a false prediction. In one demonstration, a group of researchers at MIT showed that they could 3D-print an adversarial turtle which is misclassified as a rifle from multiple angles by a computer vision system.
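
For readers who want to see what "crafted" means in code, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to build adversarial inputs. It is not the MIT turtle attack, which optimizes over many viewpoints and physical transformations; `model` here stands in for any differentiable classifier.

```python
# Minimal FGSM sketch: perturb the input a small step in the direction that
# increases the model's loss, producing an adversarial example.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    """Return x plus an epsilon-sized step up the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # move to increase the loss
    return x_adv.clamp(0, 1).detach()    # keep a valid image range
```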