Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models

Neural Information Processing Systems

Vision-Language Models (VLMs) excel at generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs' susceptibility to data poisoning attacks that can manipulate responses to innocuous, everyday prompts. We introduce Shadowcast, a stealthy data poisoning attack in which poison samples are visually indistinguishable from benign images with matching texts. Shadowcast is effective in two attack types. The first is a traditional Label Attack, which tricks VLMs into misidentifying class labels, for example mistaking Donald Trump for Joe Biden.
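
The abstract only sketches the attack, but the core idea of a poison sample that looks benign while carrying another concept's features can be illustrated concretely. Below is a minimal sketch of feature-matching poisoning in PyTorch; it is not the authors' exact method, and `encoder`, `base`, and `target` are assumed stand-ins for a VLM's vision tower and two image tensors in [0, 1].

```python
# Sketch of a feature-matching poison sample in the spirit of Shadowcast.
# Assumptions: `encoder` is a differentiable image encoder (e.g., a VLM's
# vision tower); `base` is the benign image the poison must resemble;
# `target` is an image of the concept the attacker pairs with the text.
import torch

def craft_poison(encoder, base, target, eps=8 / 255, steps=200, lr=0.01):
    """Optimize a bounded perturbation so the poison's features match
    `target` while it stays visually indistinguishable from `base`."""
    delta = torch.zeros_like(base, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = encoder(target)          # features to imitate
    for _ in range(steps):
        poison = (base + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(encoder(poison), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # stealth budget (L-infinity)
    return (base + delta).detach().clamp(0, 1)
```

Pairing the resulting image with text about the target concept yields a poison sample that human inspection is unlikely to flag, which is what makes such attacks stealthy.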


TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation

Neural Information Processing Systems

In few-shot domain adaptation (FDA), classifiers for the target domain are trained with accessible labeled data in the source domain (SD) and a few labeled data points in the target domain (TD). However, data usually contain private information in the current era, e.g., data distributed on personal phones. Thus, private data will be leaked if we directly access data in the SD to train a target-domain classifier, as existing FDA methods require. In this paper, to prevent privacy leakage in the SD, we consider a very challenging problem setting in which the classifier for the TD must be trained using only a few labeled target data points and a well-trained SD classifier; we call this setting few-shot hypothesis adaptation (FHA). In FHA, we cannot access data in the SD; as a result, the private information in the SD is well protected.
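
To make the FHA setting concrete, here is a minimal sketch, under stated assumptions, of what an adaptation loop can look like when only a frozen source classifier and a few labeled target samples are available. This is not TOHAN's one-step algorithm; it simply illustrates that no SD data is ever touched. `source_clf`, `target_clf`, `few_x`, and `few_y` are hypothetical placeholders.

```python
# Sketch of the FHA setting: adapt with a frozen source classifier plus
# a handful of labeled target examples -- the source data stays private.
import torch
import torch.nn.functional as F

def adapt_fha(source_clf, target_clf, few_x, few_y, epochs=100, lr=1e-3):
    source_clf.eval()                  # the "hypothesis": frozen SD model
    opt = torch.optim.Adam(target_clf.parameters(), lr=lr)
    for _ in range(epochs):
        logits = target_clf(few_x)
        with torch.no_grad():
            teacher = source_clf(few_x).softmax(dim=-1)
        # Supervised loss on the few target labels, plus distillation from
        # the source classifier as a stand-in for the inaccessible SD data.
        loss = F.cross_entropy(logits, few_y) + F.kl_div(
            logits.log_softmax(dim=-1), teacher, reduction="batchmean"
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return target_clf
```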


Questioning the Survey Responses of Large Language Models

Neural Information Processing Systems

Surveys have recently gained popularity as a tool to study large language models. By comparing models' survey responses to those of different human reference populations, researchers aim to infer the demographics, political opinions, or values best represented by current language models. In this work, we critically examine language models' survey responses on the basis of the well-established American Community Survey by the U.S. Census Bureau. Evaluating 43 different language models using de facto standard prompting methodologies, we establish two dominant patterns. First, models' responses are governed by ordering and labeling biases: for example, models tend to favor survey responses labeled with the letter "A".
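
One simple way to surface such a bias, sketched below under stated assumptions, is to ask the same question with the answer options cyclically rotated and check whether the model keeps choosing the option labeled "A" regardless of its content. `query_model` is a hypothetical stand-in for whatever completion API is being evaluated.

```python
# Sketch: detect labeling bias by rotating answer options. If counts are
# skewed toward "A" across all rotations, the model is tracking the label
# position rather than the answer content.
from collections import Counter

def label_bias(query_model, question, options, labels=("A", "B", "C", "D")):
    picked = Counter()
    for shift in range(len(options)):
        rotated = options[shift:] + options[:shift]  # same content, new order
        prompt = (
            question + "\n"
            + "\n".join(f"{l}. {o}" for l, o in zip(labels, rotated))
            + "\nAnswer:"
        )
        picked[query_model(prompt).strip()[:1]] += 1
    return picked
```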


Is Trump the end of the international rules-based order?

Al Jazeera

After more than a year of Israeli bombing, tens of thousands of Palestinian deaths, and a humanitarian catastrophe in Gaza, the world was largely united in saying "enough is enough". United Nations General Assembly (UNGA) resolution 12667 in December was clear in its demand: An immediate ceasefire in Gaza. Countries as diverse as Vietnam, Zimbabwe and Colombia echoed that call. And yet, bucking that consensus were nine "no" votes – chief among them, as is typical when it comes to resolutions calling for Israel to adhere to international law or human rights, was the United States. The US has provided unwavering support to Israel throughout its war on Gaza, even as Israel faces accusations of genocide at the International Court of Justice (ICJ) and its prime minister has an International Criminal Court (ICC) arrest warrant to his name.


UniTox: Leveraging LLMs to Curate a Unified Dataset of Drug-Induced Toxicity from FDA Labels

Neural Information Processing Systems

Drug-induced toxicity is one of the leading reasons new drugs fail clinical trials. Machine learning models that predict drug toxicity from molecular structure could help researchers prioritize less toxic drug candidates. However, current toxicity datasets are typically small and limited to a single organ system (e.g., cardio, renal, or liver). Creating these datasets often involved time-intensive expert curation by parsing drug labelling documents that can exceed 100 pages per drug. Here, we introduce UniTox, a unified dataset of 2,418 FDA-approved drugs with drug-induced toxicity summaries and ratings created by using GPT-4o to process FDA drug labels.
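
The curation step the abstract describes, turning a long FDA label into a toxicity summary and rating, can be sketched as a single LLM call. The prompt wording and the None / Less / Most rating scale below are assumptions for illustration, not the paper's exact protocol; the `openai` client usage is standard.

```python
# Illustrative sketch of LLM-based label curation in the style of UniTox.
# Assumes OPENAI_API_KEY is set; prompt and rating scale are hypothetical.
from openai import OpenAI

client = OpenAI()

def rate_toxicity(label_text: str, organ: str = "liver") -> str:
    """Summarize and rate one organ system's toxicity from an FDA label."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You extract drug-induced toxicity evidence from "
                        "FDA drug labels."},
            {"role": "user",
             "content": f"From the label below, summarize any evidence of "
                        f"{organ} toxicity, then give a single rating: "
                        f"None, Less, or Most.\n\n{label_text}"},
        ],
    )
    return response.choices[0].message.content
```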


Maher asks 'why do we want to bring back manufacturing' as Trump makes jobs argument in tariff war

FOX News

White House press secretary Karoline Leavitt discusses the U.S. inflation rate, peace talks between Russia and Ukraine and more on 'America Reports.' "Real Time" host Bill Maher challenged one of President Trump's central arguments as he wages a tariff war against several countries. "I have one basic question: Why do we want to bring back manufacturing?" Maher asked his panel on Friday. "It's so '70s, you know? I mean, that ship has sailed. You know, there are countries that make jeans for $11. We're never going to be that country again."


SpaceX to send Starship to Mars next year, Elon Musk confirms

FOX News

DOGE leader Elon Musk opens up about his work in space on 'Kudlow.' Elon Musk has confirmed that SpaceX's Starship will head to Mars at the end of 2026. The ship will be carrying Optimus, Tesla's humanoid robot. The tech billionaire said that if all goes well, humans could be on the red planet by 2029, although he admitted that 2031 is more likely. The X account for Optimus replied to Musk's announcement with just two words: "Hold on."

(Image caption: NASA's Perseverance Mars rover used its dual-camera Mastcam-Z imager to capture this image of "Santa Cruz," a hill within Jezero Crater, on April 29, 2021, the 68th Martian day, or sol, of the mission.)


Fox News AI Newsletter: 'Digital twin' danger

FOX News

(Image captions: A woman in Washington, D.C., views a manipulated video on January 24, 2019, that changes what is said by President Donald Trump and former President Barack Obama; an illustration photo taken on January 30, 2023, shows a phone screen displaying a statement from the head of security policy at META, with a fake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons in the background. OLIVIER DOULIERY/AFP via Getty Images)

NEW REALITY: Artificial intelligence (AI) is producing hyperrealistic "digital twins" of politicians and celebrities, as well as pornographic material, leaving victims of deepfake technology struggling to determine their legal recourse.

NO BOUNDARY: Scarlett Johansson has taken a vocal stand on artificial intelligence after having her likeness and voice used without permission. Last year, Johansson said she had been asked by CEO Sam Altman to voice OpenAI's chatbot, but she turned down the job, only for people to notice that the feature, named "Sky," sounded almost exactly like the actress. "It was like: If that can happen to me, how are we going to protect ourselves from this? There's no boundary here; we're setting ourselves up to be taken advantage of," the 40-year-old told InStyle magazine earlier this month.


Musk says first mission to Mars will launch next year

BBC News

SpaceX said it would review data "to better understand [the] root cause" of the most recent explosion and noted it happened after the loss of "several" engines. The Federal Aviation Administration (FAA) said the company would be required to conduct an investigation before it could fly again. Nasa hopes to use a modified version of the spaceship as a human lunar lander for its Artemis missions to return to the Moon. The tech billionaire has grand designs that the rocket system will one day take humans to the Moon, and then on to Mars, making humans "multi-planetary". He said that the first Mars mission would carry the Tesla humanoid robot "Optimus", which was shown to the public last year.


Under Trump, AI Scientists Are Told to Remove 'Ideological Bias' From Powerful Models

WIRED

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of "AI safety," "responsible AI," and "AI fairness" from the skills it expects of members, and introduce a request to prioritize "reducing ideological bias, to enable human flourishing and economic competitiveness." The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups. The new agreement also removes mention of developing tools "for authenticating content and tracking its provenance" as well as "labeling synthetic content," signaling less interest in tracking misinformation and deepfakes.