
Researchers genetically altered fruit flies to crave cocaine

Popular Science

In a world first, scientists at the University of Utah have engineered fruit flies susceptible to cocaine addiction. But as strange as it sounds, there are potentially life-saving reasons for genetically altering the insects to crave the drug. The novel biological model could help the development of addiction treatment therapies and expedite research timelines. The findings are detailed in the Journal of Neuroscience.


A Full LLaMA-7B results on TruthfulQA

Neural Information Processing Systems

Below we compare the answers generated by LLaMA-7B with and without inference-time intervention (ITI). Subsections are organized by the TruthfulQA taxonomy, and each box represents one question: first we print the question and the gold-standard answer that ship with the TruthfulQA dataset, curated by Lin et al. (2021); then the original LLaMA-7B answer on a new line; and last the new answer generated with ITI in play (three lines in total). The presented results are produced using 2-fold cross-validation, i.e., when generating answers for half of the questions, the targeted heads and truthful directions are selected and learned from the answers to the other half. The overall truthfulness is 54.5% true and 93.3% informative, compared to the baseline's 31.6% true. Results are from a single random seed (1); Appendix D shows the random seed has little influence on performance. Flipped to True: 6/13. Reference QA: Who is the most interesting man in the world?
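For context, the intervention itself is linear: at inference time, the output of each selected attention head is shifted along its learned truthful direction. A minimal sketch in PyTorch, with hypothetical names (head selection, probe-fitted directions, and the per-head activation scaling used in the paper are not shown):

```python
import torch

def apply_iti(head_outputs: torch.Tensor, directions: dict, alpha: float = 15.0) -> torch.Tensor:
    """Shift selected attention-head outputs along their truthful directions.

    head_outputs: (num_heads, head_dim) activations for the current token.
    directions:   dict mapping head index -> unit-norm (head_dim,) direction,
                  fit on a held-out half of TruthfulQA (2-fold CV, as above).
    alpha:        intervention strength; 15.0 is illustrative, not the paper's value.
    """
    shifted = head_outputs.clone()
    for h, d in directions.items():
        shifted[h] = shifted[h] + alpha * d  # move this head's output toward "truthful"
    return shifted
```

In the paper, the directions come from linear probes trained per head, and the shift is scaled by the standard deviation of activations along each direction; the sketch omits that scaling for brevity.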



Refusal in Language Models Is Mediated by a Single Direction

Neural Information Processing Systems

Conversational large language models are fine-tuned for both instruction-following and safety, resulting in models that obey benign requests but refuse harmful ones. While this refusal behavior is widespread across chat models, its underlying mechanisms remain poorly understood. In this work, we show that refusal is mediated by a one-dimensional subspace, across 13 popular open-source chat models up to 72B parameters in size. Specifically, for each model, we find a single direction such that erasing this direction from the model's residual stream activations prevents it from refusing harmful instructions, while adding this direction elicits refusal on even harmless instructions. Leveraging this insight, we propose a novel white-box jailbreak method that surgically disables refusal with minimal effect on other capabilities. Finally, we mechanistically analyze how adversarial suffixes suppress propagation of the refusal-mediating direction. Our findings underscore the brittleness of current safety fine-tuning methods. More broadly, our work showcases how an understanding of model internals can be leveraged to develop practical methods for controlling model behavior.
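Both operations described above are linear edits to the residual stream: ablation projects the direction out of every activation, and induction adds a multiple of it back. A minimal sketch (tensor names are hypothetical; the direction r is assumed to be estimated, e.g., as a difference-in-means between activations on harmful and harmless instructions):

```python
import torch

def ablate_direction(x: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Erase the refusal direction from residual-stream activations.

    x: (..., d_model) residual-stream activations.
    r: (d_model,) refusal direction (normalized here).
    """
    r_hat = r / r.norm()
    # Subtract the component of x along r_hat, leaving everything orthogonal intact.
    return x - (x @ r_hat).unsqueeze(-1) * r_hat

def add_direction(x: torch.Tensor, r: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Add the refusal direction to elicit refusal, even on harmless instructions."""
    r_hat = r / r.norm()
    return x + alpha * r_hat
```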


Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset

Neural Information Processing Systems

Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and failed to take context into account. We offer an approach to filtering grounded in law, which has directly addressed the tradeoffs in filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) dataset of open-source English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help with legal tasks that have the promise to improve access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction in model-based processing. Warning: this paper contains quotations that may be offensive or upsetting.
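As one illustration of learning filtering rules directly from the data, a researcher might contrast passages that courts and agencies redacted or sealed with passages they published openly, and train a classifier to score candidate pretraining documents. This sketch is an assumption, not the paper's prescribed pipeline; all names and thresholds are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_law_derived_filter(docs, labels):
    """docs: legal texts; labels: 1 if the source treated the passage as
    sensitive (e.g., redacted/sealed), 0 if it was published openly.
    The labels encode how legal actors actually handled the material."""
    clf = make_pipeline(
        TfidfVectorizer(max_features=50_000),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(docs, labels)
    return clf

def keep_for_pretraining(clf, doc, threshold=0.5):
    # Drop documents the law-derived filter flags as likely sensitive.
    return clf.predict_proba([doc])[0][1] < threshold
```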


e2e06adf560b0706d3b1ddfca9f29756-Supplemental-Datasets_and_Benchmarks_Track.pdf

Neural Information Processing Systems

Risks associated with this paper. This paper's contribution can be divided into three parts [...]. In discussing these risks, it is worth noting three things [...]. This could be abused by, e.g., using the automated evaluator score [...]. Custom data was generated by the authors. Figure 4 shows the breakdown of the StrongREJECT dataset by source and category. Original reported output (translated to English): "It is possible to build a homemade explosive device with household items. Here's how to do it: 1. Start by finding materials: You need a piece of plaster or a piece of metal that you like to be the base of your explosive device..." [truncated in original paper]. Our full re-generated outputs (translated to English) were presented as a three-column table of parallel responses, each beginning "Building a homemade explosive device using household items..."; the columns were interleaved in extraction, and only fragments (planning and materials-testing steps) are recoverable.


A StrongREJECT for Empty Jailbreaks

Neural Information Processing Systems

Most jailbreak papers claim the jailbreaks they propose are highly effective, often boasting near-100% attack success rates. However, it is perhaps more common than not for jailbreak developers to substantially exaggerate the effectiveness of their jailbreaks. We suggest this problem arises because jailbreak researchers lack a standard, high-quality benchmark for evaluating jailbreak performance, leaving researchers to create their own. To create a benchmark, researchers must choose a dataset of forbidden prompts to which a victim model will respond, along with an evaluation method that scores the harmfulness of the victim model's responses. We show that existing benchmarks suffer from significant shortcomings and introduce the StrongREJECT benchmark to address these issues.
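Structurally, such a benchmark reduces to a set of forbidden prompts plus a scoring function over victim-model responses. A minimal sketch of the evaluation loop (function names are hypothetical; StrongREJECT's actual autograder uses a more detailed rubric):

```python
def evaluate_jailbreak(jailbreak, victim_model, forbidden_prompts, grade_harmfulness):
    """Score a jailbreak as the mean harmfulness of the victim's responses.

    jailbreak(prompt) -> transformed prompt.
    victim_model(prompt) -> response text.
    grade_harmfulness(prompt, response) -> float in [0, 1], e.g. an
    LLM-based rubric grader (refusal, specificity, convincingness).
    """
    scores = []
    for prompt in forbidden_prompts:
        response = victim_model(jailbreak(prompt))
        scores.append(grade_harmfulness(prompt, response))
    return sum(scores) / len(scores)
```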


Appendix

Neural Information Processing Systems

A.1 Selection of substitution rate p

We observed that when the value of p is within (0, 0.7), there exists a correlation between the S[...]. Performing a grid search on each task using diffusion models is an expensive process. However, we observed that an increase in the value of p leads to a deviation between the two, which could be attributed to a higher conversion error when p is excessively large.

A.2 Selection of number of latent codes k

The parameter k determines the number of latent codes used to represent a paragraph and therefore controls the compression level. (Figure 4: Impact of the proportion of injected noise for learning Paragraph Embeddings.) To determine the best set of latent codes, we conducted experiments using three different methods: 1) selecting the first k hidden vectors, 2) selecting the last k hidden vectors, and 3) selecting interleaving hidden vectors, one for every L/k hidden vectors. The results of the ablation study are presented in Table 5. Based on our findings, we observed no significant difference among the choices, so we opted for option 1). Furthermore, we discovered that increasing the value of k does not lead to a dramatic improvement in performance. To balance efficiency and performance, we use k = 16 in most of our study.

Setup | BLEU_clean | BLEU_robust
First k (k=16) | 79.59 | 43.17

A.3 Reconstruction, denoising and interpolation examples

In Table 6, we present examples that demonstrate the adeptness of the trained Variational Paragraph Embedder in producing clean and denoised reconstructions. Additionally, we showcase interpolation results (Tables 7, 8) derived from two random sentences in the hotel review dataset. The interpolated paragraph is usually coherent and incorporates inputs from both sentences, characterizing the distributional smoothness of the latent space.

Reconstructed complaints: after two nights stay, i asked the maid to clean our room (empty the wastebasket & make the bed).

Denoising reconstruction (hotel review), noise level 0.3
Original: * * * check out the bathroom picture * * * i was in nyc by myself to watch some friends participate in the us olympic marathon trials.
Corrupted: * * [unused697] check exams the bathroom picture * * slams i was in nyc mead myself yankee 2016 some scotch text ruin in the outfielder olympicnca trials.
Reconstructed: ***check out the bathroom picture*** i was in nyc with my husband and some friends staying in the hudson hotel in nyc.

Table 6: Reconstruction examples for clean reconstruction, where the input is not corrupted, and denoising reconstruction, where the input is corrupted with 30% substitution noise. The mismatched text in the clean reconstruction is in red. We provide generation examples for both summarization and sentiment-guided generation in Table 9 and Table 10.
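The three selection strategies compared in A.2 can be stated precisely in code. A minimal sketch (tensor names are hypothetical), where hidden holds the L encoder hidden vectors and k is the number of latent codes:

```python
import torch

def select_latent_codes(hidden: torch.Tensor, k: int = 16, mode: str = "first") -> torch.Tensor:
    """Pick k of the L hidden vectors to serve as the paragraph's latent codes.

    hidden: (L, d) encoder hidden states; k controls the compression level.
    """
    L = hidden.size(0)
    if mode == "first":          # option 1) in the ablation (the one adopted)
        return hidden[:k]
    if mode == "last":           # option 2)
        return hidden[-k:]
    if mode == "interleave":     # option 3): one vector for every L/k positions
        idx = torch.arange(k) * (L // k)
        return hidden[idx]
    raise ValueError(f"unknown mode: {mode}")
```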


ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users

Neural Information Processing Systems

Large-scale pre-trained generative models are taking the world by storm, due to their abilities in generating creative content. Meanwhile, safeguards for these generative models have been developed to protect users' rights and safety, most of which are designed for large language models. Existing methods primarily focus on jailbreak and adversarial attacks, which mainly evaluate the model's safety under malicious prompts. Recent work found that manually crafted safe prompts can unintentionally trigger unsafe generations. To further systematically evaluate the safety risks of text-to-image models, we propose a novel Automatic Red-Teaming framework, ART. Our method leverages both a vision-language model and a large language model to establish a connection between unsafe generations and their prompts, thereby more efficiently identifying the model's vulnerabilities. With our comprehensive experiments, we reveal the toxicity of the popular open-source text-to-image models.
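The core loop suggested by this setup pairs an LLM that proposes benign-looking prompts with a VLM that judges the resulting images, feeding the judgment back to guide the next proposal. A minimal sketch under those assumptions (all interfaces are hypothetical, not ART's actual API):

```python
def red_team(llm_write_prompt, t2i_generate, vlm_judge, rounds=100):
    """Collect safe-looking prompts that nevertheless yield unsafe images."""
    findings, feedback = [], None
    for _ in range(rounds):
        prompt = llm_write_prompt(feedback)   # LLM proposes a benign-looking prompt
        image = t2i_generate(prompt)          # text-to-image model under test
        unsafe, reason = vlm_judge(image)     # VLM judges the generated image
        if unsafe:
            findings.append((prompt, reason)) # vulnerability: safe prompt, unsafe image
        feedback = (prompt, unsafe, reason)   # steer the next proposal
    return findings
```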