Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack

Neural Information Processing Systems 

Fine-tuning services for Large Language Models (LLMs) have emerged as a new paradigm.
