DALD: Improving Logits-based Detector without Logits from Black-box LLMs
Cong Zeng, Shengkun Tang, Xianjun Yang, Yuanzhou Chen
Neural Information Processing Systems
The advent of Large Language Models (LLMs) has revolutionized text generation, producing outputs that closely mimic human writing. This blurring of the line between machine- and human-written text presents new challenges in distinguishing one from the other, a task further complicated by the frequent updates and closed nature of leading proprietary LLMs. Traditional logits-based detection methods leverage surrogate models to identify LLM-generated content when the exact logits are unavailable from black-box LLMs. However, these methods grapple with the misalignment between the distributions of the surrogate and the often undisclosed target models, leading to performance degradation, particularly when new, closed-source models are introduced. Furthermore, while current methodologies are generally effective when the source model is identified, they falter in scenarios where the model version remains unknown or the test set comprises outputs from various source models.
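To make the surrogate-model setup concrete, below is a minimal sketch of the basic idea behind logits-based detection when the target LLM's logits are inaccessible: an open surrogate model scores a passage by its average per-token log-likelihood. This is not the paper's DALD method; the choice of "gpt2" as the surrogate and the thresholding logic are purely illustrative assumptions.

```python
# Illustrative sketch only (not DALD): score a passage with a surrogate
# model's average log-likelihood, the core signal used by logits-based
# detectors when the target black-box LLM's logits are unavailable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def surrogate_log_likelihood(text: str, model_name: str = "gpt2") -> float:
    """Average per-token log-likelihood of `text` under the surrogate model."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With `labels` supplied, the model returns the mean cross-entropy
        # loss over predicted tokens; its negation is the mean log-likelihood.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

if __name__ == "__main__":
    sample = "Large language models can produce remarkably fluent text."
    score = surrogate_log_likelihood(sample)
    # Higher (less negative) scores under the surrogate are typically taken
    # as evidence of machine-generated text; the threshold is data-dependent.
    print(f"Average per-token log-likelihood: {score:.3f}")
```

The detection quality of such a score hinges on how well the surrogate's distribution matches the unknown target model's, which is exactly the misalignment problem the abstract highlights.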