Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature
Bao, Guangsheng, Zhao, Yanbin, Teng, Zhiyang, Yang, Linyi, Zhang, Yue
arXiv.org Artificial Intelligence
Table 4: Details of the source models used to produce machine-generated text. We assess the performance of our methodologies using text generations sourced from various models, as outlined in Table 4. These models are arranged in order of their parameter count; those with fewer than 20 billion parameters are run locally on a Tesla A100 GPU (80G). For models with over 6 billion parameters, we employ half-precision (float16); otherwise, we use full-precision (float32). For larger models such as GPT-3, ChatGPT, and GPT-4, we use the OpenAI API for the evaluations. Additionally, we provide information about the training corpus associated with each model, which we believe is pertinent for understanding the detection accuracy of different sampling and scoring models when applied to text generations originating from diverse source models, domains, and languages.
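The precision rule described above can be sketched as a small helper. This is a minimal illustration, not the authors' code; the function name and the use of a parameter-count threshold are assumptions based on the thresholds stated in the text (half-precision above 6B parameters, full-precision otherwise):

```python
def choose_dtype(num_params_billions: float) -> str:
    """Pick the load precision for a local model, following the
    rule stated in the text: float16 for models over 6B parameters,
    float32 otherwise. (Hypothetical helper for illustration.)"""
    return "float16" if num_params_billions > 6 else "float32"

# Example: a 13B model would be loaded in half precision,
# while a 1.5B model would be loaded in full precision.
print(choose_dtype(13))   # float16
print(choose_dtype(1.5))  # float32
```

In practice, with a library like Hugging Face Transformers, this choice would typically be passed as the `torch_dtype` argument when loading the model.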
Oct-8-2023