ChatGPT generates fake data set to support scientific hypothesis
The artificial-intelligence model that powers ChatGPT can create superficially plausible scientific data sets. Credit: Mateusz Slodkowski/SOPA Images/LightRocket via Getty

Researchers have used the technology behind the artificial-intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim.

In a paper published in JAMA Ophthalmology on 9 November [1], the authors used GPT-4 -- the latest version of the large language model on which ChatGPT runs -- paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated -- wrongly -- that one treatment is better than the other.

"Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says study co-author Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy.

The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity.
Nov-22-2023
- AI-Alerts:
- 2023 > 2023-11 > AAAI AI-Alert for Nov 28, 2023 (1.00)
- Genre:
- Research Report
- Experimental Study (1.00)
- New Finding (0.90)
- Industry:
- Health & Medicine
- Pharmaceuticals & Biotechnology (0.57)
- Surgery (0.55)
- Therapeutic Area > Ophthalmology/Optometry (0.89)
- Technology: