A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks
arXiv.org Artificial Intelligence
… for examining how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 avoids repetitive and sequential patterns more effectively than humans, with notably lower repeat frequencies and adjacent-number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our …

True randomness is incredibly hard to generate artificially [48], and most computer-generated random number generators (RNGs) employed in these tasks are actually pseudorandom rather than truly random [11, 25]. Pseudorandom numbers are generated by algorithms that can produce long sequences of apparently random results, which are entirely determined by an initial value known as a seed. While these pseudorandom numbers appear unpredictable and successfully pass many statistical tests for randomness, they are not genuinely random, because their generation is algorithmically determined and can theoretically be reproduced if the seed value is known [11, 25].
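The seed-determinism point can be illustrated with a minimal sketch using Python's standard-library `random` module: two generators initialized with the same seed emit identical "random" sequences.

```python
import random

# A pseudorandom generator is fully determined by its seed:
# the same seed always reproduces the same sequence.
rng_a = random.Random(42)
rng_b = random.Random(42)

seq_a = [rng_a.randint(0, 9) for _ in range(10)]
seq_b = [rng_b.randint(0, 9) for _ in range(10)]

print(seq_a == seq_b)  # True: identical seeds yield identical sequences
```

This is why pseudorandom output, however uniform it looks statistically, is in principle reproducible by anyone who knows the seed.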
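The repeat and adjacent-number frequencies mentioned above can be sketched as simple pairwise statistics over a generated sequence. This is an illustrative reading of those metrics, not the paper's exact definitions: `repeat_frequency` counts consecutive duplicates, and `adjacent_frequency` counts steps of exactly ±1.

```python
from typing import Sequence

def repeat_frequency(seq: Sequence[int]) -> float:
    """Fraction of consecutive pairs where the same number repeats (e.g. 4, 4)."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def adjacent_frequency(seq: Sequence[int]) -> float:
    """Fraction of consecutive pairs that differ by exactly 1 (e.g. 3, 4 or 7, 6)."""
    pairs = list(zip(seq, seq[1:]))
    return sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)

sample = [3, 4, 4, 9, 1, 2, 7]       # 6 consecutive pairs
print(repeat_frequency(sample))      # 1 repeat (4, 4) out of 6 pairs
print(adjacent_frequency(sample))    # 2 adjacent steps (3→4, 1→2) out of 6 pairs
```

Human-generated sequences tend to score low on repeats (people over-avoid duplicates relative to true randomness); the finding summarized above is that ChatGPT-3.5's scores on such measures were lower still.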
Aug-19-2024