A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks

Harrison, Rachel M.

arXiv.org Artificial Intelligence 

Random number generation tasks (RNGTs) are used in psychology for examining how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 more effectively avoids repetitive and sequential patterns compared to humans, with notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random-generation behaviors.

True randomness is incredibly hard to generate artificially [48], and most computer-generated random number generators (RNGs) employed in these tasks are actually pseudorandom rather than truly random [11, 25]. Pseudorandom numbers are generated using algorithms that can produce long sequences of apparently random results, which are entirely determined by an initial value known as a seed. While these pseudorandom numbers appear unpredictable and successfully pass many statistical tests for randomness, they are not genuinely random because their generation is algorithmically determined and can theoretically be reproduced if the seed value is known [11, 25].
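The seed-determinism described above is easy to demonstrate. The following Python sketch is illustrative only (the paper does not prescribe any implementation); it shows that two pseudorandom generators initialized with the same seed emit identical sequences:

```python
import random

# Two independent generator objects, deliberately given the same seed.
gen_a = random.Random(42)
gen_b = random.Random(42)

# Each produces an apparently random digit sequence...
seq_a = [gen_a.randint(0, 9) for _ in range(10)]
seq_b = [gen_b.randint(0, 9) for _ in range(10)]

# ...yet the sequences are identical: the output is fully
# determined by the seed and therefore reproducible.
print(seq_a == seq_b)  # True
```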
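The abstract's two headline measures can likewise be made concrete. Below is one plausible operationalization, assuming repeat frequency means the proportion of transitions where a number immediately repeats and adjacent number frequency means the proportion of transitions between values differing by exactly one; the paper's exact scoring may differ (e.g., in whether 9 and 0 count as adjacent):

```python
def repeat_frequency(seq):
    """Proportion of consecutive pairs where the number repeats."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def adjacent_number_frequency(seq):
    """Proportion of consecutive pairs differing by exactly one."""
    pairs = list(zip(seq, seq[1:]))
    return sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)

sequence = [3, 7, 7, 2, 3, 4, 9, 0, 1, 5]
print(repeat_frequency(sequence))           # 1/9: the single 7 -> 7 repeat
print(adjacent_number_frequency(sequence))  # 3/9: 2->3, 3->4, 0->1
```

Under these definitions, a human-like bias would appear as a repeat frequency well below the 10% expected by chance for digits 0-9, since people tend to avoid producing the same number twice in a row.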
