random number
Masked Autoregressive Flow for Density Estimation
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
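The flow described in this abstract can be sketched concretely. In one MAF layer, each variable is transformed as x_i = u_i·exp(α_i) + μ_i, where μ_i and α_i depend only on x_1..x_{i-1}; density estimation inverts this map and accumulates the log-Jacobian. The toy conditioner below is a hypothetical stand-in (a real MAF learns μ and α with a masked network such as MADE):

```python
import math

# Hypothetical toy conditioner: mu_i and log-scale alpha_i are fixed
# linear functions of the preceding variables. A real MAF learns these
# with a masked autoencoder (MADE) so all outputs come from one pass.
def conditioner(prefix):
    mu = 0.5 * sum(prefix)
    alpha = 0.1 * len(prefix)  # log standard deviation
    return mu, alpha

def maf_log_density(x):
    """Log-density of x under one MAF layer with a standard-normal base."""
    logp = 0.0
    for i, xi in enumerate(x):
        mu, alpha = conditioner(x[:i])
        u = (xi - mu) * math.exp(-alpha)  # invert the flow: x -> u
        # base log-density of u, plus log |du/dx| = -alpha
        logp += -0.5 * (u * u + math.log(2 * math.pi)) - alpha
    return logp

print(maf_log_density([0.3, -1.2, 0.7]))
```

Stacking layers, each modelling the "random numbers" u of the layer below, simply sums these per-layer log-Jacobian terms.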
We start with common concerns and then respond to individual reviewer comments as space permits. Common concern 2: there should be a baseline using MCTS that assumes access to a simulator / common random numbers.
Thank you for the thoughtful and careful reviews. We hope the AC nominates some of you for reviewer awards. On the suggested MCTS baseline with access to a simulator / common random numbers: there appears to be some imprecision in the reviews about what this means. Then the environment stochasticity is re-sampled and the algorithm repeats.
Evaluating the Quality of Randomness and Entropy in Tasks Supported by Large Language Models
Karanjai, Rabimba, Lu, Yang, Chodavarapu, Ranjith, Xu, Lei, Shi, Weidong
The rapid advancement of large language model (LLM) technology has led to diverse applications, many of which inherently require randomness, such as stochastic decision-making, gaming, scheduling, AI agents, and cryptography-related tasks. However, the capabilities of LLMs in handling randomness, particularly in generating and utilizing random numbers effectively, remain unclear. This paper investigates the capacity of LLMs for handling tasks that involve randomness through a series of experiments. We designed a set of experiments that consider various factors that can influence an LLM's performance in tasks involving randomness, such as accessibility to external tools, types of tasks, model states (fresh vs. non-fresh), and prompting strategies. The experiments cover a range of tasks, including generating random numbers, generating random strings such as passwords, shuffling items, and evaluating the quality of randomness using entropy and the NIST randomness test suite. Our findings reveal that while LLMs can generate outputs that exhibit some degree of randomness, their performance is inconsistent and often deviates significantly from the expected behavior. The analysis of the experimental results highlights key limitations and areas where improvement is needed for LLMs to effectively handle tasks involving randomness.
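The entropy-based evaluation the abstract mentions can be illustrated with a minimal sketch (my own illustration, not the paper's code): Shannon entropy of a symbol stream, compared against the uniform maximum of log2(10) ≈ 3.32 bits for decimal digits.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform stream over the ten digits reaches log2(10) ~ 3.32 bits;
# a biased stream (the behavior reported for some LLM outputs) falls short.
uniform = list(range(10)) * 100
biased = [7] * 700 + [3] * 300
print(shannon_entropy(uniform))  # ~3.32
print(shannon_entropy(biased))   # ~0.88
```

The NIST SP 800-22 suite applies far stronger tests (runs, frequency within blocks, etc.), but a large entropy gap like the one above already signals non-random output.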
Stochastic Streets: A Walk Through Random LLM Address Generation in Four European Cities
Fu, Tairan, Campo-Nazareno, David, Coronado-Blázquez, Javier, Conde, Javier, Reviriego, Pedro, Lombardi, Fabrizio
Northeastern University, Boston, USA. Abstract: Large Language Models (LLMs) are capable of solving complex math problems or answering difficult questions on almost any topic, but can they generate random street addresses for European cities? LLMs have shown impressive performance across a wide range of tasks, such as answering questions on virtually any topic. However, there remain areas in which their performance falls short, for example, seemingly simple tasks like counting the letters in a word. In this column, we explore another such challenge: generating random street addresses for four major European cities. Our results reveal that LLMs exhibit strong biases, repeatedly selecting a limited set of streets and, for some models, even specific street numbers. Surprisingly, some of the more prominent and iconic streets are not selected by the models, and the most frequent numbers in the responses lack any clear significance.
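The bias the column describes, a limited set of streets dominating the samples, can be quantified with a simple concentration measure. The sketch below is my own illustration with hypothetical response data, not the authors' methodology or results:

```python
from collections import Counter

def top_k_coverage(streets, k=3):
    """Fraction of all responses accounted for by the k most frequent streets."""
    counts = Counter(streets)
    top = sum(c for _, c in counts.most_common(k))
    return top / len(streets)

# Hypothetical model responses illustrating the reported failure mode:
# a handful of streets dominate the supposedly random samples.
responses = (["Gran Via"] * 6 + ["Calle Mayor"] * 3 + ["Paseo del Prado"] * 2
             + ["Calle de Toledo", "Calle de Alcala", "Ronda de Valencia"])
print(top_k_coverage(responses))  # 11/14, about 0.79
```

For a genuinely uniform sampler over a city's thousands of streets, top-3 coverage of a modest sample would be near zero; values close to 1 indicate the strong bias reported here.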
Stable Acoustic Relay Assignment with High Throughput via Laser Chaos-based Reinforcement Learning
Chen, Zengjing, Wang, Lu, Xing, Chengzhi
Underwater Acoustic Networks (UANs) have gained significant attention from both industry and academia due to their advantages in improving link reliability, increasing system capacity, extending transmission range, and so on. Acoustic communication is the most widely used form of underwater communication, as sound waves are not absorbed by water as readily as electromagnetic or optical waves [1]. UANs typically consist of acoustic-linked seabed sensors, autonomous underwater vehicles, and ground stations that provide links to onshore control centers. Because network nodes are battery-powered and shallow-water acoustic channels suffer from low available bandwidth and highly varying multi-path, maximizing throughput while minimizing energy consumption is a very challenging task [2]. Recent studies have discussed the challenges and opportunities of underwater cognitive communication [3], proposed cooperative automatic repeat request protocols for higher channel quality [4], and analyzed the impact of low transmission rates and long preambles on medium access control protocols [5]. Artificial intelligence (AI) has grown rapidly in popularity in recent years, and many industries and research fields have explored its potential applications, including information theory, game theory, biological systems, and so on [6-9].
Here's how to generate a truly random number with quantum physics
Very little in this life is truly random. A coin flip is influenced by the flipper's force, its surrounding airflow, and gravity. Similar variables dictate rolling a pair of dice or shuffling a deck of cards, while even classical computing's cryptographic algorithms are theoretically susceptible to outside influence or bias. "True randomness is something that nothing in the universe can predict in advance," explained Krister Shalm, a physicist at the National Institute of Standards and Technology (NIST).
Detecting Musical Deepfakes
Abstract -- The proliferation of Text-to-Music (TTM) platforms has democratized music creation, letting users effortlessly generate high-quality compositions. However, this innovation has also introduced challenges for musicians and the music industry. This research uses the FakeMusicCaps dataset to address the challenge of detecting AI-generated songs by classifying audio as deepfake or human. To simulate a real-world adversarial entity, tempo stretching and pitch shifting modifications were applied to the dataset. Mel spectrograms were generated from the resulting datasets and used to train and test a convolutional neural network. This paper also explores the ethical and societal implications of TTM platforms, suggesting that detection systems developed and employed with care are a necessary tool to safeguard musicians and foster the positive potential of TTM platforms and generative AI in music. Rapid advances in generative AI have upended the creative landscape, enabling almost anyone to easily create music that can be hard to distinguish from human-made compositions. AI-generated music is part of a wider class of AI-generated media and art that falls under the category of "deepfake".
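The adversarial perturbations mentioned in this abstract can be sketched in miniature. The naive resampler below is my own illustration, not the paper's pipeline; note that plain resampling shifts pitch together with tempo, whereas the paper applies tempo stretching and pitch shifting as separate modifications, which in practice requires a phase-vocoder-style method:

```python
def stretch(signal, factor):
    """Naively time-stretch a waveform by linear-interpolation resampling.

    factor < 1 slows the audio down (longer output); factor > 1 speeds it up.
    Caveat: this also shifts pitch by the same factor.
    """
    n_out = int(len(signal) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor          # fractional position in the input
        j = int(pos)
        frac = pos - j
        a = signal[j]
        b = signal[min(j + 1, len(signal) - 1)]
        out.append(a * (1 - frac) + b * frac)  # linear interpolation
    return out

# Stretching a 4-sample ramp to half speed doubles its length.
print(stretch([0.0, 1.0, 2.0, 3.0], 0.5))
```

Applying such perturbations before computing Mel spectrograms tests whether the CNN detector generalizes beyond clean AI-generated audio.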
Deterministic or probabilistic? The psychology of LLMs as random number generators
Large Language Models (LLMs) have transformed text generation through inherently probabilistic context-aware mechanisms, mimicking human natural language. In this paper, we systematically investigate the performance of various LLMs when generating random numbers, considering diverse configurations such as different model architectures, numerical ranges, temperature, and prompt languages. Our results reveal that, despite their stochastic transformer-based architecture, these models often exhibit deterministic responses when prompted for random numerical outputs. In particular, we find significant differences when changing the model, as well as the prompt language, attributing this phenomenon to biases deeply embedded within the training data. Models such as DeepSeek-R1 can shed some light on the internal reasoning process of LLMs, despite arriving at similar results. These biases induce predictable patterns that undermine genuine randomness, as LLMs end up reproducing our own human cognitive biases.
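Deviations from uniformity like those this abstract reports are conventionally measured with a chi-square goodness-of-fit statistic. The sketch below is my own illustration with hypothetical counts (including an over-preference for 7, a bias often reported for both human and LLM "random" guesses), not the paper's data:

```python
from collections import Counter

def chi_square_uniform(samples, k):
    """Chi-square statistic against the hypothesis of uniformity over bins 0..k-1."""
    n = len(samples)
    expected = n / k
    counts = Counter(samples)
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(k))

# Hypothetical draws over 0..9: a heavily biased "random number generator".
biased = [7] * 40 + [3] * 30 + [1] * 30  # 100 draws
print(chi_square_uniform(biased, 10))    # 240.0
```

For genuinely uniform draws the statistic stays near k - 1 (here 9); a value like 240 would reject uniformity at any conventional significance level.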