rarity


A long-lost silver dollar may be worth $5 million

Popular Science

The "King of American Coins" remained hidden in a late collector's archive for decades. One of the country's rarest coins is rarer than even expert coin collectors believed. After the surprise discovery of a long-lost 1804 dollar (aka the "King of American Coins"), the rarity's total known count now stands at 16. Regardless of its ranking, the silver coin is expected to fetch significantly more than its original worth when it hits the auction block on December 9. According to auctioneers at Stack's Bowers Galleries, the story begins with former President Andrew Jackson.




DIT: Dimension Reduction View on Optimal NFT Rarity Meters

Belousov, Dmitry, Yanovich, Yury

arXiv.org Artificial Intelligence

Non-fungible tokens (NFTs) have become a significant digital asset class, each uniquely representing virtual entities such as artworks. These tokens are stored in collections within smart contracts and are actively traded across platforms on the Ethereum, Bitcoin, and Solana blockchains. The value of NFTs is closely tied to the distinctive characteristics that define their rarity, leading to growing interest in quantifying rarity in both industry and academia. While rarity meters for assessing NFT rarity already exist, comparing them is challenging without direct access to the underlying collection data. The Rating over all Rarities (ROAR) benchmark addresses this challenge by providing a standardized framework for evaluating NFT rarity. This paper explores a dimension reduction approach to rarity design, introduces new performance measures and meters, and evaluates them on the ROAR benchmark. Our contributions to rarity meter design include an optimal meter construction based on non-metric weighted multidimensional scaling, the introduction of Dissimilarity in Trades (DIT) as a performance measure inspired by dimension reduction techniques, and a non-interpretable rarity meter, also called DIT, which demonstrates superior performance compared to existing methods.
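For intuition, here is a minimal sketch of one common baseline family of rarity meters: information-content scoring over trait frequencies, where rarer trait values contribute larger scores. This is an illustrative assumption about how such meters typically work, not the DIT meter or the ROAR benchmark from the paper.

```python
import math

def rarity_scores(collection):
    """Information-content rarity: rarer trait values contribute more.

    `collection` is a list of dicts mapping trait name -> trait value.
    """
    n = len(collection)
    freq = {}  # (trait, value) -> number of NFTs carrying that value
    for nft in collection:
        for trait, value in nft.items():
            freq[(trait, value)] = freq.get((trait, value), 0) + 1
    # Sum of -log(relative frequency) over an NFT's traits.
    return [
        sum(-math.log(freq[(t, v)] / n) for t, v in nft.items())
        for nft in collection
    ]

nfts = [
    {"background": "blue", "hat": "crown"},
    {"background": "blue", "hat": "cap"},
    {"background": "blue", "hat": "cap"},
]
print(rarity_scores(nfts))  # the unique crown scores highest
```

Meters of this kind are interpretable (each trait's contribution is visible), which is exactly the property the paper trades away in DIT in exchange for better agreement with observed trades.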




The Rarity of Musical Audio Signals Within the Space of Possible Audio Generation

Collins, Nick

arXiv.org Artificial Intelligence

A white noise signal can access any possible configuration of values, though over many samples it statistically tends toward a uniform spectral distribution, and it is highly unlikely to produce intelligible sound. But how unlikely? The probability that white noise generates a music-like signal over different durations is analyzed, based on some necessary features observed in real music audio signals, such as mostly proximate movement and zero crossing rate. Given the mathematical results, the rarity of music as a signal is considered overall. The applicability of this study is not just to show that music has a precious rarity value, but that examining the size of music relative to the overall size of audio signal space provides information to inform new generations of algorithmic music systems (which are now often founded directly on audio signal generation, and may relate to white noise via such machine learning processes as diffusion). Estimated upper bounds on the rarity of music are compared to the size of various physical and musical spaces, to better understand the magnitude of the results (pun intended). Underlying the research are the questions 'how much music is still out there?' and 'how much music could a machine learning process actually reach?'.
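As a rough illustration of the kind of estimate involved, the Monte Carlo sketch below measures how often uniform white noise keeps every consecutive step small, a crude stand-in for the "mostly proximate movement" feature of real music audio. The uniform-noise model, step threshold, and signal length are assumptions for illustration only, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def music_like_fraction(n_signals=100_000, length=4, max_step=0.1):
    """Fraction of uniform white-noise signals in [-1, 1] whose every
    consecutive step stays below `max_step` -- a crude proxy for the
    'mostly proximate movement' of real music audio signals."""
    x = rng.uniform(-1.0, 1.0, size=(n_signals, length))
    steps = np.abs(np.diff(x, axis=1))
    return np.all(steps < max_step, axis=1).mean()

# Each added sample multiplies the probability by roughly max_step
# (about 0.1 here), so realistic durations are far beyond anything
# Monte Carlo can observe; even 4 samples are already rare.
print(music_like_fraction())  # on the order of 1e-3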


Accurately Predicting Probabilities of Safety-Critical Rare Events for Intelligent Systems

Bai, Ruoxuan, Yang, Jingxuan, Gong, Weiduo, Zhang, Yi, Lu, Qiujing, Feng, Shuo

arXiv.org Artificial Intelligence

Intelligent systems are increasingly integral to our daily lives, yet rare safety-critical events present significant latent threats to their practical deployment. Addressing this challenge hinges on accurately predicting the probability that a safety-critical event occurs within a given time step from the current state, a metric we define as 'criticality'. The complexity of predicting criticality arises from the extreme data imbalance of rare events in the associated high-dimensional variables, a challenge we refer to as the curse of rarity. Existing methods tend to be either overly conservative or prone to overlooking safety-critical events, and thus struggle to achieve both high precision and high recall, which severely limits their applicability. This study develops a criticality prediction model that excels in both precision and recall for evaluating the criticality of safety-critical autonomous systems. We propose a multi-stage learning framework designed to progressively densify the dataset, mitigating the curse of rarity across stages. To validate our approach, we evaluate it in two cases: lunar lander and bipedal walker scenarios. The results demonstrate that our method surpasses traditional approaches, providing a more accurate and dependable assessment of criticality in intelligent systems.
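The sketch below is not the authors' framework, only a generic illustration of the staged-densification idea: each stage resamples the environment and keeps the states the current model already flags as risky, so rare events grow denser in the training set. The toy environment, risk threshold, and classifier are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def sample_states(n):
    """Toy environment: 2-D states, 'critical' only in a small corner,
    so positives make up roughly 0.25% of naive samples."""
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = ((x[:, 0] > 0.9) & (x[:, 1] > 0.9)).astype(int)
    return x, y

# Stage 0: naive sampling -- criticality labels are extremely imbalanced.
X, y = sample_states(20_000)
model = GradientBoostingClassifier().fit(X, y)

# Later stages: resample widely but keep only states the current model
# already flags as risky, so rare events grow denser each stage.
for stage in range(2):
    Xs, ys = sample_states(100_000)
    risky = model.predict_proba(Xs)[:, 1] > 0.01
    X, y = np.vstack([X, Xs[risky]]), np.concatenate([y, ys[risky]])
    model = GradientBoostingClassifier().fit(X, y)
    print(f"stage {stage + 1}: positive fraction {y.mean():.3f}")
```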


A comprehensive survey on rare event prediction

AIHub

Rare events are infrequent incidents characterized by scarcity, and they often present computational challenges in data analysis. As the name suggests, these events don't happen often, but they have a significant impact when they do. For example, in pulp-and-paper manufacturing, paper breakage that occurs roughly 1% of the time can cost $10,000 per hour. Predicting such elusive occurrences matters for cost management, operational efficiency, and energy conservation. In fact, these rare events are hidden pieces that, when discovered and understood, can lead to better decision-making and more efficient models.
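As one concrete remedy from the standard toolbox such surveys cover, the hedged sketch below reweights the rare class during training; the synthetic "sensor" data and choice of model are illustrative assumptions, not an example from the survey itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic sensor windows in which roughly 1% precede a failure.
X = rng.normal(size=(10_000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=10_000) > 2.8).astype(int)

# class_weight="balanced" scales each class's errors by its inverse
# frequency, one standard remedy for rare-event imbalance.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(classification_report(y, clf.predict(X), digits=3))
```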


Evaluation of Rarity of Fingerprints in Forensics

Neural Information Processing Systems

A method for computing the rarity of latent fingerprints represented by minutiae is given. It allows determining the probability of finding a match for an evidence print in a database of n known prints. The probability of random correspondence between evidence and database is determined in three procedural steps. In the registration step the latent print is aligned by finding its core point, which is done using a machine learning approach based on Gaussian processes. In the evidence probability evaluation step a generative model based on Bayesian networks is used to determine the probability of the evidence; it takes into account both the dependency of each minutia on nearby minutiae and the confidence of their presence in the evidence.
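As a back-of-the-envelope illustration of why the database size n matters, the snippet below uses the textbook independence approximation for at-least-one-match probability; the paper's generative model refines this by modeling dependencies between minutiae, and the numbers here are purely hypothetical.

```python
def match_probability(p_single, n):
    """Chance an evidence print randomly corresponds to at least one of
    n database prints, under a naive independence assumption (the
    paper's generative model refines this with minutia dependencies)."""
    return 1.0 - (1.0 - p_single) ** n

# A tiny per-print random-correspondence probability becomes a
# non-negligible risk at database scale.
print(match_probability(1e-7, 10**6))  # ~0.095
```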