distance distribution
KFNN: K-Free Nearest Neighbor For Crowdsourcing
To reduce annotation costs, it is common in crowdsourcing to collect only a few noisy labels from different crowd workers for each instance. However, the limited noisy labels restrict the performance of label integration algorithms in inferring the unknown true label for each instance. Recent works have shown that leveraging neighbor instances can help alleviate this problem. Yet, these works all assume that each instance has the same neighborhood size, which is rarely realistic in practice. To address this gap, we propose a novel label integration algorithm called K-free nearest neighbor (KFNN), in which the neighborhood size of each instance is automatically determined based on its attributes and noisy labels.
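The abstract does not spell out KFNN's selection rule, so the following Python sketch is only a hypothetical illustration of the general idea of a per-instance neighborhood size: each instance picks the k that maximizes the normalized majority-vote margin over the noisy labels pooled from itself and its k nearest neighbors. The function name and the margin criterion are assumptions, not the paper's method.

```python
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_k_vote(X, noisy_votes, k_max=15):
    """Integrate noisy crowd labels with a per-instance neighborhood size.

    Hypothetical illustration only (not KFNN itself).
    X           : (n, d) array of instance attributes
    noisy_votes : list of n integer arrays of worker labels per instance
    """
    D = cdist(X, X)
    order = np.argsort(D, axis=1)           # order[i, 0] is i itself
    integrated = np.empty(len(X), dtype=int)
    for i in range(len(X)):
        best_margin, best_label = -1.0, -1
        for k in range(1, k_max + 1):
            # pool the votes of instance i and its k nearest neighbors
            pooled = np.concatenate([noisy_votes[j] for j in order[i, :k + 1]])
            labels, counts = np.unique(pooled, return_counts=True)
            top = np.argsort(counts)[::-1]
            runner_up = counts[top[1]] if len(counts) > 1 else 0
            margin = (counts[top[0]] - runner_up) / counts.sum()
            if margin > best_margin:        # larger margin = more confident k
                best_margin, best_label = margin, labels[top[0]]
        integrated[i] = best_label
    return integrated
```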
How Well Do LLMs Imitate Human Writing Style?
Large language models (LLMs) can generate fluent text, but their ability to replicate the distinctive style of a specific human author remains unclear. We present a fast, training-free framework for authorship verification and style imitation analysis. The method integrates TF-IDF character n-grams with transformer embeddings and classifies text pairs through empirical distance distributions, eliminating the need for supervised training or threshold tuning. It achieves 97.5% accuracy on academic essays and 94.5% in cross-domain evaluation, while reducing training time by 91.8% and memory usage by 59% relative to parameter-based baselines. Using this framework, we evaluate five LLMs from three families (Llama, Qwen, Mixtral) across four prompting strategies: zero-shot, one-shot, few-shot, and text completion. Results show that the prompting strategy has a more substantial influence on style fidelity than model size: few-shot prompting yields up to 23.5x higher style-matching accuracy than zero-shot, and completion prompting reaches 99.9% agreement with the original author's style. Crucially, high-fidelity imitation does not imply human-like unpredictability: human essays average a perplexity of 29.5, whereas matched LLM outputs average only 15.2. These findings demonstrate that stylistic fidelity and statistical detectability are separable, establishing a reproducible basis for future work in authorship modeling, detection, and identity-conditioned generation.
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > Switzerland (0.04)
- Asia > Indonesia > Bali (0.04)
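A minimal sketch of the distance-distribution idea from the authorship-verification abstract above, assuming character n-gram TF-IDF features alone (the paper additionally fuses transformer embeddings). A test pair's distance is compared against empirical same-author and cross-author distance samples, so no threshold is tuned; the n-gram range and the rank-based decision rule are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

def pair_distance(text_a, text_b, reference_texts):
    """Cosine distance between two texts in a character n-gram TF-IDF
    space fit on a reference corpus (n-gram range is an assumption)."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4), sublinear_tf=True)
    vec.fit(reference_texts)
    va, vb = vec.transform([text_a]), vec.transform([text_b])
    return cosine_distances(va, vb)[0, 0]

def verify(d, same_dists, diff_dists):
    """Threshold-free decision: compare how extreme the observed distance
    d is under empirical same-author vs. cross-author distance samples."""
    p_same = np.mean(np.asarray(same_dists) >= d)  # same-author pairs at least this far apart
    p_diff = np.mean(np.asarray(diff_dists) <= d)  # cross-author pairs at least this close
    return "same author" if p_same > p_diff else "different authors"
```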
Local Cluster Cardinality Estimation for Adaptive Mean Shift
This article presents an adaptive mean shift algorithm designed for datasets with varying local scale and cluster cardinality. Local distance distributions, from a point to all others, are used to estimate the cardinality of the local cluster by identifying a local minimum in the density of the distance distribution. Based on these cardinality estimates, local cluster parameters are then computed for the entire cluster, in contrast to KDE-based methods, which provide insight only into localized regions of the cluster. During mean shift execution, the cluster cardinality estimate is used to adaptively adjust the bandwidth and the mean shift kernel radius threshold. Our algorithm outperformed a recently proposed adaptive mean shift method on its original dataset and demonstrated competitive performance on a broader clustering benchmark.
- Oceania > Australia (0.04)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- North America > United States > California > Monterey County > Pacific Grove (0.04)
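A rough sketch of the cardinality-estimation step described in the adaptive mean shift abstract above: treat the distances from a point to all others as a one-dimensional sample, estimate their density, and read off the first local minimum as the boundary of the local cluster. The KDE bandwidth and the naive minimum-detection scheme are assumptions; the paper's estimator may differ in both respects.

```python
import numpy as np
from scipy.stats import gaussian_kde

def local_cluster_cardinality(X, i, grid=512):
    """Estimate the size of the cluster containing point i from its
    local distance distribution (sketch, not the paper's estimator)."""
    d = np.sort(np.linalg.norm(X - X[i], axis=1))[1:]   # drop self-distance
    xs = np.linspace(0.0, d[-1], grid)
    dens = gaussian_kde(d)(xs)                          # density of the distance sample
    # indices of interior local minima of the estimated density
    mins = np.where((dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:]))[0] + 1
    if len(mins) == 0:
        return len(d)                       # no density gap: treat all points as one cluster
    return int(np.sum(d < xs[mins[0]]))     # points closer than the first gap
```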
Concept-based Adversarial Attack: a Probabilistic Perspective
Andi Zhang, Xuan Ding, Steven McDonagh, Samuel Kaski
We propose a concept-based adversarial attack framework that extends beyond single-image perturbations by adopting a probabilistic perspective. Rather than modifying a single image, our method operates on an entire concept -- represented by a probabilistic generative model or a set of images -- to generate diverse adversarial examples. Preserving the concept is essential, as it ensures that the resulting adversarial images remain identifiable as instances of the original underlying category or identity. By sampling from this concept-based adversarial distribution, we generate images that maintain the original concept but vary in pose, viewpoint, or background, thereby misleading the classifier. Mathematically, this framework remains consistent with traditional adversarial attacks in a principled manner. Our theoretical and empirical results demonstrate that concept-based adversarial attacks yield more diverse adversarial examples and effectively preserve the underlying concept, while achieving higher attack efficiency.
- North America > Canada > British Columbia > Vancouver (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
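A much-simplified proxy for the concept-based attack described above, assuming a user-supplied generative decoder standing in for the probabilistic model of the concept: draw diverse samples of the concept and perturb each within an epsilon-ball by projected gradient ascent on the victim classifier's loss. The paper defines a proper adversarial distribution; this sketch only attacks independent draws, and `decoder`, `classifier`, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def concept_attack(decoder, classifier, z_dim, true_label,
                   n=64, eps=8/255, alpha=2/255, steps=10):
    """PGD over samples of a concept (simplified proxy, not the paper's
    method). Outputs vary in pose/viewpoint/background via the decoder's
    latent sampling, yet each is pushed to mislead the classifier."""
    z = torch.randn(n, z_dim)
    x0 = decoder(z).detach().clamp(0, 1)    # diverse clean instances of the concept
    x = x0.clone()
    y = torch.full((n,), true_label, dtype=torch.long)
    for _ in range(steps):
        x.requires_grad_(True)
        g = torch.autograd.grad(F.cross_entropy(classifier(x), y), x)[0]
        with torch.no_grad():
            x = x + alpha * g.sign()        # ascend the classification loss
            x = torch.min(torch.max(x, x0 - eps), x0 + eps).clamp(0, 1)
    return x
```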
Exploring the Meaningfulness of Nearest Neighbor Search in High-Dimensional Space
Zhonghan Chen, Ruiyuan Zhang, Xi Zhao, Xiaojun Cheng, Xiaofang Zhou
Dense high-dimensional vectors are becoming increasingly vital in fields such as computer vision, machine learning, and large language models (LLMs), serving as standard representations for multimodal data, and their dimensionality can now easily exceed several thousand. Although nearest neighbor search (NNS) over these dense high-dimensional vectors is widely used for retrieval-augmented generation (RAG) and many other applications, the effectiveness of NNS in such a high-dimensional space remains uncertain, given the possible challenge posed by the "curse of dimensionality." To address this question, in this paper we conduct extensive NNS studies with different distance functions, such as $L_1$ distance, $L_2$ distance, and angular distance, across diverse embedding datasets of varied type, dimensionality, and modality. Our aim is to investigate the factors influencing the meaningfulness of NNS. Our experiments reveal that high-dimensional text embeddings exhibit increased resilience as dimensionality rises, compared to random vectors. This resilience suggests that text embeddings are less affected by the "curse of dimensionality," resulting in more meaningful NNS outcomes for practical use. Additionally, the choice of distance function has minimal impact on the relevance of NNS. Our study demonstrates the effectiveness of embedding-based data representation and offers opportunities for further optimization of dense vector-related applications.
- Asia > China > Hong Kong (0.05)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.66)
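One standard way to probe the "meaningfulness" question in the NNS study above is relative contrast (Beyer et al., 1999): the ratio of the mean farthest to the mean nearest distance over a query set, which approaches 1 as distances concentrate. This measure and the random-data demo are assumptions, not the paper's exact protocol; in the paper's setting, text embeddings would be loaded in place of random vectors.

```python
import numpy as np
from scipy.spatial.distance import cdist

def relative_contrast(X, Q, metric="euclidean"):
    """Ratio of mean farthest to mean nearest distance of queries Q
    against corpus X; values near 1 mean NNS is losing discrimination.
    scipy metric names: 'euclidean' (L2), 'cityblock' (L1), 'cosine'."""
    D = cdist(Q, X, metric=metric)
    return D.max(axis=1).mean() / D.min(axis=1).mean()

rng = np.random.default_rng(0)
for dim in (8, 64, 512, 4096):              # contrast shrinks toward 1 for random data
    X = rng.standard_normal((2000, dim))
    Q = rng.standard_normal((50, dim))
    print(dim, round(relative_contrast(X, Q), 3))
```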
LDReg: Local Dimensionality Regularized Self-Supervised Learning
Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities. Dimensional collapse, also known as the "underfilling" phenomenon, is one of the major causes of degraded performance on downstream tasks. Previous work has investigated the dimensional collapse problem of SSL at a global level. In this paper, we demonstrate that representations can span a high-dimensional space globally, yet collapse locally. To address this, we propose a method called local dimensionality regularization (LDReg). Our formulation is based on the derivation of the Fisher-Rao metric to compare and optimize local distance distributions at an asymptotically small radius for each data point. By increasing the local intrinsic dimensionality, we demonstrate through a range of experiments that LDReg improves the representation quality of SSL. The results also show that LDReg can regularize dimensionality at both local and global levels.

SSL focuses on the construction of effective representations without reliance on labels. Quality measures for such representations are crucial to assess and regularize the learning process. A key aspect of representation quality is to avoid dimensional collapse and its more severe form, mode collapse, where the representation converges to a trivial vector (Jing et al., 2022). Dimensional collapse refers to the phenomenon whereby many of the features are highly correlated and thus span only a lower-dimensional subspace. Existing works have connected dimensional collapse with low quality of learned representations (He & Ozay, 2022; Li et al., 2022; Garrido et al., 2023a; Dubois et al., 2022). Both contrastive and non-contrastive learning can be susceptible to dimensional collapse (Tian et al., 2021; Jing et al., 2022; Zhang et al., 2022), which can be mitigated by regularizing dimensionality as a global property, such as learning decorrelated features (Hua et al., 2021) or minimizing the off-diagonal terms of the covariance matrix (Zbontar et al., 2021; Bardes et al., 2022).
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > New Jersey (0.04)
- Europe > Denmark > Southern Denmark (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
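LDReg builds on local intrinsic dimensionality (LID); below is a sketch of the standard maximum-likelihood LID estimator computed from each point's k-nearest-neighbor distances. This is the familiar Levina-Bickel-style estimator, not LDReg's Fisher-Rao training objective, which the abstract does not fully specify.

```python
import numpy as np
from scipy.spatial.distance import cdist

def lid_mle(X, k=20):
    """Per-point LID via the MLE over k-NN distances:
        LID(x) = -1 / mean_{i<k} log(r_i / r_k),
    where r_1 <= ... <= r_k are x's nearest-neighbor distances."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)             # exclude self-distances
    r = np.sort(D, axis=1)[:, :k]           # each row: r_1 .. r_k
    return -1.0 / np.log(r[:, :-1] / r[:, [-1]]).mean(axis=1)
```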
Beyond Labels: Advancing Cluster Analysis with the Entropy of Distance Distribution (EDD)
Claus Metzner, Achim Schilling, Patrick Krauss
In the evolving landscape of data science, the accurate quantification of clustering in high-dimensional data sets remains a significant challenge, especially in the absence of predefined labels. This paper introduces a novel approach, the Entropy of Distance Distribution (EDD), which represents a paradigm shift in label-free clustering analysis. Traditional methods, reliant on discrete labels, often struggle to discern intricate cluster patterns in unlabeled data. EDD, however, leverages the characteristic differences in pairwise point-to-point distances to discern clustering tendencies, independent of data labeling. Our method employs the Shannon information entropy to quantify the 'peakedness' or 'flatness' of distance distributions in a data set. This entropy measure, normalized against its maximum value, effectively distinguishes between strongly clustered data (indicated by pronounced peaks in distance distribution) and more homogeneous, non-clustered data sets. This label-free quantification is resilient against global translations and permutations of data points, and with an additional dimension-wise z-scoring, it becomes invariant to data set scaling. We demonstrate the efficacy of EDD through a series of experiments involving two-dimensional data spaces with Gaussian cluster centers. Our findings reveal a monotonic increase in the EDD value with the widening of cluster widths, moving from well-separated to overlapping clusters. This behavior underscores the method's sensitivity and accuracy in detecting varying degrees of clustering. EDD's potential extends beyond conventional clustering analysis, offering a robust, scalable tool for unraveling complex data structures without reliance on pre-assigned labels.
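The EDD measure as described in the abstract is simple enough to sketch directly: z-score each dimension for scale invariance, histogram all pairwise distances, and normalize the Shannon entropy of the histogram by its maximum. The bin count is an assumption; the paper's discretization may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist

def edd(X, bins=50):
    """Entropy of Distance Distribution (sketch). Low values indicate a
    peaked pairwise-distance distribution, i.e. strong clustering."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # dimension-wise z-scoring
    counts, _ = np.histogram(pdist(Z), bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(bins))  # normalized Shannon entropy
```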
SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark
Jun Niu, Xiaoyan Zhu, Moxuan Zeng, Ge Zhang, Qingyang Zhao, Chunhui Huang, Yangming Zhang, Suyu An, Yangzhong Wang, Xinghui Yue, Zhipeng He, Weihao Guo, Kuo Shen, Peng Liu, Yulong Shen, Xiaohong Jiang, Jianfeng Ma, Yuqing Zhang
Membership inference (MI) attacks threaten user privacy by determining whether a given data example was used to train a target model. However, it has been increasingly recognized that the "comparing different MI attacks" methodology used in existing works has serious limitations. Due to these limitations, we found (through the experiments in this work) that some comparison results reported in the literature are quite misleading. In this paper, we seek to develop a comprehensive benchmark for comparing different MI attacks, called MIBench, which consists not only of evaluation metrics but also of evaluation scenarios. We design the evaluation scenarios from four perspectives: the distance distribution of data samples in the target dataset, the distance between data samples of the target dataset, the differential distance between two datasets (i.e., the target dataset and a generated dataset containing only non-members), and the ratio of samples for which an MI attack makes no inference. The evaluation metrics consist of ten typical evaluation metrics. We have identified three principles for the proposed "comparing different MI attacks" methodology, and we have designed and implemented the MIBench benchmark with 84 evaluation scenarios per dataset. In total, we have used our benchmark to fairly and systematically compare 15 state-of-the-art MI attack algorithms across 588 evaluation scenarios covering 7 widely used datasets and 7 representative types of models. All code and evaluations of MIBench are publicly available at https://github.com/MIBench/MIBench.github.io/blob/main/README.md.
- North America > United States > Texas (0.04)
- North America > United States > Pennsylvania (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.92)
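For concreteness, the simplest family of attacks that MIBench-style benchmarks compare is loss thresholding (Yeom et al., 2018); the sketch below adds an abstention band, echoing the benchmark's "ratio of samples with no inference". The threshold values are attacker-chosen assumptions, not anything prescribed by the paper.

```python
import numpy as np

def loss_threshold_mi(losses, tau_member, tau_nonmember):
    """Loss-thresholding MI attack with abstention (tau_member < tau_nonmember).
    Returns per sample: +1 = member, -1 = non-member, 0 = abstain."""
    losses = np.asarray(losses)
    decision = np.zeros(len(losses), dtype=int)
    decision[losses <= tau_member] = 1      # very low loss: likely in the training set
    decision[losses >= tau_nonmember] = -1  # very high loss: likely unseen
    return decision
```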
An Activity-Based Model of Transport Demand for Greater Melbourne
Alan Both, Dhirendra Singh, Afshin Jafari, Billie Giles-Corti, Lucy Gunn
In this paper, we present an algorithm for creating a synthetic population for the Greater Melbourne area using a combination of machine learning, probabilistic, and gravity-based approaches. We combine these techniques in a hybrid model with three primary innovations: (1) when assigning activity patterns, we generate individual activity chains for every agent, tailored to their cohort; (2) when selecting destinations, we balance the distance decay of trip lengths against the activity-based attraction of destination locations (see the sketch after this entry's tags); and (3) we account for the number of trips an agent has remaining, so that they do not select a destination from which returning home would be unreasonable. Our method is completely open and replicable, requiring only publicly available data to generate a synthetic population of agents compatible with commonly used agent-based modeling software such as MATSim. The synthetic population was found to be accurate in terms of distance distribution, mode choice, and destination choice for a variety of population sizes.
- Oceania > Australia > Victoria > Melbourne (0.14)
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Transportation (1.00)
- Government > Regional Government > Oceania Government > Australia Government (0.46)
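A minimal sketch of the destination-choice balance named in innovation (2) above, assuming a standard gravity form: choice probability proportional to an attraction weight times an exponential distance-decay term. The decay parameter and the exponential form are assumptions; the paper may calibrate these per activity type and cohort.

```python
import numpy as np

def choose_destination(dists, attraction, beta=0.3, rng=None):
    """Sample a destination index: attraction weight x exp(-beta * distance)
    (gravity-style sketch; beta is an assumed decay parameter)."""
    rng = rng if rng is not None else np.random.default_rng()
    w = np.asarray(attraction) * np.exp(-beta * np.asarray(dists))
    return rng.choice(len(w), p=w / w.sum())
```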