mind



Would YOU sit on it? Scientists develop a futuristic chair that puts you in an 'altered state of mind' within minutes

Daily Mail - Science & tech

Would you be brave enough to sit on a chair that can send you into an 'altered state of mind' within minutes? That is the wild promise of the Aiora chair, a futuristic seat designed by scientists from the University of Essex and British furniture company DavidHugh LTD.


Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models

Neural Information Processing Systems

Large language models (LLMs) have exhibited impressive performance in language comprehension and various reasoning tasks. However, their abilities in spatial reasoning, a crucial aspect of human cognition, remain relatively unexplored. Humans possess a remarkable ability to create mental images of unseen objects and actions through a process known as the Mind's Eye, enabling imagination of the unseen world. Inspired by this cognitive capacity, we propose Visualization-of-Thought (VoT) prompting. VoT aims to elicit the spatial reasoning of LLMs by visualizing their reasoning traces, thereby guiding subsequent reasoning steps.
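To make the idea concrete, here is a minimal sketch of what a VoT-style prompt might look like, based only on the abstract above; the exact prompt wording, the `complete` call, and the grid-navigation task are illustrative assumptions, not the paper's protocol.

```python
# A sketch of VoT-style prompting. `complete` stands in for any LLM
# text-completion call and is a hypothetical name, not a real API.

VOT_INSTRUCTION = (
    "Solve the task step by step. After each reasoning step, draw the "
    "current state of the world as a small ASCII grid before continuing."
)

def vot_prompt(task: str) -> str:
    """Build a prompt that interleaves reasoning steps with visualizations."""
    return f"{VOT_INSTRUCTION}\n\nTask: {task}\n\nStep 1:"

prompt = vot_prompt(
    "You start at the top-left corner of a 3x3 grid. "
    "Move right, then down, then down. Which cell are you in now?"
)
print(prompt)  # e.g. response = complete(prompt)
```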


Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension

Neural Information Processing Systems

The success of over-parameterized neural networks trained to near-zero training error has caused great interest in the phenomenon of benign overfitting, where estimators are statistically consistent even though they interpolate noisy training data. While benign overfitting in fixed dimension has been established for some learning methods, current literature suggests that for regression with typical kernel methods and wide neural networks, benign overfitting requires a high-dimensional setting, where the dimension grows with the sample size. In this paper, we show that the smoothness of the estimators, and not the dimension, is the key: benign overfitting is possible if and only if the estimator's derivatives are large enough. We generalize existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension. Conversely, we show that benign overfitting is possible for regression with a sequence of spiky-smooth kernels with large derivatives. Using neural tangent kernels, we translate our results to wide neural networks. We prove that while infinite-width networks do not overfit benignly with the ReLU activation, this can be fixed by adding small high-frequency fluctuations to the activation function. Our experiments verify that such neural networks, while overfitting, can indeed generalize well even on low-dimensional data sets.
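As a rough illustration of the final point, the sketch below perturbs a ReLU with a small high-frequency sinusoid; the amplitude `eps` and frequency `omega` are arbitrary illustrative choices, not the paper's exact construction.

```python
import numpy as np

def spiky_relu(x: np.ndarray, eps: float = 1e-2, omega: float = 1e3) -> np.ndarray:
    """ReLU plus a small high-frequency fluctuation: the function stays within
    `eps` of plain ReLU, but its derivative oscillates with amplitude eps*omega."""
    return np.maximum(x, 0.0) + eps * np.sin(omega * x)

x = np.linspace(-1.0, 1.0, 5)
print(spiky_relu(x))       # pointwise close to ReLU...
print(np.maximum(x, 0.0))  # ...while the derivative fluctuates rapidly
```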



Mind the Nuisance: Gaussian Process Classification using Privileged Noise

Neural Information Processing Systems

The learning with privileged information setting has recently attracted a lot of attention within the machine learning community, as it allows the integration of additional knowledge into the training process of a classifier, even when this comes in the form of a data modality that is not available at test time. Here, we show that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC). That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC probit likelihood function. Extensive experiments on public datasets show that the proposed GPC method using privileged noise, called GPC+, improves over a standard GPC without privileged knowledge, and also over the current state-of-the-art SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep learning methods can be compressed as privileged information.
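The sketch below illustrates one plausible reading of "privileged information as noise modulating the slope of the probit likelihood"; the toy noise model and function names are assumptions for illustration, not the paper's GPC+ implementation.

```python
import numpy as np
from scipy.stats import norm

def privileged_noise(x_star: np.ndarray) -> np.ndarray:
    """Toy noise model (an assumption): examples with a larger privileged-feature
    norm are treated as harder and get a flatter, less confident likelihood."""
    return 0.5 + np.linalg.norm(x_star, axis=1)

def probit_likelihood(y: np.ndarray, f: np.ndarray, x_star: np.ndarray) -> np.ndarray:
    """p(y | f, x*) = Phi(y * f / sigma(x*)) with labels y in {-1, +1}: the
    privileged noise sigma(x*) modulates the slope of the probit."""
    return norm.cdf(y * f / privileged_noise(x_star))

f = np.array([1.0, 1.0])           # identical latent scores...
x_star = np.array([[0.1], [5.0]])  # ...but different privileged difficulty
print(probit_likelihood(np.array([1, 1]), f, x_star))  # confident vs. hedged
```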


Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making

Neural Information Processing Systems

As society increasingly relies on AI-based tools for decision-making in socially sensitive domains, investigating the fairness and equity of such automated systems has become a critical field of inquiry. Most of the literature in fair machine learning focuses on defining and achieving fairness criteria in the context of prediction, while not explicitly focusing on how these predictions may be used later on in the pipeline. For instance, if commonly used criteria, such as independence or sufficiency, are satisfied for a prediction score S used for binary classification, they need not be satisfied after an application of a simple thresholding operation on S (as commonly used in practice). In this paper, we take an important step to address this issue in numerous statistical and causal notions of fairness. We introduce the notion of a margin complement, which measures how much a prediction score S changes due to a thresholding operation. We then demonstrate that the marginal difference in the optimal 0/1 predictor \widehat{Y} between groups, written P(\hat{y} \mid x_1) - P(\hat{y} \mid x_0), can be causally decomposed into the influences of X on the L_2-optimal prediction score S and the influences of X on the margin complement M, along different causal pathways (direct, indirect, spurious).
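Reading the margin complement as the gap between the thresholded predictor and the score, a sketch of the decomposition might look as follows (definitions inferred from the abstract; the paper's exact notation may differ):

```latex
% Sketch only: M = \widehat{Y} - S is one reading of "margin complement";
% the paper's precise definitions may differ.
\[
\widehat{Y} = \mathbb{1}[S > t], \qquad M = \widehat{Y} - S,
\]
\[
P(\hat{y} \mid x_1) - P(\hat{y} \mid x_0)
  = \bigl(\mathbb{E}[S \mid x_1] - \mathbb{E}[S \mid x_0]\bigr)
  + \bigl(\mathbb{E}[M \mid x_1] - \mathbb{E}[M \mid x_0]\bigr),
\]
% with each term further attributed to direct, indirect, and spurious
% causal pathways from X.
```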


Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

Neural Information Processing Systems

We present the modality gap, an intriguing geometric phenomenon of the representation space of multi-modal models. Specifically, we show that different data modalities (e.g., images and text) are embedded at arm's length in their shared representation space. Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization. In model initialization, we show empirically and theoretically that the representation of a common deep neural network is restricted to a narrow cone. As a consequence, in a multi-modal model with two encoders, the representations of the two modalities are clearly apart when the model is initialized. During optimization, contrastive learning keeps the different modalities separate by a certain distance, which is influenced by the temperature parameter in the loss function.
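A minimal sketch of how such a gap can be measured in practice, with random vectors standing in for real image and text embeddings (nothing below is the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for image/text embeddings; a shift between the two
# clusters mimics the separation the abstract describes.
img = rng.normal(loc=+0.5, size=(1000, 64))
txt = rng.normal(loc=-0.5, size=(1000, 64))

# Contrastive models typically normalize embeddings to the unit sphere.
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

# One simple summary of the gap: the distance between modality centroids.
gap = img.mean(axis=0) - txt.mean(axis=0)
print("modality gap magnitude:", np.linalg.norm(gap))
```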


Re-evaluating Theory of Mind evaluation in large language models

Hu, Jennifer, Sosa, Felix, Ullman, Tomer

arXiv.org Artificial Intelligence

The question of whether large language models (LLMs) possess Theory of Mind (ToM) -- often defined as the ability to reason about others' mental states -- has sparked significant scientific and public interest. However, the evidence as to whether LLMs possess ToM is mixed, and the recent growth in evaluations has not resulted in a convergence. Here, we take inspiration from cognitive science to re-evaluate the state of ToM evaluation in LLMs. We argue that a major reason for the disagreement on whether LLMs have ToM is a lack of clarity on whether models should be expected to match human behaviors, or the computations underlying those behaviors. We also highlight ways in which current evaluations may be deviating from "pure" measurements of ToM abilities, which also contributes to the confusion. We conclude by discussing several directions for future research, including the relationship between ToM and pragmatic communication, which could advance our understanding of artificial systems as well as human cognition.