
Gibbs Sampling with People

Neural Information Processing Systems

A core problem in cognitive science and machine learning is to understand how humans derive semantic representations from perceptual objects, such as color from an apple, pleasantness from a musical chord, or seriousness from a face. Markov Chain Monte Carlo with People (MCMCP) is a prominent method for studying such representations, in which participants are presented with binary choice trials constructed such that the decisions follow a Markov Chain Monte Carlo acceptance rule. However, while MCMCP has strong asymptotic properties, its binary choice paradigm generates relatively little information per trial, and its local proposal function makes it slow to explore the parameter space and find the modes of the distribution. Here we therefore generalize MCMCP to a continuous-sampling paradigm, where in each iteration the participant uses a slider to continuously manipulate a single stimulus dimension to optimize a given criterion such as 'pleasantness'. We formulate both methods from a utility-theory perspective, and show that the new method can be interpreted as 'Gibbs Sampling with People' (GSP). Further, we introduce an aggregation parameter to the transition step, and show that this parameter can be manipulated to flexibly shift between Gibbs sampling and deterministic optimization. In an initial study, we show that GSP clearly outperforms MCMCP; we then show that GSP provides novel and interpretable results in three other domains, namely musical chords, vocal emotions, and faces.
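The slider paradigm described above is easy to simulate: if a participant's responses follow a Boltzmann (softmax) choice rule over slider positions, then each trial is a draw from a Gibbs conditional of the utility-induced distribution. The sketch below is a toy illustration with a hypothetical Gaussian "participant" (not the paper's experimental code); the inverse temperature `beta` stands in for the aggregation parameter, shifting the move from sampling toward deterministic optimization as it grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical participant: latent "pleasantness" utility is a Gaussian
# bump over a 2-D stimulus space (an assumption for this demo).
MODE = np.array([0.3, 0.7])

def utility(x):
    return -10.0 * np.sum((x - MODE) ** 2)

def gsp_trial(x, dim, grid, beta=2.0):
    """One GSP trial: the participant slides dimension `dim` over `grid`.

    Under a softmax response rule, the chosen slider position is a draw
    from the Gibbs conditional p(x_dim | x_-dim). Larger `beta` shifts
    the move from Gibbs sampling toward deterministic optimization.
    """
    candidates = np.tile(x, (len(grid), 1))
    candidates[:, dim] = grid
    u = np.array([utility(c) for c in candidates])
    p = np.exp(beta * (u - u.max()))
    p /= p.sum()
    x = x.copy()
    x[dim] = rng.choice(grid, p=p)
    return x

grid = np.linspace(0.0, 1.0, 101)
x = rng.uniform(size=2)
history = []
for t in range(300):
    x = gsp_trial(x, t % 2, grid)  # cycle through stimulus dimensions
    history.append(x)
samples = np.array(history[100:])  # discard burn-in
print(samples.mean(axis=0))        # concentrates near the utility mode
```

Because the simulated utility is separable, each coordinate update here is an exact conditional draw, so the chain mixes quickly; with real participants the conditional is implicit in their slider behavior.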



Author Feedback: Gibbs Sampling with People

Neural Information Processing Systems

We have addressed the reviewers' comments by running seven new experiments, which shed useful new light on some of the questions raised, including:

R2: GSP seems intuitively dependent on parametrization; can you discuss?

R3: Does the benefit of aggregation disappear once you take into account the number of responses required?

R3: How do the experimenters avoid subjects merely making the same response 10 times?

R3: It would be worth discussing how the technique differs from e.g.

GSP is more mode-seeking than MCMCP, but nonetheless recovers the utility function more reliably (Fig. D).


Geo-Semantic-Parsing: AI-powered geoparsing by traversing semantic knowledge graphs

Nizzoli, Leonardo, Avvenuti, Marco, Tesconi, Maurizio, Cresci, Stefano

arXiv.org Artificial Intelligence

Online Social Networks (OSN) are privileged observation channels for understanding the geospatial facets of many real-world phenomena [1]. Unfortunately, in most cases OSN content lacks explicit and structured geographic information, as in the case of Twitter, where only a minimal fraction (1% to 4%) of messages are natively geotagged [2]. This shortage of explicit geographic information drastically limits the exploitation of OSN data in geospatial Decision Support Systems (DSS) [3]. Conversely, the prompt availability of geotagged content would empower existing systems and would open up the possibility to develop new and better geospatial services and applications [4, 5]. As a practical example of this kind, several social media-based systems have been proposed in recent years for mapping and visualizing situational information in the aftermath of mass disasters - a task dubbed crisis mapping - in an effort to augment emergency response [6, 7]. These systems, however, demand geotagged data to be placed on crisis maps, which in turn requires performing the geoparsing task on the majority of social media content. Explicit geographic information is not only needed in early warning [8, 9] and emergency response systems [10, 11, 12, 13, 14], but also in systems and applications for improving event promotion [15, 16], touristic planning [17, 18, 19], healthcare accessibility [20], and news aggregation [21]. Post-print of the article published in Decision Support Systems 136, 2020. Please refer to the published version: doi.org/10.1016/j.dss.2020.113346


Review for NeurIPS paper: Gibbs Sampling with People

Neural Information Processing Systems

This paper introduces a new method for eliciting human representations of perceptual concepts, such as what RGB values people think correspond to the color "sunset", or which auditory dimensions correspond to a given label. Rather than eliciting representations via guess-and-check (i.e., start with a dataset and then apply human-generated labels), this method (Gibbs Sampling with People, or GSP) enables inference to go in the other direction (i.e., start with labels, and then identify percepts that match those labels). GSP extends prior work (MCMC with People) to allow eliciting representations of much higher-dimensional stimuli. The reviewers unanimously praised this paper for tackling an important and relevant problem in cognitive science, for its breadth of empirical results, and for its novelty over prior work. R2 stated that the paper is "impressive in scale, scope, and results", R3 stated that it was "very relevant to the NeurIPS community and very novel", and R4 felt there could be "a potentially large impact of this work" with "substantial interest" amongst the NeurIPS community.



A New View on Planning in Online Reinforcement Learning

Roice, Kevin, Panahi, Parham Mohammad, Jordan, Scott M., White, Adam, White, Martha

arXiv.org Artificial Intelligence

This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
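The core idea of planning only in a small abstract space can be sketched in a few lines. The toy below (illustrative only; the subgoal graph and numbers are assumptions, not the paper's learned models) runs value iteration over a handful of subgoals, each edge carrying a local, subgoal-conditioned model of reward and accumulated discount, instead of rolling out a full learned transition model.

```python
import numpy as np

# Abstract subgoal graph. Each edge (i, j) stores a local,
# subgoal-conditioned model: (expected reward for traveling i -> j,
# discount accumulated over that option). In GSP these local models
# are learned; here they are hard-coded for illustration.
EDGES = {
    (0, 1): (-1.0, 0.9),
    (1, 2): (-1.0, 0.9),
    (2, 3): (10.0, 0.9),  # reaching subgoal 3 yields the goal reward
}
N = 4  # number of subgoals; subgoal 3 is terminal

# Background planning: value iteration in the small abstract space.
v = np.zeros(N)
for _ in range(50):
    for i in range(N):
        options = [r + g * v[j] for (s, j), (r, g) in EDGES.items() if s == i]
        if options:
            v[i] = max(options)

print(v)  # value propagates backward from the rewarding subgoal
```

A base learner could then use these subgoal values as bootstrap targets, so long-horizon credit assignment happens in the abstract space while the learner handles only local behavior.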


Artificial Intelligence for Multi-Unit Auction design

Khezr, Peyman, Taylor, Kendall

arXiv.org Artificial Intelligence

Understanding bidding behavior in multi-unit auctions remains an ongoing challenge for researchers. Despite their widespread use, theoretical insights into the bidding behavior, revenue ranking, and efficiency of commonly used multi-unit auctions are limited. This paper utilizes artificial intelligence, specifically reinforcement learning, as a model-free learning approach to simulate bidding in three prominent multi-unit auctions employed in practice. We introduce six algorithms that are suitable for learning and bidding in multi-unit auctions and compare them using an illustrative example. This paper underscores the significance of using artificial intelligence in auction design, particularly in enhancing the design of multi-unit auctions.
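A minimal example of model-free learning in a multi-unit auction can be sketched as follows. This is not one of the paper's six algorithms: it is a plain epsilon-greedy bandit bidder in a two-unit uniform-price auction against a fixed opponent (all values and bids are assumptions for the demo). It nonetheless recovers a well-known theoretical prediction, demand reduction: the learner shades one of its two bids to win a single unit at a low price rather than both units at a high price.

```python
import itertools
import random

random.seed(0)

# Illustrative setup: sealed-bid uniform-price auction with 2 identical
# units. The learner demands both units (marginal value 1.0 each) and
# faces a fixed opponent bidding OPP.
GRID = [0.0, 0.25, 0.5, 0.75, 1.0]
OPP = [0.9, 0.2]
UNITS = 2
VALUE = 1.0

def payoff(bids):
    # Top UNITS bids win; all winners pay the highest rejected bid.
    # Ties are broken in favor of the opponent.
    entries = [(b, 1) for b in OPP] + [(b, 0) for b in bids]
    entries.sort(key=lambda e: (e[0], e[1]), reverse=True)
    price = entries[UNITS][0]                       # highest losing bid
    won = sum(1 for b, who in entries[:UNITS] if who == 0)
    return won * (VALUE - price)

# Model-free learning: epsilon-greedy bandit over all bid pairs.
actions = list(itertools.product(GRID, repeat=2))
q = {a: 0.0 for a in actions}
n = {a: 0 for a in actions}
for t in range(5000):
    if t < len(actions):                  # try every action once first
        a = actions[t]
    elif random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(q, key=q.get)
    r = payoff(a)
    n[a] += 1
    q[a] += (r - q[a]) / n[a]             # incremental average

best = max(q, key=q.get)
print(best, round(q[best], 2))
```

Bidding truthfully on both units, (1.0, 1.0), wins both at price 0.9 for a payoff of 0.2; the learned strategy shades one bid to 0.0, winning one unit at price 0.2 for a payoff of 0.8.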