

A Probabilistic Model of Social Decision Making based on Reward Maximization

Neural Information Processing Systems

A fundamental problem in cognitive neuroscience is how humans make decisions, act, and behave in relation to other humans. Here we adopt the hypothesis that when we are in an interactive social setting, our brains perform Bayesian inference of the intentions and cooperativeness of others using probabilistic representations. We employ the framework of partially observable Markov decision processes (POMDPs) to model human decision making in a social context, focusing specifically on the volunteer's dilemma in a version of the classic Public Goods Game. We show that the POMDP model explains both the behavior of subjects and the neural activity recorded using fMRI during the game. The decisions of subjects can be modeled across all trials using two interpretable parameters. Furthermore, the expected reward predicted by the model for each subject was correlated with the activation of brain areas related to reward expectation in social interactions. Our results suggest a probabilistic basis for human social decision making within the framework of expected reward maximization.
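The core loop the abstract describes, a Bayesian update of a belief about the partner's cooperativeness followed by expected-reward action selection, can be sketched minimally. This is a hypothetical simplification, not the paper's model: it assumes a binary cooperative/uncooperative latent state and Bernoulli contribution probabilities (`p_contribute_if_coop`, `p_contribute_if_not`, and the reward arguments are all illustrative names).

```python
def update_belief(prior_coop, p_contribute_if_coop, p_contribute_if_not,
                  observed_contribution):
    """Posterior probability that the partner is cooperative,
    given whether they contributed on the last trial (Bayes' rule)."""
    if observed_contribution:
        like_coop, like_not = p_contribute_if_coop, p_contribute_if_not
    else:
        like_coop, like_not = 1 - p_contribute_if_coop, 1 - p_contribute_if_not
    evidence = prior_coop * like_coop + (1 - prior_coop) * like_not
    return prior_coop * like_coop / evidence

def choose_action(belief, reward_volunteer,
                  reward_free_ride_if_coop, reward_free_ride_if_not):
    """Pick the action with higher expected reward under the current belief."""
    ev_free_ride = (belief * reward_free_ride_if_coop
                    + (1 - belief) * reward_free_ride_if_not)
    return "volunteer" if reward_volunteer >= ev_free_ride else "free-ride"
```

In a full POMDP the agent would also plan over future belief states; the sketch shows only the one-step expected-reward comparison.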


Dialog-based Language Learning

Neural Information Processing Systems

A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not representative of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of Weston et al. (2015) and large-scale question answering from Dodge et al. (2015). We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.
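The predictive-lookahead idea can be sketched at a high level: rather than scoring answers directly, the agent scores each candidate by how favorable the teacher's predicted reply would be. This is a hypothetical skeleton, not the paper's architecture; `predict_teacher_reply` and `reply_score` stand in for learned models and are passed in as callables here.

```python
def lookahead_answer(question, candidates, predict_teacher_reply, reply_score):
    """Choose the candidate answer whose predicted teacher response
    scores highest, i.e. select by one-step forward prediction."""
    return max(candidates,
               key=lambda a: reply_score(predict_teacher_reply(question, a)))
```

Note that no explicit reward enters the selection: the supervision signal lives entirely in the (predicted) textual response, matching the abstract's observation that reward-based supervision is not required.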


A Non-parametric Learning Method for Confidently Estimating Patient's Clinical State and Dynamics

Neural Information Processing Systems

Estimating a patient's clinical state from multiple concurrent physiological streams plays an important role in determining if a therapeutic intervention is necessary and for triaging patients in the hospital. In this paper we construct a non-parametric learning algorithm to estimate the clinical state of a patient. The algorithm addresses several known challenges with clinical state estimation, such as eliminating bias introduced by therapeutic intervention censoring, increasing the timeliness of state estimation while ensuring sufficient accuracy, and detecting anomalous clinical states. These benefits are obtained by combining the tools of non-parametric Bayesian inference, permutation testing, and generalizations of the empirical Bernstein inequality. The algorithm is validated using real-world data from a cancer ward in a large academic hospital.
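One of the listed tools, the empirical Bernstein inequality, gives a variance-sensitive confidence radius around a sample mean, which is what lets an estimator trade off timeliness against accuracy. A minimal sketch, assuming the standard Maurer–Pontil form of the bound (the paper uses generalizations of it, which this does not reproduce):

```python
import math

def empirical_bernstein_radius(samples, value_range, delta):
    """Half-width of a confidence interval around the sample mean via the
    empirical Bernstein inequality: the bound shrinks with the *observed*
    sample variance rather than the worst-case range alone."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    log_term = math.log(2.0 / delta)
    return (math.sqrt(2.0 * var * log_term / n)
            + 7.0 * value_range * log_term / (3.0 * (n - 1)))
```

Because the leading term depends on the empirical variance, low-variance vital-sign streams yield tight intervals after few observations, supporting earlier confident state estimates.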


Fast Distributed Submodular Cover: Public-Private Data Summarization

Neural Information Processing Systems

In this paper, we introduce the public-private framework of data summarization, motivated by privacy concerns in personalized recommender systems and online social services. Such systems usually have access to massive data generated by a large pool of users. A major fraction of the data is public and is visible to (and can be used by) all users. However, each user can also contribute some private data that should not be shared with other users, to ensure her privacy. The goal is to provide a succinct summary of the massive dataset, ideally as small as possible, from which a customized summary can be built for each user, containing elements from the public data (for diversity) and the user's private data (for personalization). To formalize the above challenge, we assume that the scoring function according to which a user evaluates the utility of her summary satisfies submodularity, a widely used notion in data summarization applications.
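Submodular cover, the problem named in the title, is commonly solved by a greedy rule: repeatedly add the element with the largest marginal gain until the target utility is reached. The sketch below illustrates this for a simple set-coverage utility (which is monotone submodular); it is a generic greedy cover, not the paper's distributed or public-private algorithm.

```python
def greedy_submodular_cover(element_covers, ground_set, target):
    """Greedy cover for a coverage utility: pick the element with the
    largest marginal gain until `target` items are covered.
    `element_covers` maps each element to the set of items it covers."""
    covered, summary = set(), []
    while len(covered) < target:
        best = max(ground_set, key=lambda e: len(element_covers[e] - covered))
        if not element_covers[best] - covered:
            break  # no element adds coverage; target unreachable
        summary.append(best)
        covered |= element_covers[best]
    return summary, covered
```

The greedy rule achieves a logarithmic approximation for submodular cover; the paper's contribution is making such summarization fast and distributed while keeping private elements out of other users' summaries.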


What was Doge? How Elon Musk tried to gamify government

The Guardian

In 2025, when Elon Musk joined the government as the de facto head of something called the "department of government efficiency", he declared that governments were poorly configured "big dumb machines". To the senator Ted Cruz, he explained that "the only way to reconcile the databases and get rid of waste and fraud is to actually look at the computers". Muskism came to Washington soaked in memes, adolescent boasts and sadistic victory dances over mass firings. Leading a team of teenage coders and mid-level managers drawn from his suite of companies, Musk aimed to enter the codebase and rewrite regulations and budget lines from within. He would drag the paper-pushing bureaucracy kicking and screaming into the digital 21st century, scanning the contents of cavernous rooms of filing cabinets and feeding the data into a single interoperable system. The undertaking combined features of private equity-led restructuring with startup management, shot through with the sensibility of gaming and rightwing culture war. To succeed, he would need "God mode", an overview of the whole. If the mandate of Doge was to "[modernise] federal technology and software to maximise governmental efficiency and productivity", in the words of the executive order that launched the initiative on 20 January 2025, the reality was a strengthening of the state's surveillance capacities. Over time, Musk had become convinced that the real bugs in the code were people, especially the non-white illegal immigrants whom he saw as pawns in a liberal scheme to corrupt democracy and beneficiaries of what he called "suicidal empathy". He understood empathy itself in coding terms.


Tennessee minors sue Musk's xAI, alleging Grok generated sexual images of them

The Japan Times

Three Tennessee plaintiffs, including two minors, sued Elon Musk's xAI on Monday, alleging that it knowingly designed its Grok image generator to let people create sexually explicit content by using real photos of others. The lawsuit, filed in the San Jose, California federal court, is seeking class-action status for people in the United States who were reasonably identifiable in sexualized images or videos generated by Grok based on real images of themselves. The artificial intelligence company did not immediately respond to a request for comment. After an outcry over sexually explicit content generated by the chatbot, xAI said in January that it had blocked all users from editing images of real people in revealing clothing and from generating images of people in revealing clothing in jurisdictions where it's illegal. Governments and regulators around the world have also since launched probes, imposed bans and demanded safeguards in a growing push to curb illegal and offensive material.


Equality of Opportunity in Classification: A Causal Approach

Neural Information Processing Systems

The Equalized Odds (for short, EO) is one of the most popular measures of discrimination used in the supervised learning setting. It ascertains fairness through the balance of the misclassification rates (false positive and false negative) across the protected groups -- e.g., in the context of law enforcement, an African-American defendant who would not commit a future crime will have an equal opportunity of being released, compared to a non-recidivating Caucasian defendant. Despite this noble goal, it has been acknowledged in the literature that statistical tests based on the EO are oblivious to the underlying causal mechanisms that generated the disparity in the first place (Hardt et al. 2016). This leads to a critical disconnect between statistical measures readable from the data and the meaning of discrimination in the legal system, where compelling evidence that the observed disparity is tied to a specific causal process deemed unfair by society is required to characterize discrimination. The goal of this paper is to develop a principled approach to connect the statistical disparities characterized by the EO and the underlying, elusive, and frequently unobserved causal mechanisms that generated such inequality. We start by introducing a new family of counterfactual measures that allows one to explain the misclassification disparities in terms of the underlying mechanisms in an arbitrary, non-parametric structural causal model. This will, in turn, allow legal and data analysts to interpret currently deployed classifiers through a causal lens, linking the statistical disparities found in the data to the corresponding causal processes. Leveraging the new family of counterfactual measures, we develop a learning procedure to construct a classifier that is statistically efficient, interpretable, and compatible with the basic human intuition of fairness. We demonstrate our results through experiments in both real (COMPAS) and synthetic datasets.
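The statistical quantity the abstract starts from, the EO gap, is simply the difference in false-positive and false-negative rates between protected groups. A minimal sketch of that measurement (this is the standard definition from Hardt et al. 2016, not the paper's counterfactual decomposition; two groups labeled 0 and 1 are assumed):

```python
def equalized_odds_gaps(y_true, y_pred, group):
    """Absolute differences in false-positive rate and false-negative
    rate between two protected groups (coded 0 and 1)."""
    def rates(g):
        fp = fn = neg = pos = 0
        for yt, yp, gr in zip(y_true, y_pred, group):
            if gr != g:
                continue
            if yt == 0:
                neg += 1
                fp += (yp == 1)   # negative instance predicted positive
            else:
                pos += 1
                fn += (yp == 0)   # positive instance predicted negative
        return fp / neg, fn / pos
    fpr0, fnr0 = rates(0)
    fpr1, fnr1 = rates(1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)
```

The paper's point is that two classifiers with identical gaps under this test can arise from very different causal mechanisms, which is what the counterfactual measures are designed to distinguish.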


Fast deep reinforcement learning using online adjustments from the past

Neural Information Processing Systems

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by prioritised sweeping over experience tuples from the replay buffer near the current state. EVA combines a number of recent ideas for incorporating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and Atari games.
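The "shift" the abstract describes can be sketched as a convex blend of the network's Q-value with an episodic estimate computed from a retrieved replay trajectory. This is a deliberate simplification: the paper uses prioritised sweeping over nearby tuples, whereas the sketch below substitutes a plain bootstrapped discounted return, and `lam` is an illustrative mixing weight.

```python
def trace_value(rewards, gamma, bootstrap):
    """Discounted return of a retrieved replay trajectory, bootstrapped
    with the network's value estimate at the trajectory's end."""
    v = bootstrap
    for r in reversed(rewards):
        v = r + gamma * v
    return v

def eva_q(q_network, q_trace, lam=0.5):
    """EVA-style value: shift the parametric estimate toward the
    non-parametric, episodic one."""
    return lam * q_network + (1 - lam) * q_trace
```

Because `q_trace` is recomputed from the current replay buffer at decision time, the adjustment is "ephemeral": it adapts immediately to recent experience without waiting for gradient updates to propagate into the network.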


U.S. court rules against South Korean gaming firm over AI-hatched takeover plan

The Japan Times

WILMINGTON, DELAWARE - A Delaware judge on Monday ordered that South Korean game developer Krafton reinstate the head of one of its video game studios, ruling he had been improperly removed as part of a takeover plan hatched by ChatGPT. Krafton CEO Changhan Kim had largely followed the advice of artificial intelligence tool ChatGPT during a $250 million dispute with the leaders of the Subnautica game maker Unknown Worlds Entertainment, which Krafton had acquired, according to the ruling by Vice Chancellor Lori Will of the Court of Chancery in Delaware. Businesses and governments are scrambling for new ways to use AI, and the technology has been blamed for mass layoffs, fears of autonomous weapons and concerns about civil rights. Companies caught in takeover-related legal battles often spend millions of dollars on teams of attorneys and advisers from top-flight Wall Street firms.


Hybrid Retrieval-Generation Reinforced Agent for Medical Image Report Generation

Neural Information Processing Systems

Generating long and coherent reports to describe medical images poses challenges for bridging visual patterns with informative human linguistic descriptions. We propose a novel Hybrid Retrieval-Generation Reinforced Agent (HRGR-Agent) which reconciles traditional retrieval-based approaches populated with human prior knowledge with modern learning-based approaches to achieve structured, robust, and diverse report generation. HRGR-Agent employs a hierarchical decision-making procedure. For each sentence, a high-level retrieval policy module chooses to either retrieve a template sentence from an off-the-shelf template database, or invoke a low-level generation module to generate a new sentence. HRGR-Agent is updated via reinforcement learning, guided by sentence-level and word-level rewards. Experiments show that our approach achieves state-of-the-art results on two medical report datasets, generating well-balanced structured sentences with robust coverage of heterogeneous medical report contents. In addition, our model achieves the highest detection precision of medical abnormality terminology, and improved human evaluation performance.
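The hierarchical decision loop described above, retrieve a template or invoke generation for each sentence, can be sketched schematically. This is a hypothetical skeleton only: in the actual agent the retrieval policy and generation module are learned networks trained with RL, whereas here they are stand-in callables with illustrative names.

```python
def generate_report(topics, retrieval_policy, template_db, generate_sentence):
    """Hierarchical retrieve-or-generate loop: for each sentence topic,
    the high-level policy either returns a template index or None,
    the latter deferring to the low-level generation module."""
    report = []
    for topic in topics:
        choice = retrieval_policy(topic)             # template index, or None
        if choice is not None:
            report.append(template_db[choice])       # retrieve template sentence
        else:
            report.append(generate_sentence(topic))  # generate a new sentence
    return " ".join(report)
```

Splitting the decision this way lets frequent, formulaic findings reuse vetted template sentences while rare or abnormal findings fall through to free generation, which is the hybrid behavior the abstract claims.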