
Collaborating Authors

Williamson




Dyslexia and the Reading Wars

The New Yorker

Proven methods for teaching the readers who struggle most have been known for decades. Why do we often fail to use them? "There's a window of opportunity to intervene," Mark Seidenberg, a cognitive neuroscientist, said. "You don't want to let that go." In 2024, my niece Caroline received a Ph.D. in gravitational-wave physics. Her research interests include "the impact of model inaccuracies on biases in parameters recovered from gravitational wave data" and "Petrov type, principal null directions, and Killing tensors of slowly rotating black holes in quadratic gravity." I watched a little of her dissertation defense, on Zoom, and was lost as soon as she'd finished introducing herself. She and her husband now live in Italy, where she has a postdoctoral appointment. Caroline's academic achievements seem especially impressive if you know that until third grade she could barely read: to her, words on a page looked like a pulsing mass. She attended a private school in Connecticut, and there was a set time every day when students selected books to read on their own. "I can't remember how long that lasted, but it felt endless," she told me. She hid her disability by turning pages when her classmates did, and by volunteering to draw illustrations during group story-writing projects. One day, she told her grandmother that she could sound out individual letters but when she got to "the end of a row" she couldn't remember what had come before. A psychologist eventually identified her condition as dyslexia. Fluent readers sometimes think of dyslexia as a tendency to put letters in the wrong order or facing the wrong direction, but it's more complicated than that.


Indictment of ex-Newsom aide hints at feds' probe into state's earlier investigation of video game giant

Los Angeles Times

Dana Williamson, Gov. Gavin Newsom's former chief of staff, leaves the Robert T. Matsui United States Courthouse in Sacramento after being arrested in a federal public corruption probe involving multiple counts of bank and wire fraud on Wednesday. Newsom's former chief of staff and two political operatives face federal corruption charges for fraud, including misusing campaign funds for luxury purchases.


Active Inference and Human--Computer Interaction

Murray-Smith, Roderick, Williamson, John H., Stein, Sebastian

arXiv.org Artificial Intelligence

Active Inference is a closed-loop computational framework for understanding behaviour, based on agents with internal probabilistic generative models that encode their beliefs about how hidden states in their environment cause their sensations. We review Active Inference and how it can be applied to model the human-computer interaction loop. Active Inference provides a coherent framework for managing generative models of humans, their environments, sensors, and interface components. It informs offline design and supports real-time, online adaptation. It provides model-based explanations for behaviours observed in HCI, and new tools to measure important concepts such as agency and engagement. We discuss how Active Inference offers a new basis for a theory of interaction in HCI, tools for the design of modern, complex sensor-based systems, and integration of artificial intelligence technologies, enabling such systems to cope with diversity in human users and contexts. We also discuss the practical challenges of implementing Active Inference-based systems.
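The core loop the abstract describes, an internal generative model whose beliefs about hidden states are updated from sensations, can be sketched in miniature. This is an illustrative toy, not the paper's model: the states, observations, and probabilities below are all assumptions.

```python
# Toy sketch (illustrative assumptions, not the paper's model): an agent
# holds a discrete generative model over a hidden state -- which on-screen
# target the user intends -- and updates its beliefs from a noisy sensor
# reading via Bayes' rule.

HIDDEN_STATES = ["target_A", "target_B"]

# Prior beliefs over the user's intended target.
prior = {"target_A": 0.5, "target_B": 0.5}

# Generative model: P(sensor reading | hidden state).
# The "sensor" here is a coarse cursor-region observation.
likelihood = {
    ("near_A", "target_A"): 0.8, ("near_A", "target_B"): 0.3,
    ("near_B", "target_A"): 0.2, ("near_B", "target_B"): 0.7,
}

def update_beliefs(prior, observation):
    """One Bayesian update: posterior is proportional to likelihood x prior."""
    unnorm = {s: likelihood[(observation, s)] * prior[s] for s in HIDDEN_STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Observing the cursor near target A shifts belief toward target A.
posterior = update_beliefs(prior, "near_A")
```

A full Active Inference agent would also select actions to minimize expected free energy under this model; the sketch shows only the perception half of the loop.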


On Calibration in Multi-Distribution Learning

Verma, Rajeev, Fischer, Volker, Nalisnick, Eric

arXiv.org Artificial Intelligence

Modern challenges of robustness, fairness, and decision-making in machine learning have led to the formulation of multi-distribution learning (MDL) frameworks, in which a predictor is optimized across multiple distributions. We study the calibration properties of MDL to better understand how the predictor performs uniformly across the multiple distributions. Through classical results on decomposing proper scoring losses, we first derive the Bayes-optimal rule for MDL, demonstrating that it maximizes the generalized entropy of the associated loss function. Our analysis reveals that while this approach ensures minimal worst-case loss, it can lead to non-uniform calibration errors across the multiple distributions, and that there is an inherent calibration-refinement trade-off even at Bayes optimality. Our results highlight a critical limitation: despite the promise of MDL, one must use caution when designing predictors tailored to multiple distributions so as to minimize disparity.
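A hedged numerical illustration of the abstract's point (not the paper's construction): optimize a single constant probabilistic predictor for worst-case log loss over two Bernoulli distributions, and the minimax solution lands between the two true rates, leaving nonzero and unequal calibration errors on each distribution.

```python
import math

# Illustrative sketch: one constant predictor q is tuned for the WORST-CASE
# expected log loss over two Bernoulli distributions with (assumed) true
# positive rates p1 and p2. The minimax q sits between p1 and p2, so it is
# miscalibrated on BOTH distributions, and by different amounts.

p1, p2 = 0.2, 0.7  # assumed true rates for the two distributions

def expected_log_loss(p, q):
    """Expected log loss of constant forecast q under Bernoulli(p)."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

def worst_case_loss(q):
    return max(expected_log_loss(p1, q), expected_log_loss(p2, q))

# Grid search for the minimax predictor.
grid = [i / 1000 for i in range(1, 1000)]
q_star = min(grid, key=worst_case_loss)

# Per-distribution calibration errors of the minimax predictor:
# nonzero on both distributions, and non-uniform across them.
calib_errors = (abs(q_star - p1), abs(q_star - p2))
```

With these rates the two expected losses cross at q = 0.5, so the worst-case-optimal forecast is 0.5, giving calibration errors of 0.3 and 0.2 on the two distributions.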


DNA links California man to 1979 cold case murder, years after passing lie detector

FOX News

Riverside, California, investigators linked a man's DNA to the 1979 cold-case murder of a teenage girl, years after the same man passed a lie-detector test about the crime, according to authorities. The body of 17-year-old Esther Gonzalez was found dumped in packed snow off Highway 243 in Banning, California, in 1979, and after an investigation, detectives determined the teen had been raped and bludgeoned to death. Last week, the Riverside County District Attorney's Office said in a press release that the case had been solved using forensic genealogy, over 45 years later. On Nov. 20, the Riverside County Regional Cold Case Homicide Team identified Lewis Randolph "Randy" Williamson, who died in 2014, as the killer. Gonzalez was attacked and murdered on Feb. 9, 1979, as she was walking to her sister's house in Banning from her parents' house in Beaumont.


Scoring Rules and Calibration for Imprecise Probabilities

Fröhlich, Christian, Williamson, Robert C.

arXiv.org Artificial Intelligence

What does it mean to say that, for example, the probability for rain tomorrow is between 20% and 30%? The theory for the evaluation of precise probabilistic forecasts is well-developed and is grounded in the key concepts of proper scoring rules and calibration. For the case of imprecise probabilistic forecasts (sets of probabilities), such theory is still lacking. In this work, we therefore generalize proper scoring rules and calibration to the imprecise case. We develop these concepts as relative to data models and decision problems. As a consequence, the imprecision is embedded in a clear context. We establish a close link to the paradigm of (group) distributional robustness and in doing so provide new insights for it. We argue that proper scoring rules and calibration serve two distinct goals, which are aligned in the precise case, but intriguingly are not necessarily aligned in the imprecise case. The concept of decision-theoretic entropy plays a key role for both goals. Finally, we demonstrate the theoretical insights in machine learning practice, in particular we illustrate subtle pitfalls relating to the choice of loss function in distributional robustness.
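The precise case the abstract builds on can be shown concretely: a proper scoring rule, such as the Brier score, makes honest reporting optimal, so a forecaster minimizes expected penalty by reporting the true probability. A minimal sketch, with an illustrative true probability:

```python
# Minimal sketch of propriety in the *precise* case: under the Brier
# score, the expected penalty is minimized by reporting the true
# probability. The value of true_p is an illustrative assumption.

def brier(forecast, outcome):
    """Brier score for a binary event: squared error of the forecast."""
    return (forecast - outcome) ** 2

def expected_brier(true_p, forecast):
    """Expected score when the event occurs with probability true_p."""
    return true_p * brier(forecast, 1) + (1 - true_p) * brier(forecast, 0)

true_p = 0.25  # assumed true probability of rain tomorrow
grid = [i / 100 for i in range(101)]
best_report = min(grid, key=lambda q: expected_brier(true_p, q))
# Propriety: the honest report true_p minimizes the expected score.
```

The paper's question is what replaces this guarantee when the forecast is a *set* of probabilities, say "between 20% and 30%", rather than a single number.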


Mamba2MIL: State Space Duality Based Multiple Instance Learning for Computational Pathology

Zhang, Yuqi, Zhang, Xiaoqian, Wang, Jiakai, Yang, Yuancheng, Peng, Taiying, Tong, Chao

arXiv.org Artificial Intelligence

Computational pathology (CPath) has significantly advanced the clinical practice of pathology. Despite this progress, Multiple Instance Learning (MIL), a promising paradigm within CPath, continues to face challenges, particularly incomplete information utilization. Existing frameworks, such as those based on Convolutional Neural Networks (CNNs), attention, and selective-scan state space models (SSMs), lack the flexibility and scalability to fuse diverse features effectively. Additionally, current approaches do not adequately exploit order-related and order-independent features, resulting in suboptimal utilization of sequence information. To address these limitations, we propose a novel MIL framework called Mamba2MIL. Our framework utilizes the state space duality model (SSD) to model long sequences of patches of whole-slide images (WSIs), which, combined with weighted feature selection, supports the fusion of more branching features and can be extended according to specific application needs. Moreover, we introduce a sequence transformation method tailored to varying WSI sizes, which enhances sequence-independent features while preserving local sequence information, thereby improving sequence-information utilization. Extensive experiments across multiple datasets demonstrate that Mamba2MIL surpasses state-of-the-art MIL methods, achieving improvements in nearly all performance metrics. Specifically, on the NSCLC dataset, Mamba2MIL achieves a binary tumor-classification AUC of 0.9533 and an accuracy of 0.8794. On the BRACS dataset, it achieves a multiclass-classification AUC of 0.7986 and an accuracy of 0.4981. The code is available at https://github.com/YuqiZhang-Buaa/Mamba2MIL.
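Setting the SSD architecture itself aside, the generic MIL setting the abstract builds on can be sketched: a whole-slide image is a "bag" of patch features, and an order-independent pooling maps the bag to one slide-level prediction. This is a hedged toy of the paradigm, not of Mamba2MIL; the feature dimension and classifier weights are illustrative assumptions.

```python
import math
import random

# Hedged sketch of generic MIL (not the Mamba2MIL architecture): a WSI is
# a bag of patch feature vectors, and a permutation-invariant pooling
# (here, mean pooling) yields one bag-level prediction.

def mean_pool(bag):
    """Order-independent aggregation: average the patch features."""
    dim = len(bag[0])
    return [sum(patch[d] for patch in bag) / len(bag) for d in range(dim)]

def bag_predict(bag, weights, bias):
    """Linear classifier with sigmoid on the pooled bag representation."""
    pooled = mean_pool(bag)
    logit = sum(w * x for w, x in zip(weights, pooled)) + bias
    return 1 / (1 + math.exp(-logit))

random.seed(0)
bag = [[random.gauss(0, 1) for _ in range(4)] for _ in range(16)]  # 16 patches
score = bag_predict(bag, weights=[0.5, -0.2, 0.1, 0.3], bias=0.0)

# Permutation invariance: reordering patches leaves the prediction
# (essentially) unchanged -- which is also why pure pooling discards the
# order-related sequence information the paper tries to recover.
shuffled = bag[::-1]
```

The paper's point is that such order-independent pooling and order-aware sequence models capture complementary information, which its sequence transformations aim to combine.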


Learning with Symmetric Label Noise: The Importance of Being Unhinged

van Rooyen, Brendan, Menon, Aditya Krishna, Williamson, Robert C. (The Australian National University)

Neural Information Processing Systems

Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2010] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2010] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss.
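The modification the abstract describes is simple enough to state directly: the hinge loss clamps at zero, while the unhinged loss drops the clamp and is therefore linear and negatively unbounded. A minimal sketch:

```python
# Sketch of the loss described in the abstract: the hinge loss clamps at
# zero, while the "unhinged" loss removes the clamp and so is linear in
# the margin and negatively unbounded.

def hinge(margin):
    """Standard hinge loss on the classification margin y * f(x)."""
    return max(0.0, 1.0 - margin)

def unhinged(margin):
    """Hinge loss without the clamp: can go arbitrarily negative."""
    return 1.0 - margin

margins = [-2.0, 0.0, 0.5, 1.0, 3.0]
pairs = [(hinge(m), unhinged(m)) for m in margins]
# The two losses agree for margins <= 1 and diverge beyond, where the
# unhinged loss keeps decreasing for confidently correct predictions.
```

Being linear, the unhinged loss is (weakly) convex yet escapes the Long and Servedio construction precisely because it is unbounded below.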