Flu Is Relentless. Crispr Might Be Able to Shut It Down

WIRED

Innovative research into the gene-editing tool targets influenza's ability to replicate, stopping it in its tracks. As he addressed an audience of virologists from China, Australia, and Singapore at October's Pandemic Research Alliance Symposium, Wei Zhao introduced an eye-catching idea. The gene-editing technology Crispr is best known for delivering groundbreaking new therapies for rare diseases, tweaking or knocking out rogue genes in conditions ranging from sickle cell disease to hemophilia. But Zhao and his colleagues at Melbourne's Peter Doherty Institute for Infection and Immunity have envisioned a new application. They believe Crispr could be tailored to create a next-generation treatment for influenza, whether that's the seasonal strains that plague both the northern and southern hemispheres every year, or the worrisome new variants in birds and other wildlife that might trigger the next pandemic.


The science of human touch – and why it's so hard to replicate in robots

Robohub

Robots now see the world with an ease that once belonged only to science fiction. They can recognise objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and the limits appear instantly. As a researcher in soft robotics working on artificial skin and sensorised bodies, I've found that trying to give robots a sense of touch forces us to confront just how astonishingly sophisticated human touch really is. My work began with the seemingly simple question of how robots might sense the world through their bodies.


Understanding temperature tuning in energy-based models

Fields, Peter W, Ngampruetikorn, Vudtiwat, Schwab, David J, Palmer, Stephanie E

arXiv.org Artificial Intelligence

Energy-based models trained on evolutionary data can now generate novel protein sequences with custom functions [38]. A crucial, yet poorly understood, step in these successes is the use of an artificially low sampling "temperature" to produce functional sequences from the trained model. This adjustment is often the deciding factor between generating functional enzymes and inert polypeptides. A fundamental question arises as to what necessitates temperature tuning and what it reveals about the space of functional proteins and the limits of the models trained on finite data. Temperature tuning is a broadly used heuristic across machine learning contexts, used to improve training [16, 33, 34], generalization/generative performance [14, 45, 47, 48], and energy-landscape dynamics for memory retrieval [35]. It follows the basic intuition that one can navigate the trade-off between fidelity (producing believable, high-probability outputs at low temperature) and diversity (exploring a wide range of novel outputs at high temperature). Despite its widespread use, this practice lacks a principled, quantitative explanation and has not been systematically connected to known issues of the fitting procedure, particularly fundamental limits in the learning process such as biases introduced by training on finite data [5, 9, 10, 21, 22, 41].
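The temperature knob the abstract describes has a simple mechanical core: sampling from p(x) ∝ exp(-E(x)/T), where shrinking T sharpens the distribution around low-energy (high-fidelity) states at the cost of diversity. A minimal numerical sketch with toy energies, not a trained model:

```python
import numpy as np

def boltzmann_probs(energies, temperature=1.0):
    """Boltzmann distribution p(x) ∝ exp(-E(x)/T) over discrete states."""
    logits = -np.asarray(energies, dtype=float) / temperature
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def entropy(p):
    """Shannon entropy, a proxy for sample diversity."""
    return -np.sum(p * np.log(p + 1e-12))

# Toy energy landscape: 100 candidate sequences, a few with low energy.
rng = np.random.default_rng(0)
energies = rng.uniform(0.0, 5.0, size=100)

p_model = boltzmann_probs(energies, temperature=1.0)  # model's native temperature
p_tuned = boltzmann_probs(energies, temperature=0.2)  # artificially low T

# Lowering T concentrates probability on low-energy states:
# fidelity goes up, diversity (entropy) goes down.
print(entropy(p_model), entropy(p_tuned))
```

The same fidelity/diversity trade-off the abstract formalizes is visible here: at T = 0.2 nearly all probability mass sits on the lowest-energy states.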


Detecting Perspective Shifts in Multi-agent Systems

Bridgeford, Eric, Helm, Hayden

arXiv.org Artificial Intelligence

Generative models augmented with external tools and update mechanisms (or "agents") have demonstrated capabilities beyond intelligent prompting of base models. As agent use proliferates, dynamic multi-agent systems have naturally emerged. Recent work has investigated the theoretical and empirical properties of low-dimensional representations of agents based on query responses at a single time point. This paper introduces the Temporal Data Kernel Perspective Space (TDKPS), which jointly embeds agents across time, and proposes several novel hypothesis tests for detecting behavioral change at the agent- and group-level in black-box multi-agent systems. We characterize the empirical properties of our proposed tests, including their sensitivity to key hyperparameters, in simulations motivated by a multi-agent system of evolving digital personas. Finally, we demonstrate via natural experiment that our proposed tests detect changes that correlate sensitively, specifically, and significantly with a real exogenous event. As far as we are aware, TDKPS is the first principled framework for monitoring behavioral dynamics in black-box multi-agent systems -- a critical capability as generative agent deployment continues to scale.
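The joint-embedding idea can be illustrated with a generic sketch. Everything below is a stand-in, not the paper's TDKPS construction: PCA replaces the data-kernel embedding, and the agents, query battery, and drift are synthetic. Each agent answers a fixed set of queries at two time points; embedding all (agent, time) response vectors in one shared space makes a behavioral shift visible as an unusually large temporal displacement.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_queries = 20, 50

# Response vectors to a fixed query battery at two time points;
# an exogenous event perturbs agent 0 between snapshots.
R0 = rng.standard_normal((n_agents, n_queries))
R1 = R0 + 0.05 * rng.standard_normal((n_agents, n_queries))
R1[0] += 2.0

def joint_embed(stacked, dim=3):
    """Project all (agent, time) response vectors into one shared
    low-dimensional space via PCA (SVD on centered data)."""
    X = stacked - stacked.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:dim].T

emb = joint_embed(np.vstack([R0, R1]))
e0, e1 = emb[:n_agents], emb[n_agents:]

# Temporal displacement per agent in the joint space; a value extreme
# relative to peers flags a candidate perspective shift.
shift = np.linalg.norm(e1 - e0, axis=1)
print(shift.argmax())
```

The paper's contribution is the principled version of this comparison: calibrated hypothesis tests at the agent and group level rather than an informal "largest displacement" rule.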


Multiscale guidance of protein structure prediction with heterogeneous cryo-EM data

Raghu, Rishwanth, Levy, Axel, Wetzstein, Gordon, Zhong, Ellen D.

arXiv.org Artificial Intelligence

Protein structure prediction models are now capable of generating accurate 3D structural hypotheses from sequence alone. However, they routinely fail to capture the conformational diversity of dynamic biomolecular complexes, often requiring heuristic MSA subsampling approaches for generating alternative states. In parallel, cryo-electron microscopy (cryo-EM) has emerged as a powerful tool for imaging near-native structural heterogeneity, but is challenged by arduous pipelines to transform raw experimental data into atomic models. Here, we bridge the gap between these modalities, combining cryo-EM density maps with the rich sequence and biophysical priors learned by protein structure prediction models. Our method, CryoBoltz, guides the sampling trajectory of a pretrained biomolecular structure prediction model using both global and local structural constraints derived from density maps, driving predictions towards conformational states consistent with the experimental data. We demonstrate that this flexible yet powerful inference-time approach allows us to build atomic models into heterogeneous cryo-EM maps across a variety of dynamic biomolecular systems including transporters and antibodies. Code is available at https://github.com/ml-struct-bio/cryoboltz.
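Inference-time guidance of the kind described here, with the abstract's density-map constraints swapped for a toy loss, can be caricatured in one dimension: each sampling update combines the model's own score with the gradient of an external data-agreement term. The score and guide below are invented stand-ins, not CryoBoltz's actual terms.

```python
def guided_step(x, score, guide_grad, step=0.05, weight=2.0):
    """One guided update: follow the model's score while also descending
    an external data-agreement loss (stand-in for density-map agreement)."""
    return x + step * (score(x) - weight * guide_grad(x))

# Toy setup: the model prior pulls samples toward 0, while the
# "experimental" term pulls them toward a target conformation at 3.
score = lambda x: -x             # gradient of log N(0, 1)
guide_grad = lambda x: x - 3.0   # gradient of (x - 3)^2 / 2

x = 10.0
for _ in range(500):
    x = guided_step(x, score, guide_grad)

# The stationary point balances the two terms: -x - 2(x - 3) = 0, i.e. x = 2.
print(x)
```

The guidance weight plays the same role as in the abstract: it sets how hard predictions are driven toward states consistent with the experimental data versus the model's prior.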


When Features Beat Noise: A Feature Selection Technique Through Noise-Based Hypothesis Testing

Sinha, Mousam, Ghosh, Tirtha Sarathi, Pal, Ridam

arXiv.org Machine Learning

Feature selection has remained a daunting challenge in machine learning and artificial intelligence, where increasingly complex, high-dimensional datasets demand principled strategies for isolating the most informative predictors. Despite widespread adoption, many established techniques suffer from notable limitations: some incur substantial computational cost, while others offer no statistically grounded stopping criterion and no way to assess the significance of their importance scores. A common heuristic introduces multiple random noise features and retains all predictors ranked above the strongest noise feature. Although intuitive, this strategy lacks theoretical justification. This paper proposes a novel feature selection method that addresses these limitations. Our approach introduces multiple random noise features and evaluates each feature's importance against the maximum importance value among these noise features, incorporating a non-parametric bootstrap-based hypothesis testing framework to establish a solid theoretical foundation. We establish the conceptual soundness of our approach through statistical derivations that articulate the principles guiding the design of our algorithm. To evaluate its reliability, we generated simulated datasets under controlled statistical settings and benchmarked performance against Boruta and Knockoff-based methods, observing consistently stronger recovery of meaningful signal. As a demonstration of practical utility, we applied the technique across diverse real-world datasets, where it surpassed feature selection techniques including Boruta, RFE, and Extra Trees. The method thus emerges as a robust algorithm for principled feature selection, enabling the distillation of informative predictors that support reliable inference, enhanced predictive performance, and efficient computation.
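The noise-benchmark heuristic the abstract builds on can be sketched in a few lines. Absolute correlation stands in here for any model-derived importance score, and the paper's bootstrap hypothesis test is not reproduced; this shows only the baseline "beat the strongest noise feature" rule that the paper puts on firmer statistical footing.

```python
import numpy as np

def noise_benchmark_select(X, y, n_noise=20, seed=0):
    """Append random noise features, score every column, and keep the real
    features whose importance beats the strongest noise feature.
    Importance = absolute Pearson correlation with the target (a stand-in
    for any model-derived importance score)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X_aug = np.hstack([X, rng.standard_normal((n, n_noise))])

    scores = np.array([abs(np.corrcoef(X_aug[:, j], y)[0, 1])
                       for j in range(d + n_noise)])
    threshold = scores[d:].max()          # strongest noise feature
    return np.flatnonzero(scores[:d] > threshold)

# Synthetic data: only the first 3 of 10 features carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + 0.5 * rng.standard_normal(500)

selected = noise_benchmark_select(X, y)
print(selected)
```

The limitation the paper targets is visible in the hard cutoff at `threshold`: a single lucky noise draw moves the bar, with no p-value attached, which is what the bootstrap-based hypothesis test is meant to fix.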