A new CRISPR startup is betting regulators will ease up on gene-editing

MIT Technology Review

Aurora Therapeutics' first target is the rare inherited disease phenylketonuria, also known as PKU. Here at MIT Technology Review, we've been writing about the gene-editing technology CRISPR since 2013, calling it the biggest biotech breakthrough of the century. Yet so far, there's been only one gene-editing drug approved. It's been used commercially on only about 40 patients, all with sickle-cell disease. It's becoming clear that the impact of CRISPR isn't as big as we all hoped. In fact, there's a pall of discouragement over the entire field, with some journalists saying the gene-editing revolution has "lost its mojo."


AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping

Dong, Haonan, Zhu, Wenhao, Song, Guojie, Wang, Liang

arXiv.org Artificial Intelligence

Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method validated across NLP and CV domains. However, LoRA faces an inherent low-rank bottleneck: narrowing its performance gap with full fine-tuning requires increasing the rank of its parameter matrix, resulting in significant parameter overhead. Recent linear LoRA variants have attempted to enhance expressiveness by introducing additional linear mappings; however, their composition remains inherently linear and fails to fundamentally improve LoRA's representational capacity. To address this limitation, we propose AuroRA, which incorporates an Adaptive Nonlinear Layer (ANL) between two linear projectors to capture fixed and learnable nonlinearities. This combination forms an MLP-like structure with a compressed rank, enabling flexible and precise approximation of diverse target functions while theoretically guaranteeing lower approximation errors and bounded gradients. Extensive experiments on 22 datasets and 6 pretrained models demonstrate that AuroRA (I) matches or surpasses full fine-tuning performance with only 6.18% to 25% of LoRA's parameters, (II) outperforms competitive PEFT methods by up to 10.88% in both NLP and CV tasks, and (III) exhibits robust performance across various rank configurations.
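The abstract's core idea — a nonlinear layer sandwiched between LoRA's down- and up-projections — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the sigmoid-gated blend of a fixed `tanh` nonlinearity with an identity path, and the scaling convention are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class NonlinearLoRAAdapter(nn.Module):
    """Sketch of an MLP-like low-rank adapter: down-project to a small
    rank, apply a mix of a fixed nonlinearity and a learnable gate (a
    stand-in for the paper's Adaptive Nonlinear Layer), then up-project."""

    def __init__(self, d_model: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)
        self.up = nn.Linear(rank, d_model, bias=False)
        # Learnable per-dimension gate blending identity and nonlinearity.
        self.gate = nn.Parameter(torch.zeros(rank))
        self.scale = alpha / rank
        # Zero-init the up-projection so the adapter starts as a no-op,
        # as is standard for LoRA-style adapters.
        nn.init.zeros_(self.up.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.down(x)
        g = torch.sigmoid(self.gate)
        h = g * torch.tanh(h) + (1 - g) * h  # fixed + learnable nonlinearity
        return x + self.scale * self.up(h)
```

Because the nonlinearity sits inside the bottleneck, the adapter can express functions no purely linear rank-`r` update can, which is the motivation for compressing the rank.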


ISS astronauts photograph two comets soaring over Earth's auroras

Popular Science

Breakthroughs, discoveries, and DIY tips sent every weekday. The interstellar comet 3I/ATLAS has captured the imaginations of both amateur and professional skygazers, but it's not the only icy space rock to recently speed past Earth. In October, a pair of comets known as Lemmon and SWAN also left trails of dust and gas as they continued along their vast orbits through the solar system. As luck would have it, their timing perfectly aligned with a wave of vibrant auroras generated by one of this year's largest solar eruptions. And judging from NASA's recently released photos, few people had a better vantage point than the astronauts aboard the International Space Station.


Vector Quantized-Elites: Unsupervised and Problem-Agnostic Quality-Diversity Optimization

Tsakonas, Constantinos, Chatzilygeroudis, Konstantinos

arXiv.org Artificial Intelligence

Quality-Diversity algorithms have transformed optimization by prioritizing the discovery of diverse, high-performing solutions over a single optimal result. However, traditional Quality-Diversity methods, such as MAP-Elites, rely heavily on predefined behavior descriptors and complete prior knowledge of the task to define the behavior space grid, limiting their flexibility and applicability. In this work, we introduce Vector Quantized-Elites (VQ-Elites), a novel Quality-Diversity algorithm that autonomously constructs a structured behavior space grid using unsupervised learning, eliminating the need for prior task-specific knowledge. At the core of VQ-Elites is the integration of Vector Quantized Variational Autoencoders, which enables the dynamic learning of behavior descriptors and the generation of a structured, rather than unstructured, behavior space grid -- a significant advancement over existing unsupervised Quality-Diversity approaches. This design establishes VQ-Elites as a flexible, robust, and task-agnostic optimization framework. To further enhance the performance of unsupervised Quality-Diversity algorithms, we introduce behavior space bounding and cooperation mechanisms, which significantly improve convergence and performance, as well as the Effective Diversity Ratio and Coverage Diversity Score, two novel metrics that quantify the actual diversity in the unsupervised setting. We validate VQ-Elites on robotic arm pose-reaching, mobile robot space-covering, and MiniGrid exploration tasks. The results demonstrate its ability to efficiently generate diverse, high-quality solutions, emphasizing its adaptability, scalability, robustness to hyperparameters, and potential to extend Quality-Diversity optimization to complex, previously inaccessible domains.
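The archive-insertion step described above can be sketched compactly: a candidate's behavior descriptor is quantized to its nearest codebook entry, and the candidate replaces that cell's elite only if it is fitter. This is a simplified stand-in — in VQ-Elites the codebook comes from a trained Vector Quantized VAE, whereas here it is just an array passed in; the function name and data layout are assumptions for the example.

```python
import numpy as np

def vq_archive_insert(archive: dict, codebook: np.ndarray,
                      candidate, behavior: np.ndarray, fitness: float) -> int:
    """One insertion step of a VQ-Elites-style archive.

    The behavior descriptor is snapped to its nearest codebook vector
    (the 'cell'), and the candidate is kept as that cell's elite only
    if the cell is empty or the candidate is fitter."""
    cell = int(np.argmin(np.linalg.norm(codebook - behavior, axis=1)))
    elite = archive.get(cell)
    if elite is None or fitness > elite[1]:
        archive[cell] = (candidate, fitness)
    return cell
```

Running this inside an evolutionary loop yields a MAP-Elites-style grid whose cells are learned rather than hand-defined, which is the structural advance the abstract emphasizes.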




Physical Consistency of Aurora's Encoder: A Quantitative Study

Richards, Benjamin, Balan, Pushpa Kumar

arXiv.org Artificial Intelligence

The high accuracy of large-scale weather forecasting models like Aurora is often accompanied by a lack of transparency, as their internal representations remain largely opaque. This "black box" nature hinders their adoption in high-stakes operational settings. In this work, we probe the physical consistency of Aurora's encoder by investigating whether its latent representations align with known physical and meteorological concepts. Using a large-scale dataset of embeddings, we train linear classifiers to identify three distinct concepts: the fundamental land-sea boundary, high-impact extreme temperature events, and atmospheric instability. Our findings provide quantitative evidence that Aurora learns physically consistent features, while also highlighting its limitations in capturing the rarest events. This work underscores the critical need for interpretability methods to validate and build trust in the next generation of AI-driven weather models.
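The linear-probing methodology the abstract describes — fitting a linear classifier on frozen encoder embeddings to test whether a concept is linearly decodable — can be sketched in a few lines. This is a generic illustration, not the paper's pipeline: it uses a closed-form ridge probe rather than whatever classifier the authors trained, and the function name and data are invented for the example.

```python
import numpy as np

def linear_probe_accuracy(embeddings: np.ndarray, labels: np.ndarray,
                          l2: float = 1e-3) -> float:
    """Fit a ridge-regression linear probe on frozen latent embeddings
    and report its training accuracy for a binary concept (e.g. land
    vs. sea). High accuracy suggests the concept is linearly decodable
    from the encoder's representation."""
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])  # bias column
    y = 2.0 * labels - 1.0  # map {0, 1} -> {-1, +1}
    # Closed-form ridge solution: (X^T X + l2 I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ y)
    preds = (X @ w) > 0
    return float((preds == labels.astype(bool)).mean())
```

In practice one would report held-out rather than training accuracy, and compare against a shuffled-label baseline to rule out probe overfitting.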


How preppers plan to save us if the whole internet collapses

New Scientist

Recent outages have revealed how vulnerable the internet is, but there seems to be no official plan in the event of a catastrophic failure. Vladimir Lenin is said to have warned that all societies are three square meals from chaos. But in the modern world, it is only a Wi-Fi signal that separates us from anarchy. Every aspect of our lives is reliant on computers and the internet, and when they fail, they do so with disorientating speed. This became abundantly clear during power cuts across Spain and Portugal earlier this year.



Task-Adaptive Parameter-Efficient Fine-Tuning for Weather Foundation Models

Cao, Shilei, Lin, Hehai, Cheng, Jiashun, Liu, Yang, Li, Guowen, Wang, Xuehe, Zheng, Juepeng, Liang, Haoyuan, Jin, Meng, Qin, Chengwei, Cheng, Hong, Fu, Haohuan

arXiv.org Artificial Intelligence

While recent advances in machine learning have equipped Weather Foundation Models (WFMs) with substantial generalization capabilities across diverse downstream tasks, the escalating computational requirements associated with their expanding scale increasingly hinder practical deployment. Current Parameter-Efficient Fine-Tuning (PEFT) methods, designed for vision or language tasks, fail to address the unique challenges of weather downstream tasks, such as variable heterogeneity, resolution diversity, and spatiotemporal coverage variations, leading to suboptimal performance when applied to WFMs. To bridge this gap, we introduce WeatherPEFT, a novel PEFT framework for WFMs incorporating two synergistic innovations. First, during the forward pass, Task-Adaptive Dynamic Prompting (TADP) dynamically injects the embedding weights within the encoder to the input tokens of the pre-trained backbone via internal and external pattern extraction, enabling context-aware feature recalibration for specific downstream tasks. Furthermore, during backpropagation, Stochastic Fisher-Guided Adaptive Selection (SFAS) not only leverages Fisher information to identify and update the most task-critical parameters, thereby preserving invariant pre-trained knowledge, but also introduces randomness to stabilize the selection. We demonstrate the effectiveness and efficiency of WeatherPEFT on three downstream tasks, where existing PEFT methods show significant gaps versus Full-Tuning, and WeatherPEFT achieves performance parity with Full-Tuning using fewer trainable parameters. The code of this work will be released.
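The SFAS idea described above — scoring parameters by a Fisher-information proxy, adding randomness to stabilize the choice, and updating only the most task-critical fraction — can be sketched as a mask over flattened gradients. This is a schematic reading of the abstract, not the released code: the squared-gradient (diagonal empirical Fisher) proxy, the multiplicative noise, and the function signature are all assumptions for the example.

```python
import torch

def fisher_guided_mask(grads: list, keep_ratio: float = 0.1,
                       noise: float = 0.1,
                       generator: torch.Generator = None) -> torch.Tensor:
    """Sketch of stochastic Fisher-guided parameter selection.

    Each parameter is scored by its squared gradient (a diagonal
    empirical-Fisher proxy), the scores are perturbed multiplicatively
    to randomize selection among near-ties, and only the top fraction
    is marked for updating; the rest keep their pre-trained values."""
    flat = torch.cat([g.flatten() ** 2 for g in grads])
    flat = flat * (1 + noise * torch.rand(flat.shape, generator=generator))
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return flat >= threshold  # True = parameter selected for fine-tuning
```

During training, the returned mask would be used to zero out gradients of unselected parameters before the optimizer step, preserving the rest of the pre-trained backbone.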