End-to-end data-driven weather prediction

AIHub

A new AI weather prediction system, developed by a team of researchers from the University of Cambridge, can deliver accurate forecasts while using less computing power than current AI and physics-based forecasting systems. The system, Aardvark Weather, has been supported by the Alan Turing Institute, Microsoft Research and the European Centre for Medium-Range Weather Forecasts. It provides a blueprint for a new approach to weather forecasting with the potential to improve current practice. The results are reported in the journal Nature. "Aardvark reimagines current weather prediction methods, offering the potential to make weather forecasts faster, cheaper, more flexible and more accurate than ever before, helping to transform weather prediction in both developed and developing countries," said Professor Richard Turner from Cambridge's Department of Engineering, who led the research.


AI can forecast the weather in seconds without needing supercomputers

New Scientist

An AI weather program running for a single second on a desktop can match the accuracy of traditional forecasts that take hours or days on powerful supercomputers, claim its creators. Weather forecasting has, since the 1950s, relied on physics-based models that extrapolate from observations made using satellites, balloons and weather stations. But these calculations, known as numerical weather prediction (NWP), are extremely intensive and rely on vast, expensive and energy-hungry supercomputers. In recent years, researchers have tried to streamline this process by applying AI.


Automatic Curriculum Expert Iteration for Reliable LLM Reasoning

Zhao, Zirui, Dong, Hanze, Saha, Amrita, Xiong, Caiming, Sahoo, Doyen

arXiv.org Machine Learning

Hallucinations (i.e., generating plausible but inaccurate content) and laziness (i.e., excessive refusals or defaulting to "I don't know") persist as major challenges in LLM reasoning. Current efforts to reduce hallucinations primarily focus on factual errors in knowledge-grounded tasks, often neglecting hallucinations related to faulty reasoning. Meanwhile, some approaches render LLMs overly conservative, limiting their problem-solving capabilities. To mitigate hallucination and laziness in reasoning tasks, we propose Automatic Curriculum Expert Iteration (Auto-CEI) to enhance LLM reasoning and align responses to the model's capabilities--assertively answering within its limits and declining when tasks exceed them. In our method, Expert Iteration explores the reasoning trajectories near the LLM policy, guiding incorrect paths back on track to reduce compounding errors and improve robustness; it also promotes appropriate "I don't know" responses after sufficient reasoning attempts. The curriculum automatically adjusts rewards, incentivizing extended reasoning before acknowledging incapability, thereby pushing the limits of LLM reasoning and aligning its behaviour with these limits. We compare Auto-CEI with various SOTA baselines across logical reasoning, mathematics, and planning tasks, where Auto-CEI achieves superior alignment by effectively balancing assertiveness and conservativeness.
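The core idea in the abstract can be illustrated with a minimal sketch: a reward that pays for correct assertive answers, penalizes confident errors, and rewards "I don't know" only after sufficient reasoning effort, with a curriculum that raises that effort bar when the policy declines too often. All function names, constants, and the threshold-update rule below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the reward/curriculum mechanism described in the Auto-CEI
# abstract. The numeric reward values and the +/-1 threshold update are
# hypothetical choices for illustration only.

def reward(outcome: str, steps: int, idk_threshold: int) -> float:
    """Score one reasoning trajectory.

    outcome: 'correct', 'incorrect', or 'idk' (the model declined to answer).
    steps: number of reasoning steps taken before responding.
    idk_threshold: curriculum-controlled minimum effort before declining
                   is rewarded rather than penalised.
    """
    if outcome == "correct":
        return 1.0                      # assertive and correct
    if outcome == "idk":
        # Declining pays off only after sufficient reasoning attempts,
        # discouraging lazy early refusals.
        return 0.2 if steps >= idk_threshold else -0.5
    return -1.0                         # confident but wrong (hallucination)


def update_curriculum(idk_threshold: int, idk_rate: float,
                      target_idk_rate: float = 0.2) -> int:
    """Adjust the effort bar between expert-iteration rounds.

    If the policy declines too often (laziness), require more reasoning
    before 'I don't know' is rewarded; otherwise relax the bar slightly.
    """
    if idk_rate > target_idk_rate:
        return idk_threshold + 1
    return max(1, idk_threshold - 1)
```

In an expert-iteration loop, trajectories sampled near the current policy would be scored with `reward`, the best-scoring ones used for retraining, and `update_curriculum` applied between rounds so the model is pushed to reason longer before acknowledging incapability.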