Augur: Modeling Covariate Causal Associations in Time Series via Large Language Models
Cui, Zhiqing, Wang, Binwu, Liu, Qingxiang, Wang, Yeqiang, Zhou, Zhengyang, Liang, Yuxuan, Wang, Yang
Large language models (LLMs) have emerged as a promising avenue for time series forecasting, offering the potential to integrate multimodal data. However, existing LLM-based approaches face notable limitations, such as a marginalized role in model architectures, reliance on coarse statistical text prompts, and a lack of interpretability. In this work, we introduce Augur, a fully LLM-driven time series forecasting framework that exploits LLM causal reasoning to discover and use directed causal associations among covariates. Augur uses a two-stage teacher-student architecture in which a powerful teacher LLM infers a directed causal graph from time series using heuristic search together with pairwise causality testing. A lightweight student agent then refines the graph and fine-tunes on high-confidence causal associations, which are encoded as rich textual prompts to perform forecasting. This design improves predictive accuracy while yielding transparent, traceable reasoning about variable interactions. Extensive experiments on real-world datasets against 26 baselines demonstrate that Augur achieves competitive performance and robust zero-shot generalization.
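In the paper the pairwise causality tests are carried out by the teacher LLM itself; as a rough numerical analogue, the sketch below builds a directed graph from Granger-style pairwise tests on lagged regressions. The `granger_gain` score, the threshold, and the toy series are illustrative assumptions, not the paper's method.

```python
import numpy as np

def _lagged(v, lag):
    """Columns v[t-1], ..., v[t-lag] for t = lag .. n-1."""
    n = len(v)
    return np.column_stack([v[lag - k: n - k] for k in range(1, lag + 1)])

def granger_gain(x, y, lag=2):
    """Variance-reduction score: how much does adding lagged x to an
    AR(lag) model of y shrink the residual sum of squares?"""
    n = len(y)
    target = y[lag:]
    ar = np.column_stack([np.ones(n - lag), _lagged(y, lag)])
    full = np.column_stack([ar, _lagged(x, lag)])
    rss_ar = np.sum((target - ar @ np.linalg.lstsq(ar, target, rcond=None)[0]) ** 2)
    rss_full = np.sum((target - full @ np.linalg.lstsq(full, target, rcond=None)[0]) ** 2)
    return 1.0 - rss_full / rss_ar

def causal_graph(series, names, lag=2, threshold=0.2):
    """Directed edge i -> j whenever lagged series i improves prediction of j."""
    return [(names[i], names[j])
            for i in range(len(series)) for j in range(len(series))
            if i != j and granger_gain(series[i], series[j], lag) > threshold]

# Toy data: z is driven by the previous value of x, so we expect x -> z only.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
z = np.roll(x, 1) * 0.9 + rng.normal(scale=0.1, size=300)
z[0] = 0.0
print(causal_graph([x, z], ["x", "z"], lag=2))
```

A real multivariate setting would also need to control for the remaining covariates when testing each pair; this pairwise version is the minimal form of the idea.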
The contribution of this paper is a probabilistic programming language that supports parallel inference for graphical models (specifically Bayes nets). Probabilistic programming languages are powerful tools because they allow rapid development of new models without having to derive and implement new inference algorithms. Unlike most existing probabilistic programming languages, Augur produces massively parallel code that can run on a GPU (using CUDA). A unique feature of Augur is that it compiles the model (specified in Scala) into an intermediate representation before it is ultimately compiled into a CUDA inference algorithm for parallelization.
Augur: Data-Parallel Probabilistic Modeling
Jean-Baptiste Tristan, Daniel Huang, Joseph Tassarotti, Adam C. Pocock, Stephen Green, Guy L. Steele
Implementing inference procedures for each new probabilistic model is time-consuming and error-prone. Probabilistic programming addresses this problem by allowing a user to specify the model and then automatically generating the inference procedure. To make this practical it is important to generate high performance inference code. In turn, on modern architectures, high performance requires parallel execution. In this paper we present Augur, a probabilistic modeling language and compiler for Bayesian networks designed to make effective use of data-parallel architectures such as GPUs. We show that the compiler can generate data-parallel inference code scalable to thousands of GPU cores by making use of the conditional independence relationships in the Bayesian network.
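The abstract's central idea (conditional independence lets many latent variables be updated in one parallel step) can be sketched with a vectorized Gibbs sampler for a two-component Gaussian mixture. NumPy vectorization stands in here for the CUDA kernels Augur actually generates; the model and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data from two Gaussian clusters (standard deviations fixed at 1).
data = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
mu = np.array([-1.0, 1.0])  # initial cluster means

for _ in range(100):
    # 1) Given the means, the assignments z_i are conditionally independent,
    #    so all 400 of them can be sampled in a single data-parallel step.
    logp = -0.5 * (data[:, None] - mu[None, :]) ** 2      # shape (N, 2)
    p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))    # P(z_i = 1)
    z = (rng.random(len(data)) < p1).astype(int)
    # 2) Given the assignments, the two means are conditionally independent
    #    and can likewise be resampled in parallel (flat prior, unit variance).
    for k in (0, 1):
        members = data[z == k]
        if len(members):
            mu[k] = rng.normal(members.mean(), 1.0 / np.sqrt(len(members)))

print(np.sort(mu))  # the sampled means should settle near [-3, 3]
```

Each sweep touches every data point, but because no z_i depends on any other z_j given the means, the inner update maps directly onto thousands of GPU threads, which is the structure the Augur compiler extracts automatically.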
AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites
Kouroudis, Ioannis, Poonam, Misciaci, Neel, Mayr, Felix, Müller, Leon, Gu, Zhaosu, Gagliardi, Alessio
Novel, functional structures at the nanoscale could be crucial for transforming a broad spectrum of economically significant processes into greener and more sustainable solutions. For instance, nanostructured materials hold the potential to significantly enhance the cost-effectiveness of fuel-cell devices [1], enable the creation of highly efficient quantum-dot LEDs [2], and pave the way for generating atom-precise, efficient nanocatalysts for studying novel catalytic pathways in electrochemical applications [3, 4]. As performance is highly dependent on specific structural characteristics, which often cannot easily be resolved in lab experiments, computational chemistry (most often using Density Functional Theory (DFT) based approaches) can be used to generate in-silico insights. Typical questions range from elucidating which feature of a given nanoparticle might improve catalytic performance to mechanistic explanations for key synthesis procedures, allowing tailored experiments to drive up experimental yields for optimal structures. Commonly, these questions are associated with finding energetically favorable configurations on the potential energy surface (PES) of a system, a property relevant to solving a wide range of problems in computational chemistry. The established methodology allows finding "docking" mechanisms between small molecules and large biomolecules, which is relevant for drug development [5]. Additionally, a large area of research revolves around the sensing of harmful gases by novel nanomaterials chosen according to their strength of interactions.
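As a deliberately simplified stand-in for "finding energetically favorable configurations on the PES" (not the AUGUR algorithm itself), the sketch below locates the minimum of a one-dimensional Lennard-Jones pair potential by a coarse grid scan. The potential form, units, and grid are illustrative assumptions.

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Toy pair potential: a 1-D stand-in for a potential energy surface."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

# Grid scan over interatomic distance, then pick the lowest-energy point:
# a minimal version of searching the PES for a favorable configuration.
r = np.linspace(0.9, 2.5, 10_000)
r_min = r[np.argmin(lennard_jones(r))]
print(r_min)  # analytic optimum is 2**(1/6) * sigma, about 1.1225
```

Real adsorption-site searches are high-dimensional, which is why surrogate-assisted optimizers such as the one this paper proposes replace the exhaustive scan used here.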
Stanford researchers using Toronto-based Wattpad's stories to inform artificial intelligence
If you are one of the 40 million people who enjoy reading or writing the mostly romantic werewolf, superhero or historical fiction stories found on Canadian startup Wattpad, you may also be contributing to the development of the next generation of artificial intelligence. In a new paper called Augur: Mining Human Behaviors from Fiction to Power Interactive Systems, a group of Stanford University computer science researchers revealed that they used the Wattpad "corpus" – a collection of almost two billion words (or 600,000 chapters) written by regular people – to help a computer understand the world around it. The team intends to make the program they built, Augur, into an open-source tool that other researchers can build on. "The basic idea is that it's very difficult to program computers to understand the broad range of things that people do," says fourth-year PhD student Ethan Fast, co-author of the paper (published as part of the upcoming Computer-Human Interaction (CHI) conference) and a member of Stanford's Human-Computer Interaction Group. "Fiction has a lot of useful things to say about the world, and if you have enough of it, you can model it in much more depth than you could hope to manually."