Machine learning in an expectation-maximisation framework for nowcasting
Wilsens, Paul, Antonio, Katrien, Claeskens, Gerda
Decision making often occurs in the presence of incomplete information, leading to the under- or overestimation of risk. Leveraging the observable information to learn the complete information is called nowcasting. In practice, incomplete information is often a consequence of reporting or observation delays. In this paper, we propose an expectation-maximisation (EM) framework for nowcasting that uses machine learning techniques to model both the occurrence and the reporting of events. We allow for the inclusion of covariate information specific to the occurrence and reporting periods, as well as characteristics of the entity for which events occurred. We demonstrate how the maximisation step and the information flow between EM iterations can be tailored to leverage the predictive power of neural networks and (extreme) gradient boosting machines (XGBoost). With simulation experiments, we show that we can effectively model both the occurrence and reporting of events when dealing with high-dimensional covariate information. In the presence of non-linear effects, our methodology outperforms existing EM-based nowcasting frameworks that use generalised linear models in the maximisation step. Finally, we apply the framework to the reporting of Argentinian Covid-19 cases, where the XGBoost-based approach is again the most performant.
- Europe > Belgium > Flanders > Flemish Brabant > Leuven (0.04)
- South America > Argentina > Pampas > Buenos Aires F.D. > Buenos Aires (0.04)
- Oceania > New Zealand (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Epidemiology (1.00)
- Banking & Finance > Insurance (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.88)
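The occurrence/reporting factorisation described in the abstract can be pictured with a toy EM loop. This is our illustrative reading only: a Poisson model with closed-form M-step updates, not the paper's neural-network or XGBoost learners, and all variable names are assumptions.

```python
import numpy as np

def em_nowcast(counts, mask, n_iter=50):
    """Toy EM for nowcasting with reporting delays.

    counts[t, d]: events occurring in period t, reported with delay d
                  (entries with mask[t, d] == 0 are not yet observable).
    Assumes N[t, d] ~ Poisson(lam[t] * p[d]) with sum_d p[d] = 1,
    mirroring the occurrence/reporting split sketched in the abstract.
    """
    T, D = counts.shape
    lam = counts.sum(axis=1) + 1.0   # initial occurrence intensities
    p = np.full(D, 1.0 / D)         # initial reporting probabilities
    for _ in range(n_iter):
        # E-step: fill not-yet-observable cells with their expected counts
        expected = np.outer(lam, p)
        filled = np.where(mask == 1, counts, expected)
        # M-step: closed-form Poisson MLE updates on the completed data
        lam = filled.sum(axis=1)
        p = filled.sum(axis=0) / filled.sum()
    return lam, p
```

In the paper's framework, the closed-form M-step above would be replaced by fitting a covariate-driven learner to the completed data at each iteration.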
Buffer replay enhances the robustness of multimodal learning under missing modalities
Zhu, Hongye, Liu, Xuan, Ba, Yanwen, Xue, Jingye, Zhang, Shigeng
Missing modalities consistently lead to significant performance degradation in multimodal models. Existing approaches either synthesize missing modalities at high computational cost or apply prompt-based fine-tuning that relies only on adjacent-layer features and overlooks long-distance contextual information, which may offer additional tolerance to errors when one or more modalities are missing. To address this, we introduce REplay Prompting (REP), which (1) constructs modality-wise feature buffers via a residual bypass to cache early-layer representations and replay them in deeper layers, mitigating information loss as network depth increases; (2) employs a private-shared feature decoupling strategy, where private buffers preserve modality-specific signals and shared buffers encode cross-modal semantics; and (3) designs a task-aware dynamic initialization mechanism to configure these buffers differently, improving stability and generalization under diverse missing-modality conditions. Experiments on vision-language, vision-language-audio, and temporal multimodal benchmarks demonstrate that REP consistently outperforms prior methods under both single- and multi-modality missing scenarios, while introducing only negligible parameter overhead. These results establish REP as a lightweight and effective paradigm for robust multimodal learning in challenging missing-modality environments.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)
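A minimal sketch of the buffer idea, under our reading of the abstract (toy NumPy code, not the authors' implementation; class and method names are assumptions): early-layer features are cached per modality in private buffers, a shared buffer averages them, and a deeper layer replays the cached features through a residual bypass, falling back to the shared buffer when a modality is missing.

```python
import numpy as np

class ReplayBuffer:
    """Illustrative REP-style feature buffers: cache an early-layer
    representation per modality and replay it at a deeper layer."""

    def __init__(self):
        self.private = {}   # modality-specific cached features
        self.shared = None  # toy cross-modal encoding (mean of privates)

    def cache(self, modality, feat):
        self.private[modality] = feat
        self.shared = np.mean(list(self.private.values()), axis=0)

    def replay(self, modality, deep_feat, alpha=0.5):
        # Residual bypass: add the cached early feature to the deep one.
        # If the modality is missing, fall back to the shared buffer.
        cached = self.private.get(modality, self.shared)
        return deep_feat + alpha * cached
```

The private/shared split here is only the decoupling idea in miniature; the paper additionally learns task-aware initializations for these buffers.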
The proof of the lemma follows from a simple application of the Chernoff bound. Consider a matrix G of size m × n where each entry is generated independently from a Bernoulli(p) distribution, with p as a parameter. In this section, we prove the helper Lemmas 10 and 11 to complete the proof of Theorem 1, and also present the proof of Theorem 2. The two-stage approximate recovery algorithm, as the name suggests, proceeds in two sequential steps. In the first stage, we recover the support of all the ℓ unknown vectors (presented in Algorithm 2 in Section 5). In the second stage, we use these deduced supports to approximately recover the unknown vectors (Algorithm 5, described in Section B.2). B.1 Support recovery (Missing proofs from Section 5). First, we show how to compute |S^(i)| for every index i ∈ [n], using Algorithm 3.
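For reference, a standard multiplicative form of the Chernoff bound of the kind invoked above (for X a sum of independent Bernoulli variables with μ = E[X]):

```latex
\Pr\bigl[\,|X - \mu| \ge \delta \mu\,\bigr] \;\le\; 2\exp\!\left(-\frac{\delta^{2}\mu}{3}\right), \qquad 0 < \delta < 1.
```

Applied entrywise to counts derived from the Bernoulli(p) entries of G, this gives the concentration needed for the support-recovery step.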
CrochetBench: Can Vision-Language Models Move from Describing to Doing in Crochet Domain?
Li, Peiyu, Huang, Xiaobao, Chawla, Nitesh V.
We present CrochetBench, a benchmark for evaluating the ability of multimodal large language models to perform fine-grained, low-level procedural reasoning in the domain of crochet. Unlike prior benchmarks that focus on high-level description or visual question answering, CrochetBench shifts the emphasis from describing to doing: models are required to recognize stitches, select structurally appropriate instructions, and generate compilable crochet procedures. We adopt the CrochetPARADE DSL as our intermediate representation, enabling structural validation and functional evaluation via execution. The benchmark covers tasks including stitch classification, instruction grounding, and both natural language and image-to-DSL translation. Across all tasks, performance sharply declines as the evaluation shifts from surface-level similarity to executable correctness, exposing limitations in long-range symbolic reasoning and 3D-aware procedural synthesis. CrochetBench offers a new lens for assessing procedural competence in multimodal models and highlights the gap between surface-level understanding and executable precision in real-world creative domains. Code is available at https://github.com/Peiyu-Georgia-Li/crochetBench.
- Research Report (1.00)
- Workflow (0.68)
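The gap between surface-level similarity and executable correctness can be illustrated with a toy example (a made-up stitch mini-language, not the CrochetPARADE DSL; all names here are hypothetical): two near-identical strings score highly on a character-overlap metric, yet only one compiles.

```python
def char_overlap(a, b):
    """Toy surface-similarity score: fraction of matching positions."""
    n = min(len(a), len(b))
    return sum(a[i] == b[i] for i in range(n)) / max(len(a), len(b))

def compiles(program, known=frozenset({"sc", "dc", "ch"})):
    """Toy 'executable correctness' check for a made-up stitch language:
    every token must have the form stitch*count, e.g. 'sc*3'."""
    for tok in program.split():
        stitch, _, count = tok.partition("*")
        if stitch not in known or not count.isdigit():
            return False
    return True

ref = "ch*4 sc*3 dc*2"
hyp = "ch*4 sc*3 dx*2"   # one character off, but not a valid stitch
```

Here `char_overlap(ref, hyp)` exceeds 0.9 while only `ref` compiles, mirroring how execution-based evaluation exposes errors that similarity metrics miss.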
Coupling Agent-based Modeling and Life Cycle Assessment to Analyze Trade-offs in Resilient Energy Transitions
Zhang, Beichen, Zaki, Mohammed T., Breunig, Hanna, Ajami, Newsha K.
Transitioning to sustainable and resilient energy systems requires navigating complex and interdependent trade-offs across environmental, social, and resource dimensions. Neglecting these trade-offs can lead to unintended consequences across sectors. However, existing assessments often evaluate emerging energy pathways and their impacts in silos, overlooking critical interactions such as regional resource competition and cumulative impacts. We present an integrated modeling framework that couples agent-based modeling and Life Cycle Assessment (LCA) to simulate how energy transition pathways interact with regional resource competition, ecological constraints, and community-level burdens. We apply the model to a case study in Southern California. The results demonstrate how integrated and multiscale decision making can shape energy pathway deployment and reveal spatially explicit trade-offs under scenario-driven constraints. This modeling framework can further support more adaptive and resilient energy transition planning on spatial and institutional scales.
- North America > United States > California > Riverside County (0.14)
- North America > United States > California > Imperial County (0.14)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- (7 more...)
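One way to picture the coupling (a deliberately tiny sketch with hypothetical LCA factors, not the authors' model): agents choose an energy pathway subject to a shared water budget, while per-pathway LCA factors tally cumulative impacts, so resource competition redirects deployment and shifts the lifecycle footprint.

```python
# Hypothetical per-MWh LCA factors (kg CO2, acre-feet water), for illustration only.
LCA = {"solar": {"co2": 40.0, "water": 1.0},
       "gas":   {"co2": 490.0, "water": 0.7}}

def simulate(demands, water_budget):
    """Each agent deploys its preferred pathway (solar) only while the
    shared water budget allows it, then falls back to gas (treated as
    unconstrained here); lifecycle impacts accumulate from LCA factors."""
    water_used, impacts, choices = 0.0, {"co2": 0.0, "water": 0.0}, []
    for mwh in demands:
        solar_water = LCA["solar"]["water"] * mwh
        pick = "solar" if water_used + solar_water <= water_budget else "gas"
        water_used += LCA[pick]["water"] * mwh
        for k in impacts:
            impacts[k] += LCA[pick][k] * mwh
        choices.append(pick)
    return choices, impacts
```

Even this caricature shows the framework's point: a regional resource constraint changes which pathways deploy, which in turn changes the cumulative environmental burden.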
Rep. Nancy Pelosi, trailblazing Democratic leader from San Francisco, won't seek reelection
Rep. Nancy Pelosi of San Francisco, the former House speaker, said Thursday she will not seek another term. The former House Speaker, in office since 1987, was facing multiple challengers in next year's Democratic primary.
- North America > United States > California > San Francisco County > San Francisco (0.76)
- North America > United States > California > Los Angeles County > Los Angeles (0.06)
- North America > United States > Wyoming (0.04)
- (6 more...)
- Summary/Review (0.49)
- Personal (0.48)
RKUM: An R Package for Robust Kernel Unsupervised Methods
RKUM is an R package developed for implementing robust kernel-based unsupervised methods. It provides functions for estimating the robust kernel covariance operator (CO) and the robust kernel cross-covariance operator (CCO) using generalized loss functions instead of the conventional quadratic loss. These operators form the foundation of robust kernel learning and enable reliable analysis under contaminated or noisy data conditions. The package includes implementations of robust kernel canonical correlation analysis (Kernel CCA), as well as the influence function (IF) for both standard and multiple kernel CCA frameworks. The influence function quantifies sensitivity and helps detect influential or outlying observations across two-view and multi-view datasets. Experiments using synthesized two-view and multi-view data demonstrate that the IF of the standard kernel CCA effectively identifies outliers, while the robust kernel methods implemented in RKUM exhibit reduced sensitivity to contamination. Overall, RKUM provides an efficient and extensible platform for robust kernel-based analysis in high-dimensional data applications.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > New York (0.04)
- North America > United States > Massachusetts > Middlesex County > Reading (0.04)
- (6 more...)
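For intuition, a generic Python sketch of the robustness idea (this is not RKUM's R API; function names and the choice of a Huber-type weight are our assumptions): a robust kernelised location estimate downweights points far from the kernel mean in the RKHS, which is the mechanism that makes robust covariance operators less sensitive to contamination.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def robust_kernel_mean_weights(K, c=1.0, n_iter=20):
    """Iteratively reweight points by a Huber-type function of their
    RKHS distance to the current weighted kernel mean, so outliers
    receive less weight (illustrative sketch only)."""
    n = K.shape[0]
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # squared RKHS distance of each point to the weighted kernel mean
        d2 = np.diag(K) - 2 * K @ w + w @ K @ w
        d = np.sqrt(np.maximum(d2, 1e-12))
        hub = np.where(d <= c, 1.0, c / d)   # Huber weight: clip far points
        w = hub / hub.sum()
    return w
```

Replacing the uniform weights of the empirical kernel mean with these robust weights is, in spirit, the substitution of a generalized loss for the quadratic loss that the package describes.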
REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers
Leng, Xingjian, Singh, Jaskirat, Hou, Yunzhong, Xing, Zhenchang, Xie, Saining, Zheng, Liang
In this paper we tackle a fundamental question: "Can we train latent diffusion models together with the variational auto-encoder (VAE) tokenizer in an end-to-end manner?" Traditional deep-learning wisdom dictates that end-to-end training is often preferable when possible. However, for latent diffusion transformers, end-to-end training of both the VAE and the diffusion model using the standard diffusion loss is observed to be ineffective, even causing a degradation in final performance. We show that while the diffusion loss is ineffective, end-to-end training can be unlocked through the representation-alignment (REPA) loss, allowing both the VAE and the diffusion model to be jointly tuned during training. Despite its simplicity, the proposed training recipe (REPA-E) shows remarkable performance, speeding up diffusion model training by over 17x and 45x over the REPA and vanilla training recipes, respectively. Interestingly, we observe that end-to-end tuning with REPA-E also improves the VAE itself, leading to improved latent-space structure and downstream generation performance. In terms of final performance, our approach sets a new state of the art, achieving FID of 1.12 and 1.69 with and without classifier-free guidance on ImageNet 256 x 256. Code is available at https://end2end-diffusion.github.io.
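Our reading of a representation-alignment objective, as a minimal sketch (array names and shapes are illustrative assumptions; the actual recipe aligns intermediate diffusion-transformer features with those of a frozen pretrained visual encoder):

```python
import numpy as np

def repa_loss(diff_feats, enc_feats):
    """Negative mean patchwise cosine similarity between intermediate
    diffusion features and frozen pretrained-encoder features.
    Minimising it pulls the two representations into alignment."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    cos = (unit(diff_feats) * unit(enc_feats)).sum(-1)  # per-patch cosine
    return -cos.mean()
```

In an end-to-end setup, a term like this would be added to the training objective so that gradients reaching the VAE encourage a well-structured latent space rather than degrading it.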
Challenges to Pelosi part of broader movement to replace the Democratic Party's old guard
Rep. Nancy Pelosi, shown talking to reporters in the U.S. Capitol on Oct. 1, has not said whether she will seek another term in 2026. Younger Democratic candidates are challenging older incumbents amid increasing frustration over the party's ineffective resistance to President Trump.
- Asia > Russia (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.08)
- North America > United States > California > Los Angeles County > Los Angeles (0.06)
- (17 more...)