
Takeda's psoriasis pill developed with AI assistance succeeds in trials

The Japan Times

Psoriasis is a chronic autoimmune disorder that causes itchy, scaly rashes and afflicts more than 125 million people worldwide. Takeda Pharmaceutical announced that its oral psoriasis drug zasocitinib proved safe and effective in late-stage trials, marking a milestone in its effort to treat the incurable skin condition and offset looming revenue pressure. Patients with moderate-to-severe plaque psoriasis who took the once-daily pill showed significantly clearer skin than those on placebo or the existing therapy apremilast, the company said in a statement Thursday. Takeda plans to submit data to the U.S. Food and Drug Administration and other regulators beginning in fiscal year 2026. If approved, zasocitinib would join the small but growing class of oral psoriasis treatments -- in a market long dominated by ointments and injectable antibody therapies -- and stand out as one of the first drugs discovered with the help of artificial intelligence.


Arbitrage-Free Bond and Yield Curve Forecasting with Neural Filters under HJM Constraints

Gao, Xiang, Hyndman, Cody

arXiv.org Machine Learning

We develop an arbitrage-free deep learning framework for yield curve and bond price forecasting based on the Heath-Jarrow-Morton (HJM) term-structure model and a dynamic Nelson-Siegel parameterization of forward rates. Our approach embeds a no-arbitrage drift restriction into a neural state-space architecture by combining Kalman, extended Kalman, and particle filters with recurrent neural networks (LSTM/CLSTM), and introduces an explicit arbitrage error regularization (AER) term during training. The model is applied to U.S. Treasury and corporate bond data, and its performance is evaluated for both yield-space and price-space predictions at 1-day and 5-day horizons. Empirically, arbitrage regularization delivers its strongest improvements at short maturities, particularly in 5-day-ahead forecasts, increasing market consistency as measured by bid-ask hit rates and reducing dollar-denominated prediction errors.
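As a rough illustration of how such an arbitrage-error regularizer can be wired into training, the following Python sketch penalizes the gap between the realized Nelson-Siegel forward-curve drift and the HJM no-arbitrage drift under a constant-volatility assumption. The factor LSTM, ns_forward, hjm_drift_residual, and AER_WEIGHT are illustrative stand-ins, not the paper's implementation.

import torch
import torch.nn as nn

def ns_forward(beta, tau, lam=0.5):
    # Dynamic Nelson-Siegel forward curve: level, slope, curvature loadings.
    level = torch.ones_like(tau)
    slope = torch.exp(-lam * tau)
    curv = lam * tau * torch.exp(-lam * tau)
    return beta[..., 0:1] * level + beta[..., 1:2] * slope + beta[..., 2:3] * curv

def hjm_drift_residual(beta_t, beta_tp1, tau, dt=1.0 / 252, sigma=0.01):
    # Under constant volatility, the HJM no-arbitrage drift is sigma^2 * tau;
    # penalize the deviation of the realized drift from it (an assumed simplification).
    realized_drift = (ns_forward(beta_tp1, tau) - ns_forward(beta_t, tau)) / dt
    no_arb_drift = sigma ** 2 * tau
    return ((realized_drift - no_arb_drift) ** 2).mean()

class FactorLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)
    def forward(self, beta_seq):
        out, _ = self.rnn(beta_seq)
        return self.head(out[:, -1])  # one-step-ahead factor forecast

model = FactorLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tau = torch.linspace(0.25, 30.0, 40)    # maturities in years
beta_seq = torch.randn(8, 20, 3) * 0.1  # toy factor history (batch, time, factor)
beta_next = torch.randn(8, 3) * 0.1     # toy one-step targets
AER_WEIGHT = 0.1                        # hypothetical regularization weight

pred = model(beta_seq)
fit_loss = ((pred - beta_next) ** 2).mean()
aer = hjm_drift_residual(beta_seq[:, -1], pred, tau)
loss = fit_loss + AER_WEIGHT * aer      # AER term discourages arbitrage in forecasts
loss.backward()
opt.step()

In this reading, the AER term acts like any other soft constraint: it trades a little in-sample fit for forecasts that stay closer to the no-arbitrage manifold.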


The One Where They Brain-Tune for Social Cognition: Multi-Modal Brain-Tuning on Friends

Policzer, Nico, Braunstein, Cameron, Toneva, Mariya

arXiv.org Artificial Intelligence

Recent studies on audio models show that brain-tuning (fine-tuning models to better predict corresponding fMRI activity) improves brain alignment and increases performance on downstream semantic and audio tasks. We extend this approach to a multimodal audio-video model to enhance social cognition, targeting the Superior Temporal Sulcus (STS), a key region for social processing, while subjects watch Friends. We find significant increases in brain alignment to the STS and an adjacent ROI, as well as improvements on a social cognition task related to the training data: sarcasm detection in sitcoms. In summary, our study extends brain-tuning to the multi-modal domain, demonstrating improvements on a downstream task after tuning to a relevant functional region.
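A minimal sketch of the brain-tuning idea, assuming a linear voxel readout on top of a pretrained encoder and a squared-error brain-alignment loss; the toy encoder, ROI size, and loss are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn as nn

class BrainTunedModel(nn.Module):
    def __init__(self, encoder, feat_dim=512, n_voxels=1000):
        super().__init__()
        self.encoder = encoder                           # pretrained audio-video encoder
        self.brain_head = nn.Linear(feat_dim, n_voxels)  # linear readout to ROI voxels
    def forward(self, av_clip):
        feats = self.encoder(av_clip)
        return feats, self.brain_head(feats)

# Toy stand-in for a pretrained multimodal encoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 512))
model = BrainTunedModel(encoder)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

av_clip = torch.randn(4, 3, 16, 16)  # toy audio-video features per fMRI volume
fmri = torch.randn(4, 1000)          # ROI voxel responses for the same clips

feats, pred_fmri = model(av_clip)
loss = ((pred_fmri - fmri) ** 2).mean()  # brain-alignment loss drives fine-tuning
loss.backward()
opt.step()

Because the encoder itself is updated (not just the readout), the representation shifts toward the targeted ROI, which is what plausibly carries over to the downstream sarcasm-detection task.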


Automata-Conditioned Cooperative Multi-Agent Reinforcement Learning

Yalcinkaya, Beyazit, Vazquez-Chanlatte, Marcell, Shah, Ameesh, Krasowski, Hanna, Seshia, Sanjit A.

arXiv.org Artificial Intelligence

We study the problem of learning multi-task, multi-agent policies for cooperative, temporal objectives under centralized training with decentralized execution. In this setting, using automata to represent tasks enables the decomposition of complex tasks into simpler sub-tasks that can be assigned to agents. However, existing approaches remain sample-inefficient and are limited to the single-task case. In this work, we present Automata-Conditioned Cooperative Multi-Agent Reinforcement Learning (ACC-MARL), a framework for learning task-conditioned, decentralized team policies. We identify the main challenges to ACC-MARL's feasibility in practice, propose solutions, and prove the correctness of our approach. We further show that the value functions of learned policies can be used to assign tasks optimally at test time. Experiments show emergent task-aware, multi-step coordination among agents, e.g., pressing a button to unlock a door, holding the door, and short-circuiting tasks.
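The sketch below illustrates, under simplified assumptions, the two ingredients named in the abstract: a policy conditioned on an embedded automaton (DFA) state, and test-time task assignment that maximizes the sum of learned values. The exhaustive search over permutations is only viable for small teams (a Hungarian-style solver would scale), and all names here are hypothetical.

import itertools
import torch
import torch.nn as nn

class AutomatonConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=8, dfa_dim=16, n_actions=5):
        super().__init__()
        self.dfa_embed = nn.Embedding(64, dfa_dim)  # embed current DFA state id
        self.net = nn.Sequential(nn.Linear(obs_dim + dfa_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs, dfa_state):
        z = self.dfa_embed(dfa_state)
        return self.net(torch.cat([obs, z], dim=-1))  # action logits

def assign_tasks(value_fn, obs, tasks, agents):
    # Pick the task permutation maximizing the summed learned values V(obs_i, task_j).
    best, best_perm = -float("inf"), None
    for perm in itertools.permutations(tasks):
        score = sum(value_fn(obs[i], t) for i, t in enumerate(perm))
        if score > best:
            best, best_perm = score, perm
    return dict(zip(agents, best_perm))

# Toy usage with a stand-in value function.
V = lambda o, t: -abs(o - t)  # hypothetical value estimate
assignment = assign_tasks(V, obs=[0.1, 0.9], tasks=[0, 1], agents=["a1", "a2"])
print(assignment)

Conditioning on the DFA state is what lets a single decentralized policy serve many tasks: the automaton tracks sub-task progress, and the policy only needs to act well given that progress.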


Assessing the robustness of heterogeneous treatment effects in survival analysis under informative censoring

Wang, Yuxin, Frauen, Dennis, Schweisthal, Jonas, Schröder, Maresa, Feuerriegel, Stefan

arXiv.org Machine Learning

Dropout is common in clinical studies, with up to half of patients leaving early due to side effects or other reasons. When dropout is informative (i.e., dependent on survival time), it introduces censoring bias, which in turn biases treatment effect estimates. In this paper, we propose an assumption-lean framework to assess the robustness of conditional average treatment effect (CATE) estimates in survival analysis when facing censoring bias. Unlike existing works that rely on strong assumptions, such as non-informative censoring, to obtain point estimates, we use partial identification to derive informative bounds on the CATE. Our framework thereby helps to identify patient subgroups where treatment is effective despite informative censoring. We further develop a novel meta-learner that estimates the bounds using arbitrary machine learning models and with favorable theoretical properties, including double robustness and quasi-oracle efficiency. We demonstrate the practical value of our meta-learner through numerical experiments and in an application to a cancer drug trial. Together, our framework offers a practical tool for assessing the robustness of estimated treatment effects in the presence of censoring and thus promotes the reliable use of survival data for evidence generation in medicine and epidemiology.
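To make the partial-identification idea concrete, here is a deliberately simplified sketch: inverse-probability-of-censoring pseudo-outcomes are tilted over a sensitivity range gamma, and the induced spread of CATE estimates gives bounds. The tilting device, the marginal censoring probability, and the random-forest learners are assumptions for illustration, not the paper's meta-learner.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.integers(0, 2, size=n)  # treatment indicator
T = np.exp(0.5 * X[:, 0] + 0.3 * A + rng.normal(scale=0.2, size=n))  # survival time
C = np.exp(rng.normal(scale=0.5, size=n))  # censoring time
Y = np.minimum(T, C)
delta = (T <= C).astype(float)             # event indicator

def cate_bounds(X, A, Y, delta, gamma=1.5):
    # IPCW pseudo-outcomes, tilted in both directions by gamma to sweep an
    # assumed range of informative censoring.
    p_uncens = max(delta.mean(), 1e-3)
    estimates = []
    for tilt in (1.0 / gamma, gamma):
        w = delta * tilt / p_uncens
        mu = []
        for a in (0, 1):
            m = RandomForestRegressor(n_estimators=50, random_state=0)
            m.fit(X[A == a], w[A == a] * Y[A == a])  # regress pseudo-outcome on X
            mu.append(m.predict(X))
        estimates.append(mu[1] - mu[0])
    return np.minimum(*estimates), np.maximum(*estimates)

lo, hi = cate_bounds(X, A, Y, delta)
robust = lo > 0  # subgroup where treatment helps under every censoring scenario in the range
print("fraction with robustly positive CATE:", robust.mean())

The qualitative takeaway matches the abstract: where the whole interval [lo, hi] stays above zero, the estimated benefit survives any informative-censoring mechanism within the assumed sensitivity range.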


Visual Cues Enhance Predictive Turn-Taking for Two-Party Human Interaction

Russell, Sam O'Connor, Harte, Naomi

arXiv.org Artificial Intelligence

Turn-taking is richly multimodal. Predictive turn-taking models (PTTMs) facilitate naturalistic human-robot interaction, yet most rely solely on speech. We introduce MM-VAP, a multimodal PTTM which combines speech with visual cues including facial expression, head pose and gaze. We find that it outperforms the state-of-the-art audio-only model in videoconferencing interactions (84% vs. 79% hold/shift prediction accuracy). Unlike prior work, which aggregates all holds and shifts, we group by duration of silence between turns. This reveals that, through the inclusion of visual features, MM-VAP outperforms a state-of-the-art audio-only turn-taking model across all durations of speaker transitions. We conduct a detailed ablation study, which reveals that facial expression features contribute the most to model performance. Thus, our working hypothesis is that when interlocutors can see one another, visual cues are vital for turn-taking and must therefore be included for accurate turn-taking prediction. We additionally validate the suitability of automatic speech alignment for PTTM training using telephone speech. This work represents the first comprehensive analysis of multimodal PTTMs. We discuss implications for future work and make all code publicly available.
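A minimal sketch of a multimodal PTTM classifier in the spirit of MM-VAP, assuming per-frame audio and visual feature streams fused by two recurrent encoders; the feature dimensions and late-fusion design are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

class MultimodalPTTM(nn.Module):
    def __init__(self, audio_dim=80, visual_dim=24, hidden=64):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.visual_rnn = nn.GRU(visual_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # hold vs. shift at the next transition
    def forward(self, audio, visual):
        _, ha = self.audio_rnn(audio)
        _, hv = self.visual_rnn(visual)
        return self.head(torch.cat([ha[-1], hv[-1]], dim=-1))

model = MultimodalPTTM()
audio = torch.randn(4, 100, 80)   # e.g., log-mel frames
visual = torch.randn(4, 100, 24)  # e.g., facial action units + head pose + gaze per frame
logits = model(audio, visual)
print(logits.shape)  # (4, 2)

An ablation of the kind the abstract describes would simply zero out one input stream (or drop its encoder) and compare hold/shift accuracy, which is how the contribution of facial expression features could be isolated.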


Modeling Turn-Taking with Semantically Informed Gestures

Suresh, Varsha, Mughal, M. Hamza, Theobalt, Christian, Demberg, Vera

arXiv.org Artificial Intelligence

In conversation, humans use multimodal cues, such as speech, gestures, and gaze, to manage turn-taking. While linguistic and acoustic features are informative, gestures provide complementary cues for modeling these transitions. To study this, we introduce DnD Gesture++, an extension of the multi-party DnD Gesture corpus enriched with 2,663 semantic gesture annotations spanning iconic, metaphoric, deictic, and discourse types. Using this dataset, we model turn-taking prediction through a Mixture-of-Experts framework integrating text, audio, and gestures. Experiments show that incorporating semantically guided gestures yields consistent performance gains over baselines, demonstrating their complementary role in multimodal turn-taking.
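The following sketch shows one plausible Mixture-of-Experts arrangement, assuming one expert network per modality and a softmax gate over the concatenated features; the gating design and feature sizes are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class MoETurnTaking(nn.Module):
    def __init__(self, dims=None, hidden=64):
        super().__init__()
        dims = dims or {"text": 300, "audio": 80, "gesture": 32}
        self.experts = nn.ModuleDict(
            {k: nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for k, d in dims.items()})
        self.gate = nn.Linear(sum(dims.values()), len(dims))  # soft weights per expert
        self.head = nn.Linear(hidden, 2)                      # turn shift vs. hold
    def forward(self, feats):
        keys = sorted(feats)
        gate_w = torch.softmax(self.gate(torch.cat([feats[k] for k in keys], -1)), -1)
        mix = sum(gate_w[:, i:i + 1] * self.experts[k](feats[k])
                  for i, k in enumerate(keys))
        return self.head(mix)

model = MoETurnTaking()
feats = {"text": torch.randn(4, 300),
         "audio": torch.randn(4, 80),
         "gesture": torch.randn(4, 32)}
print(model(feats).shape)  # (4, 2)

The appeal of a gated mixture here is interpretability: the gate weights indicate, per utterance, how much the model leans on gestures versus text or audio when predicting the transition.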