
Russia-Ukraine war: List of key events, day 1,391

Al Jazeera

A Russian drone attack killed a 62-year-old Ukrainian man as he was riding a bicycle in the Velyka Pysarivka community of Ukraine's Sumy region, Governor Oleh Hryhorov said in a post on the Telegram messaging app. Russian forces launched 850 attacks on Ukraine's Zaporizhia region in a single day, injuring 14 people and damaging houses, cars and infrastructure, Governor Ivan Fedorov said on Telegram.


Russia-Ukraine war: List of key events, day 1,389

Al Jazeera

Two people were killed in a Ukrainian drone strike on the Russian city of Saratov, regional Governor Roman Busargin said in a statement on Telegram. An unspecified number of people were also injured in the attack.


LingGym: How Far Are LLMs from Thinking Like Field Linguists?

Yang, Changbing, Ma, Franklin, Shi, Freda, Zhu, Jian

arXiv.org Artificial Intelligence

This paper introduces LingGym, a new benchmark that evaluates LLMs' capacity for meta-linguistic reasoning using Interlinear Glossed Text (IGT) and grammatical descriptions extracted from 18 typologically diverse reference grammars. Unlike previous work that focuses on specific downstream tasks, we assess whether LLMs can generalize linguistic inference across low-resource languages and structures not seen during training. We present a controlled evaluation task: Word-Gloss Inference, in which the model must infer a missing word and gloss from context using varying levels of linguistic information (e.g., glosses, grammatical explanations, translations). Our results show that incorporating structured linguistic cues leads to consistent improvements in reasoning performance across all models. This work highlights both the promise and current limitations of using LLMs for typologically informed linguistic analysis and low-resource language documentation.
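The Word-Gloss Inference task described above can be sketched in code: mask one word/gloss pair in an interlinear glossed text (IGT) line and assemble a prompt that exposes progressively more linguistic cues. This is a hypothetical illustration; the field names, prompt wording, and cue levels below are assumptions, not the benchmark's actual format.

```python
# Illustrative sketch of Word-Gloss Inference prompt construction.
# The IGT record structure and prompt text are hypothetical.

def build_wgi_prompt(igt, mask_index, cues=("gloss", "translation")):
    """Mask one word/gloss pair and assemble a prompt with optional cues."""
    words = igt["words"][:]
    glosses = igt["glosses"][:]
    target = (words[mask_index], glosses[mask_index])  # held-out answer
    words[mask_index] = "____"
    glosses[mask_index] = "____"

    lines = ["Infer the missing word and its gloss.",
             "Words:   " + " ".join(words)]
    if "gloss" in cues:
        lines.append("Glosses: " + " ".join(glosses))
    if "translation" in cues and "translation" in igt:
        lines.append("Translation: " + igt["translation"])
    if "grammar" in cues and "grammar" in igt:
        lines.append("Grammar note: " + igt["grammar"])
    return "\n".join(lines), target

example = {
    "words": ["ni-li-soma", "kitabu"],       # Swahili: 'I read a book'
    "glosses": ["1SG-PST-read", "book"],
    "translation": "I read a book.",
}
prompt, answer = build_wgi_prompt(example, mask_index=0)
```

Varying the `cues` tuple reproduces the paper's idea of evaluating the model under different levels of linguistic information.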


Self-Supervised Learning Strategies for a Platform to Test the Toxicity of New Chemicals and Materials

Lautenschlager, Thomas, Friederich, Nils, Sitcheu, Angelo Jovin Yamachui, Nau, Katja, Hayot, Gaëlle, Dickmeis, Thomas, Mikut, Ralf

arXiv.org Artificial Intelligence

High-throughput toxicity testing offers a fast and cost-effective way to screen large numbers of compounds. A key component of such systems is automated evaluation via machine learning models. In this paper, we address critical challenges in this domain and demonstrate how representations learned via self-supervised learning can effectively identify toxicant-induced changes. We provide a proof-of-concept that utilizes the publicly available EmbryoNet dataset, which contains ten zebrafish embryo phenotypes elicited by various chemical compounds targeting different processes in early embryonic development. Our analysis shows that the self-supervised representations are suitable for effectively distinguishing between the modes-of-action of different compounds. Finally, we discuss the integration of machine learning models in a physical toxicity testing device in the context of the TOXBOX project.
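Once a self-supervised encoder has mapped embryo images to embedding vectors, one simple way to test whether the representation separates modes-of-action is a nearest-centroid probe. The sketch below assumes this setup; the encoder itself is out of scope, so synthetic, well-separated embeddings stand in for its outputs, and the class names are invented.

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    """Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_centroid(centroids, v):
    """Label of the closest class centroid."""
    return min(centroids, key=lambda c: dist(centroids[c], v))

rng = random.Random(0)
# Two hypothetical modes-of-action, well separated in embedding space.
X_a = [[rng.gauss(0.0, 0.1) for _ in range(8)] for _ in range(20)]
X_b = [[rng.gauss(1.0, 0.1) for _ in range(8)] for _ in range(20)]
centroids = {"moa_A": centroid(X_a), "moa_B": centroid(X_b)}
preds = [nearest_centroid(centroids, v) for v in X_a + X_b]
labels = ["moa_A"] * 20 + ["moa_B"] * 20
accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
```

High probe accuracy on held-out compounds would indicate, as the paper argues, that the learned representation captures mode-of-action structure without supervised labels.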


A Reference: LPF Methods on AlpacaFarm

Neural Information Processing Systems

A.1 Methods that directly learn from pairwise feedback. To start, we describe training the surrogate reward model; we adapt this approach in AlpacaFarm as a two-step method. In Appendix F, we include our preliminary study of multi-round expert iteration. Figure 5 caption: Our simulated annotators are cheap and match well with human annotators.
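The surrogate reward model mentioned above is typically trained with a Bradley-Terry style preference loss, which pushes the reward of the preferred response above that of the rejected one. The minimal sketch below illustrates that objective on scalar rewards standing in for a learned model's outputs; it is a common formulation for learning from pairwise feedback, not necessarily this paper's exact one.

```python
import math

def pairwise_loss(r_chosen, r_rejected):
    """Negative log-likelihood that the chosen response wins
    under a Bradley-Terry model: -log sigmoid(r_chosen - r_rejected)."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the reward margin between chosen and rejected grows.
close = pairwise_loss(0.1, 0.0)   # small margin -> larger loss
wide = pairwise_loss(2.0, 0.0)    # large margin -> smaller loss
```

In practice the scalar rewards come from a model conditioned on the instruction and response, and this loss is minimized over a dataset of annotator preference pairs.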


The best gadgets and gear we saw at IFA 2025 in Berlin

Popular Science

We may earn revenue from the products available on this page and participate in affiliate programs. Under the spidery lattice of Berlin's Funkturm radio tower, the IFA 2025 consumer electronics expo spills through every lair and level of the Messe grounds. These century-old show floors began as broadcasting industry exhibition halls and never stopped mutating. I have a diagram to guide me, but it might as well be a Zelda minimap leading me through this boss fight masquerading as a building. It's architecture you operate: escalators as levers, skybridges as secret passages, doors as puzzle switches unlocking the next gear reveal.


Towards Skeletal and Signer Noise Reduction in Sign Language Production via Quaternion-Based Pose Encoding and Contrastive Learning

Fauré, Guilhem, Sadeghi, Mostafa, Bigeard, Sam, Ouni, Slim

arXiv.org Artificial Intelligence

One of the main challenges in neural sign language production (SLP) lies in the high intra-class variability of signs, arising from signer morphology and stylistic variety in the training data. To improve robustness to such variations, we propose two enhancements to the standard Progressive Transformers (PT) architecture (Saunders et al., 2020). First, we encode poses using bone rotations in quaternion space and train with a geodesic loss to improve the accuracy and clarity of angular joint movements. Second, we introduce a contrastive loss to structure decoder embeddings by semantic similarity, using either gloss overlap or SBERT-based sentence similarity, aiming to filter out anatomical and stylistic features that do not convey relevant semantic information. On the Phoenix14T dataset, the contrastive loss alone yields a 16% improvement in Probability of Correct Keypoint over the PT baseline. When combined with quaternion-based pose encoding, the model achieves a 6% reduction in Mean Bone Angle Error. These results point to the benefit of incorporating skeletal structure modeling and semantically guided contrastive objectives on sign pose representations into the training of Transformer-based SLP models.
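The quaternion-based pose encoding above pairs naturally with a geodesic loss: the angle of the shortest rotation between a predicted and a target bone orientation. A minimal sketch of that distance follows; the function name is illustrative, and this is the standard sign-invariant formulation for unit quaternions rather than the paper's exact implementation.

```python
import math

def quat_geodesic(q1, q2):
    """Geodesic angle (radians) between two unit quaternions.
    The absolute value makes the distance invariant to the q / -q
    sign ambiguity, since both represent the same rotation."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)  # guard against floating-point overshoot
    return 2.0 * math.acos(dot)

identity = (1.0, 0.0, 0.0, 0.0)
# 90-degree rotation about the z-axis: (cos 45 deg, 0, 0, sin 45 deg)
rot90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
angle = quat_geodesic(identity, rot90z)
```

Averaging this angle over all bones and frames gives a training loss that penalizes angular joint error directly, which is the intuition behind the geodesic objective described in the abstract.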