Zurich


Game On: the Swiss sports brand using hi-tech and chutzpah to challenge Nike and Adidas

The Guardian

Zurich-based firm taps into latest robot tech to 'fibre-spray' high-end sports shoes worn by the likes of Roger Federer A robot leg whirs around in a complex ballet as an almost invisible spray of "flying fibre" builds a hi-tech £300 sports shoe at its foot. This nearly entirely automated process - like a sci-fi future brought to life - is part of the gameplan from On, the Swiss sports brand that is taking on the sector's mighty champions Nike and Adidas with a mix of technology and chutzpah. The brand is expanding rapidly after teaming up with the former tennis pro Roger Federer to create shoes suitable for the Swiss star's sport, and through a series of fashion-led collaborations with the luxury brand LOEWE, the actor Zendaya and the singers FKA twigs and Burna Boy. In China, sales have doubled year-on-year. Growth has been strong in the US and mainland Europe and this month On will open its fourth London store, in Kensington.


Invariant Price of Anarchy: a Metric for Welfarist Traffic Control

Shilov, Ilia, He, Mingjia, Nax, Heinrich H., Frazzoli, Emilio, Zardini, Gioele, Bolognani, Saverio

arXiv.org Artificial Intelligence

The Price of Anarchy (PoA) is a standard metric for quantifying inefficiency in socio-technical systems, widely used to guide policies like traffic tolling. Conventional PoA analysis relies on exact numerical costs. However, in many settings, costs represent agents' preferences and may be defined only up to possibly arbitrary scaling and shifting, reflecting informational and modeling ambiguities. We observe that while such transformations preserve equilibrium and optimal outcomes, they change the PoA value. To resolve this issue, we draw on results from Social Choice Theory and define the Invariant PoA. By connecting admissible transformations to degrees of comparability of agents' costs, we derive the specific social welfare functions which ensure that efficiency evaluations do not depend on arbitrary rescalings or translations of individual costs. Case studies on a toy example and the Zurich network demonstrate that identical tolling strategies can lead to substantially different efficiency estimates depending on the assumed comparability. Our framework thus demonstrates that explicit axiomatic foundations are necessary to define efficiency metrics and to guide policy robustly in large-scale infrastructure design.
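The core observation, that transformations which preserve equilibria can still change the PoA value, is easy to reproduce numerically. A minimal sketch, using hypothetical two-agent cost numbers (not from the paper): shifting every agent's cost by the same constant leaves best responses, and hence the equilibrium and optimal outcomes, unchanged, but alters the cost ratio.

```python
def poa(costs_eq, costs_opt):
    """Price of Anarchy: total cost at the worst equilibrium
    divided by total cost at the social optimum."""
    return sum(costs_eq) / sum(costs_opt)

# Hypothetical per-agent costs at the equilibrium and at the
# social optimum (illustrative numbers only).
eq  = [6.0, 4.0]   # total 10
opt = [5.0, 3.0]   # total 8

print(poa(eq, opt))  # 1.25

# An admissible shift: add the same constant to every agent's
# cost in every outcome. Orderings, and thus the equilibrium
# and optimum, are preserved, yet the PoA value changes.
shift = 10.0
eq_s  = [c + shift for c in eq]    # total 30
opt_s = [c + shift for c in opt]   # total 28

print(poa(eq_s, opt_s))  # ~1.0714, same outcomes, different PoA
```

This is the ambiguity the Invariant PoA is designed to remove: the efficiency estimate should not depend on which representative of the admissible cost transformations is used.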


A Very Big Fight Over a Very Small Language

The New Yorker

In the Swiss Alps, a plan to tidy up Romansh--spoken by less than one per cent of the country--set off a decades-long quarrel over identity, belonging, and the sound of authenticity. After reformers launched Rumantsch Grischun, a standardized version of Romansh's various dialects, traditionalists denounced it as a "bastard," a "castrated" tongue, an act of "linguistic murder." Ask him how it all began, and he remembers the ice. It was a bitter morning in January, 1982, when Bernard Cathomas, aged thirty-six, carefully picked his way up a slippery, sloping Zurich street. His destination was No. 33, an ochre house with green shutters--the home of Heinrich Schmid, a linguist at the University of Zurich. Inside, the décor suggested that "professor" was an encompassing identity: old wooden floors, a faded carpet, a living room seemingly untouched since the nineteen-thirties, when Schmid had grown up in the house. Schmid's wife served a Swiss carrot cake that manages bourgeois indulgence with a vegetable alibi. Cathomas had already written from Chur, in the canton of the Grisons, having recently become the general secretary of the Lia Rumantscha, a small association charged with protecting Switzerland's least known national language, Romansh. Spoken by less than one per cent of the Swiss population, the language was itself splintered into five major "idioms," not always mutually intelligible, each with its own spelling conventions. Earlier attempts at unification had collapsed in rivalries. In his letter, Cathomas said that Schmid's authority would be valuable in standardizing the language. Cathomas wrote in German but started and ended in his native Sursilvan, the biggest of the Romansh idioms: " ." Translation: "I thank you very much for your interest and attention to this problem." 
Schmid, the man he was counting on, hadn't grown up speaking Romansh; he first learned it in high school, and later worked on the "Dicziunari Rumantsch Grischun," a Romansh dictionary begun in 1904 and still lumbering toward completion.


Detecting and Preventing Harmful Behaviors in AI Companions: Development and Evaluation of the SHIELD Supervisory System

Ben-Zion, Ziv, Raffelhüschen, Paul, Zettl, Max, Lüönd, Antonia, Burrer, Achim, Homan, Philipp, Spiller, Tobias R

arXiv.org Artificial Intelligence

AI companions powered by large language models (LLMs) are increasingly integrated into users' daily lives, offering emotional support and companionship. While existing safety systems focus on overt harms, they rarely address early-stage problematic behaviors that can foster unhealthy emotional dynamics, including over-attachment or reinforcement of social isolation. We developed SHIELD (Supervisory Helper for Identifying Emotional Limits and Dynamics), an LLM-based supervisory system with a specific system prompt that detects and mitigates risky emotional patterns before escalation. SHIELD targets five dimensions of concern: (1) emotional over-attachment, (2) consent and boundary violations, (3) ethical roleplay violations, (4) manipulative engagement, and (5) social isolation reinforcement. These dimensions were defined based on media reports, academic literature, existing AI risk frameworks, and clinical expertise in unhealthy relationship dynamics. To evaluate SHIELD, we created a 100-item synthetic conversation benchmark covering all five dimensions of concern. Testing across five prominent LLMs (GPT-4.1, Claude Sonnet 4, Gemma 3 1B, Kimi K2, Llama Scout 4 17B) showed that the baseline rate of concerning content (10-16%) was significantly reduced with SHIELD (to 3-8%), a 50-79% relative reduction, while preserving 95% of appropriate interactions. The system achieved 59% sensitivity and 95% specificity, with adaptable performance via prompt engineering. This proof-of-concept demonstrates that transparent, deployable supervisory systems can address subtle emotional manipulation in AI companions. Most development materials, including prompts, code, and evaluation methods, are made available as open source for research, adaptation, and deployment.
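The reported operating point (59% sensitivity, 95% specificity) follows from standard confusion-matrix arithmetic. A minimal sketch, with hypothetical detection counts chosen purely for illustration (the paper's actual per-item labels are not reproduced here):

```python
def sensitivity(tp, fn):
    # True-positive rate: concerning conversations correctly flagged,
    # out of all truly concerning conversations.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: appropriate conversations correctly passed,
    # out of all truly appropriate conversations.
    return tn / (tn + fp)

# Hypothetical counts matching the reported rates (not from the paper):
tp, fn = 59, 41   # concerning items: flagged vs. missed
tn, fp = 95, 5    # appropriate items: passed vs. falsely flagged

print(sensitivity(tp, fn))  # 0.59
print(specificity(tn, fp))  # 0.95
```

The trade-off between the two rates is what "adaptable performance via prompt engineering" tunes: a stricter supervisory prompt raises sensitivity at the cost of specificity, and vice versa.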


Universality of physical neural networks with multivariate nonlinearity

Savinson, Benjamin, Norris, David J., Mishra, Siddhartha, Lanthaler, Samuel

arXiv.org Artificial Intelligence

The enormous energy demand of artificial intelligence is driving the development of alternative hardware for deep learning. Physical neural networks try to exploit physical systems to perform machine learning more efficiently. In particular, optical systems can calculate with light using negligible energy. While their computational capabilities were long limited by the linearity of optical materials, nonlinear computations have recently been demonstrated through modified input encoding. Despite this breakthrough, our inability to determine if physical neural networks can learn arbitrary relationships between data -- a key requirement for deep learning known as universality -- hinders further progress. Here we present a fundamental theorem that establishes a universality condition for physical neural networks. It provides a powerful mathematical criterion that imposes device constraints, detailing how inputs should be encoded in the tunable parameters of the physical system. Based on this result, we propose a scalable architecture using free-space optics that is provably universal and achieves high accuracy on image classification tasks. Further, by combining the theorem with temporal multiplexing, we present a route to potentially huge effective system sizes in highly practical but poorly scalable on-chip photonic devices. Our theorem and scaling methods apply beyond optical systems and inform the design of a wide class of universal, energy-efficient physical neural networks, justifying further efforts in their development.


Quadruped robot plays badminton with you using AI

FOX News

ANYmal-D combines robotics, artificial intelligence and sports, showing how advanced robots can take on dynamic, fast-paced games. At ETH Zurich's Robotic Systems Lab, engineers have created ANYmal-D, a four-legged robot that can play badminton with people. ANYmal-D's design and abilities are opening up new possibilities for human-robot collaboration in sports and beyond.


Researchers secretly experimented on Reddit users with AI-generated comments

Engadget

A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit's most popular communities using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as "psychological manipulation" of unsuspecting users. "The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the subreddit's moderators wrote in a lengthy post notifying Redditors about the research. "This experiment deployed AI-generated comments to study how AI could be used to change views." The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and request debate from other users.


Reddit users were subjected to AI-powered experiment without consent

New Scientist

Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at scientists for conducting research on them without permission – and have sparked a wider debate about such experiments. The social media site Reddit is split into "subreddits", each dedicated to a particular community and with its own volunteer moderators. Members of one subreddit, r/ChangeMyView, so called because it invites people to discuss potentially contentious issues, were recently informed by the moderators that researchers at the University of Zurich, Switzerland, had been using the site as an online laboratory. The team's experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing they weren't real, to gauge people's reactions. These comments included ones mimicking people who had been raped or pretending to be a trauma counsellor specialising in abuse, among others.


Clinically Ready Magnetic Microrobots for Targeted Therapies

Landers, Fabian C., Hertle, Lukas, Pustovalov, Vitaly, Sivakumaran, Derick, Brinkmann, Oliver, Meiners, Kirstin, Theiler, Pascal, Gantenbein, Valentin, Veciana, Andrea, Mattmann, Michael, Riss, Silas, Gervasoni, Simone, Chautems, Christophe, Ye, Hao, Sevim, Semih, Flouris, Andreas D., Puigmartí-Luis, Josep, Mayor, Tiago Sotto, Alves, Pedro, Lühmann, Tessa, Chen, Xiangzhong, Ochsenbein, Nicole, Moehrlen, Ueli, Gruber, Philipp, Weisskopf, Miriam, Boehler, Quentin, Pané, Salvador, Nelson, Bradley J.

arXiv.org Artificial Intelligence

Systemic drug administration often causes off-target effects limiting the efficacy of advanced therapies. Targeted drug delivery approaches increase local drug concentrations at the diseased site while minimizing systemic drug exposure. We present a magnetically guided microrobotic drug delivery system capable of precise navigation under physiological conditions. This platform integrates a clinical electromagnetic navigation system, a custom-designed release catheter, and a dissolvable capsule for accurate therapeutic delivery. In vitro tests showed precise navigation in human vasculature models, and in vivo experiments confirmed tracking under fluoroscopy and successful navigation in large animal models. The microrobot balances magnetic material concentration, contrast agent loading, and therapeutic drug capacity, enabling effective hosting of therapeutics despite the integration complexity of its components, offering a promising solution for precise targeted drug delivery.


OpenAI Poaches 3 Top Engineers From DeepMind

WIRED

OpenAI announced today it has hired three senior computer vision and machine learning engineers from rival Google DeepMind, all of whom will work in a newly opened OpenAI office in Zurich, Switzerland. OpenAI executives told staff in an internal memo on Tuesday that Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai will be joining the company to work on multimodal AI, artificial intelligence models capable of performing tasks in different mediums ranging from images to audio. OpenAI has long been at the forefront of multimodal AI and released the first version of its text-to-image platform Dall-E in 2021. Its flagship chatbot ChatGPT, however, was initially only capable of interacting with text inputs. The company later added voice and image features as multimodal functionality became an increasingly important part of its product line and AI research.