Reversible, detachable robotic hand redefines dexterity

Robohub

With their opposable thumbs, multiple joints and gripping skin, human hands are often considered the pinnacle of dexterity, and many robotic hands are designed in their image. But having been shaped by the slow process of evolution, human hands are far from optimized; the biggest drawbacks include our single, asymmetrical thumbs and their attachment to arms with limited mobility. "We can easily see the limitations of the human hand when attempting to reach objects underneath furniture or behind shelves, or performing simultaneous tasks like holding a bottle while picking up a chip can," says Aude Billard, head of the Learning Algorithms and Systems Laboratory (LASA) in EPFL's School of Engineering. "Likewise, accessing objects positioned behind the hand while keeping the grip stable can be extremely challenging, requiring awkward wrist contortions or body repositioning." A team composed of Billard, LASA researcher Xiao Gao, and Kai Junge and Josie Hughes from the Computational Robot Design and Fabrication Lab designed a robotic hand that overcomes these challenges.



Robot, make me a chair

Robohub

Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don't lend themselves to brainstorming or rapid prototyping. In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words. Their system uses a generative AI model to build a 3D representation of an object's geometry based on the user's prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object's function and geometry.



We're about to simulate a human brain on a supercomputer

New Scientist

The world's most powerful supercomputers can now run simulations of billions of neurons, and researchers hope such models will offer unprecedented insights into how our brains work. What would it mean to simulate a human brain? Today's most powerful computing systems now contain enough computational firepower to run simulations of billions of neurons, comparable to the sophistication of real brains. We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden. Researchers have long tried to isolate specific parts of the brain, modelling smaller regions with a computer to explain particular brain functions. But "we have never been able to bring them all together into one place, into one larger brain model where we can check whether these ideas are at all consistent", says Markus Diesmann at the Jülich Research Centre in Germany.
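At toy scale, the point-neuron updates that such large-scale simulators run for billions of cells can be sketched as a leaky integrate-and-fire loop. This is a minimal illustration with made-up parameters, not the model used at Jülich:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy version of the
# point-neuron update that brain simulators repeat for billions of cells.
# All parameters are illustrative, not taken from any published brain model.

def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=10.0, dt=0.1):
    """Integrate dV/dt = (v_rest - V + I) / tau_m; record spike times at threshold."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        v += dt * ((v_rest - v + i_ext) / tau_m)  # Euler step of the membrane equation
        if v >= v_thresh:                          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                            # reset after spiking
    return spikes

# Constant suprathreshold drive produces regular spiking.
spike_times = simulate_lif([20.0] * 1000)          # 100 ms of constant input
```

A full-brain simulation repeats this kind of update, plus synaptic coupling, across billions of neurons in parallel; the scientific challenge the article describes is wiring those neurons together consistently, not the per-neuron arithmetic.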


Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory

Neural Information Processing Systems

Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. Our experiments also reveal limitations in existing models' ability to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities.
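The primacy and recency effects mentioned above are the classic U-shaped serial-position curve: items at the start and end of a list are recalled better than those in the middle. A toy analysis of that curve might look like the following sketch; the accuracy numbers are fabricated for illustration and are not WorM data:

```python
# Toy serial-position analysis: primacy and recency effects are the U-shaped
# recall curve that the WorM benchmark probes in AI models.
# The accuracy values below are fabricated for illustration only.

def serial_position_effects(accuracy_by_position):
    """Return (primacy, recency): how much the first/last positions beat the middle."""
    n = len(accuracy_by_position)
    middle = accuracy_by_position[n // 2]
    primacy = accuracy_by_position[0] - middle   # early-item advantage
    recency = accuracy_by_position[-1] - middle  # late-item advantage
    return primacy, recency

# A U-shaped curve: strong recall at both ends, weaker in the middle.
curve = [0.92, 0.80, 0.65, 0.60, 0.70, 0.88]
primacy, recency = serial_position_effects(curve)
```

A model is said to replicate the human effect when both deltas are positive on its recall trials, as the paper reports for its trained networks.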


Self Distillation Fine-Tuning of Protein Language Models Improves Versatility in Protein Design

Tavakoli, Amin, Murugan, Raswanth, Gokdemir, Ozan, Ramanathan, Arvind, Arnold, Frances, Anandkumar, Anima

arXiv.org Artificial Intelligence

Supervised fine-tuning (SFT) is a standard approach for adapting large language models to specialized domains, yet its application to protein sequence modeling and protein language models (PLMs) remains ad hoc. This is in part because high-quality annotated data are far more difficult to obtain for proteins than for natural language. We present a simple and general recipe for fast SFT of PLMs, designed to improve the fidelity, reliability, and novelty of generated protein sequences. Unlike existing approaches that require costly precompiled experimental datasets for SFT, our method leverages the PLM itself, integrating a lightweight curation pipeline with domain-specific filters to construct high-quality training data. These filters can independently refine a PLM's output and identify candidates for in vitro evaluation; when combined with SFT, they enable PLMs to generate more stable and functional enzymes, while expanding exploration into protein sequence space beyond natural variants. Although our approach is agnostic to both the choice of PLM and the protein system, we demonstrate its effectiveness with a genome-scale PLM (GenSLM) applied to the tryptophan synthase enzyme family. The supervised fine-tuned model generates sequences that are not only more novel but also display improved characteristics across both targeted design constraints and emergent protein property measures.
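The generate-filter-curate loop described in the abstract can be sketched as follows. The sampler and both filters here are hypothetical stand-ins, not the paper's GenSLM pipeline or its actual filter criteria:

```python
# Sketch of the self-distillation curation loop: the PLM's own samples are
# screened by domain-specific filters to build SFT data. The generator and
# filters are hypothetical stand-ins, not the paper's GenSLM pipeline.
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_plm(n, length=40):
    """Stand-in for sampling candidate sequences from a protein language model."""
    return ["".join(random.choice(AMINO_ACIDS) for _ in range(length))
            for _ in range(n)]

def passes_filters(seq, max_cys_frac=0.05, must_contain="G"):
    """Illustrative domain filters: cap cysteine content, require a motif."""
    return seq.count("C") / len(seq) <= max_cys_frac and must_contain in seq

def curate_sft_data(n_samples):
    """Generate with the PLM, keep only sequences that pass every filter."""
    return [s for s in sample_plm(n_samples) if passes_filters(s)]

sft_data = curate_sft_data(200)  # curated pool for supervised fine-tuning
```

The key idea the recipe exploits is that the filters are cheap to run on model output, so no precompiled experimental dataset is needed: the curated pool feeds a standard SFT step, and the same filters can also triage candidates for in vitro evaluation.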


Naya Create Review: A Split Keyboard That Just Doesn't Work

WIRED

A beautifully designed split keyboard that seems utterly determined not to work. Strange quirks in setup require extensive troubleshooting. Modules are difficult to use when tented. I really wanted to like the Naya Create. It's as if Apple tried its hand at an ergonomic keyboard.


Beyond Formal Semantics for Capabilities and Skills: Model Context Protocol in Manufacturing

da Silva, Luis Miguel Vieira, Köcher, Aljosha, Gehlhoff, Felix

arXiv.org Artificial Intelligence

Explicit modeling of capabilities and skills -- whether based on ontologies, Asset Administration Shells, or other technologies -- requires considerable manual effort and often results in representations that are not easily accessible to Large Language Models (LLMs). In this work-in-progress paper, we present an alternative approach based on the recently introduced Model Context Protocol (MCP). MCP allows systems to expose functionality through a standardized interface that is directly consumable by LLM-based agents. We conduct a prototypical evaluation on a laboratory-scale manufacturing system, where resource functions are made available via MCP. A general-purpose LLM is then tasked with planning and executing a multi-step process, including constraint handling and the invocation of resource functions via MCP. The results indicate that such an approach can enable flexible industrial automation without relying on explicit semantic models. This work lays the groundwork for further exploration of external tool integration in LLM-driven production systems.
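The interaction pattern the paper evaluates can be illustrated at a small scale: MCP standardizes tool invocation as JSON-RPC 2.0 messages, so an agent calls a resource function by sending a "tools/call" request. The sketch below mimics that message shape with a stdlib-only dispatcher; the tool name "move_part" and its arguments are hypothetical, not from the paper's testbed:

```python
# Minimal illustration of the MCP interaction pattern: an agent invokes a
# resource function through a JSON-RPC 2.0 "tools/call" request, the message
# shape MCP standardizes. The "move_part" tool here is hypothetical.
import json

TOOLS = {
    # Hypothetical manufacturing resource function exposed to the LLM agent.
    "move_part": lambda args: f"part {args['part_id']} moved to {args['station']}",
}

def handle_request(raw):
    """Dispatch a tools/call request to the registered resource function."""
    req = json.loads(raw)
    if req["method"] != "tools/call":
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "method not found"}}
    params = req["params"]
    result = TOOLS[params["name"]](params["arguments"])
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "move_part",
               "arguments": {"part_id": "A7", "station": "assembly"}},
})
response = handle_request(request)
```

Because the interface is self-describing (a real MCP server also answers "tools/list" with each tool's name, description, and input schema), the LLM can discover and plan over the available resource functions without a hand-built ontology, which is the paper's central claim.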


AI/ML in 3GPP 5G Advanced -- Services and Architecture

Taksande, Pradnya, Kiran, Shwetha, Jha, Pranav, Chaporkar, Prasanna

arXiv.org Artificial Intelligence

Abstract--The 3rd Generation Partnership Project (3GPP), the standards body for mobile networks, is in the final phase of Release 19 standardization and is beginning Release 20. Artificial Intelligence/Machine Learning (AI/ML) has brought about a paradigm shift in technology and is being adopted across industries and verticals. This paper focuses on the AI/ML-related technological advancements and features introduced in Release 19 within the Service and System Aspects (SA) Technical Specifications Group of 3GPP. The advancements relate to two paradigms, including enhancements that AI/ML brings to the 5G Advanced system (AI for network). Artificial Intelligence (AI) and Machine Learning (ML) are transforming numerous industries and multiple aspects of modern life. From personalized recommendations on streaming platforms to real-time fraud detection in banking, AI/ML technologies are driving smarter decision-making across industries. In retail, they assist in inventory and supply chain management. In transportation, autonomous vehicles rely on ML for object detection and navigation. As data continues to grow, these technologies are evolving rapidly, reshaping how we work, interact, and solve complex problems, making them central to innovation in today's world.