DeepXPalm: Tilt and Position Rendering using Palm-worn Haptic Display and CNN-based Tactile Pattern Recognition
Altamirano Cabrera, Miguel, Sautenkov, Oleg, Tirado, Jonathan, Fedoseev, Aleksey, Kopanev, Pavel, Kajimoto, Hiroyuki, Tsetserukou, Dzmitry
Telemanipulation of deformable objects requires high precision and dexterity from the users, which can be increased by kinesthetic and tactile feedback. However, the object shape can change dynamically, causing ambiguous perception of its alignment and hence errors in the robot positioning. Therefore, the tilt angle and position classification problem has to be solved to present a clear tactile pattern to the user. This work presents a telemanipulation system for plastic pipettes consisting of a multi-contact haptic device LinkGlide to deliver haptic feedback at the users' palm and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on Convolutional Neural Networks (CNN) to detect the tilt and position while grasping deformable objects. The CNN generates a mask based on recognized tilt and position data to render further multi-contact tactile stimuli provided to the user during the telemanipulation. The study has shown that with the CNN algorithm and the preset masks, tilt and position recognition by users increased from 9.67% using the direct data to 82.5%.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Costa Rica > Heredia Province > Heredia (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.48)
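The DeepXPalm abstract above describes mapping a recognized tilt/position class to a preset contact mask for the palm display. A minimal sketch of that rendering step is below; the three class labels, the 2x3 contact-point grid, and the particular masks are illustrative assumptions, not the paper's actual layout:

```python
# Hypothetical sketch: turn a CNN's class probabilities into a preset
# contact mask for a multi-contact palm display. Labels, grid size (2x3),
# and mask shapes are assumptions for illustration only.

TILT_CLASSES = ["left", "center", "right"]

# One preset mask per recognized tilt; 1 = contact point active.
PRESET_MASKS = {
    "left":   [[1, 0, 0],
               [1, 0, 0]],
    "center": [[0, 1, 0],
               [0, 1, 0]],
    "right":  [[0, 0, 1],
               [0, 0, 1]],
}

def render_mask(class_probs):
    """Pick the most likely tilt class and return its preset contact mask."""
    idx = max(range(len(class_probs)), key=lambda i: class_probs[i])
    label = TILT_CLASSES[idx]
    return label, PRESET_MASKS[label]

label, mask = render_mask([0.1, 0.2, 0.7])
# → ('right', [[0, 0, 1], [0, 0, 1]])
```

Rendering a discrete preset mask instead of raw sensor values is what removes the ambiguity the study measures: the user sees one of a few clean patterns rather than noisy per-taxel data.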
RoboCulture: A Robotics Platform for Automated Biological Experimentation
Angers, Kevin, Darvish, Kourosh, Yoshikawa, Naruki, Okhovatian, Sargol, Bannerman, Dawn, Yakavets, Ilya, Shkurti, Florian, Aspuru-Guzik, Alán, Radisic, Milica
Automating biological experimentation remains challenging due to the need for millimeter-scale precision, long and multi-step experiments, and the dynamic nature of living systems. Current liquid handlers only partially automate workflows, requiring human intervention for plate loading, tip replacement, and calibration. Industrial solutions offer more automation but are costly and lack the flexibility needed in research settings. Meanwhile, research in autonomous robotics has yet to bridge the gap for long-duration, failure-sensitive biological experiments. We introduce RoboCulture, a cost-effective and flexible platform that uses a general-purpose robotic manipulator to automate key biological tasks. RoboCulture performs liquid handling, interacts with lab equipment, and leverages computer vision for real-time decisions using optical density-based growth monitoring. We demonstrate a fully autonomous 15-hour yeast culture experiment where RoboCulture uses vision and force feedback and a modular behavior tree framework to robustly execute, monitor, and manage experiments. Video demonstrations of RoboCulture can be found at https://ac-rad.github.io/roboculture.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.04)
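RoboCulture's vision loop makes real-time decisions from optical-density-based growth monitoring. A minimal sketch of such a decision rule is below, assuming a Beer-Lambert-style OD estimate from transmitted light intensity; the threshold value and the camera-intensity proxy are assumptions, not the platform's actual calibration:

```python
import math

def optical_density(intensity, blank_intensity):
    """Beer-Lambert-style OD estimate: OD = -log10(I / I0),
    where I0 is the intensity through a blank (cell-free) sample."""
    return -math.log10(intensity / blank_intensity)

def should_advance(intensities, blank_intensity, threshold=0.6):
    """Decide whether the culture has grown enough to trigger the next
    step of the experiment (e.g., a behavior-tree transition).
    Averages several pixel/readout intensities to reduce noise."""
    mean_intensity = sum(intensities) / len(intensities)
    return optical_density(mean_intensity, blank_intensity) >= threshold
```

For example, readings averaging 22 against a blank of 100 give OD ≈ 0.66, which crosses a 0.6 threshold; near-blank readings do not. In a long-running experiment like the 15-hour yeast culture, this check would be polled periodically by the behavior tree rather than evaluated once.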
Coarse-to-Fine Learning for Multi-Pipette Localisation in Robot-Assisted In Vivo Patch-Clamp
Wei, Lan, Gonzalez, Gema Vera, Kgwarae, Phatsimo, Timms, Alexander, Zahorovsky, Denis, Schultz, Simon, Zhang, Dandan
In vivo image-guided multi-pipette patch-clamp is essential for studying cellular interactions and network dynamics in neuroscience. However, current procedures mainly rely on manual expertise, which limits accessibility and scalability. Robotic automation presents a promising solution, but achieving precise real-time detection of multiple pipettes remains a challenge. Existing methods focus on ex vivo experiments or single pipette use, making them inadequate for in vivo multi-pipette scenarios. To address these challenges, we propose a heatmap-augmented coarse-to-fine learning technique to facilitate multi-pipette real-time localisation for robot-assisted in vivo patch-clamp. More specifically, we introduce a Generative Adversarial Network (GAN)-based module to remove background noise and enhance pipette visibility. We then introduce a two-stage Transformer model that starts with predicting the coarse heatmap of the pipette tips, followed by a fine-grained coordinate regression module for precise tip localisation. To ensure robust training, we use the Hungarian algorithm for optimal matching between the predicted and actual locations of tips. Experimental results demonstrate that our method achieved > 98% accuracy within 10 µm, and > 89% accuracy within 5 µm for the localisation of multi-pipette tips. The average MSE is 2.52 µm.
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Italy (0.04)
- Asia > China (0.04)
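The patch-clamp abstract above matches predicted tip locations to ground-truth tips with the Hungarian algorithm so that the training loss does not depend on prediction order. A minimal sketch of that assignment objective is below; since only a handful of pipettes are involved, brute-force enumeration over permutations is used here instead of the actual Hungarian implementation, and the coordinates are illustrative:

```python
from itertools import permutations
import math

def match_tips(pred, true):
    """One-to-one assignment of predicted tips to ground-truth tips that
    minimizes total Euclidean distance (the Hungarian objective).
    Brute force over permutations: fine for a few pipettes; a real
    pipeline would use an O(n^3) Hungarian solver."""
    best_perm, best_cost = None, math.inf
    for perm in permutations(range(len(true))):
        cost = sum(math.dist(pred[i], true[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    # best_perm[i] is the index of the true tip matched to prediction i.
    return list(best_perm), best_cost
```

With an order-invariant matching like this, swapping two predicted tips costs nothing, so the regression loss penalizes only genuine localisation error.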
LucidGrasp: Robotic Framework for Autonomous Manipulation of Laboratory Equipment with Different Degrees of Transparency via 6D Pose Estimation
Makarova, Maria, Trinitatova, Daria, Liu, Qian, Tsetserukou, Dzmitry
Many modern robotic systems operate autonomously; however, they often lack the ability to accurately analyze the environment and adapt to changing external conditions, while teleoperation systems often require special operator skills. In the field of laboratory automation, the number of automated processes is growing; however, such systems are usually developed to perform specific tasks. In addition, many of the objects used in this field are transparent, making it difficult to analyze them using visual channels. The contributions of this work include the development of a robotic framework with autonomous mode for manipulating liquid-filled objects with different degrees of transparency in complex pose combinations. The conducted experiments demonstrated the robustness of the designed visual perception system in accurately estimating object poses for autonomous manipulation, and confirmed the performance of the algorithms in dexterous operations such as liquid dispensing. The proposed framework is suitable for laboratory automation, as it solves non-trivial manipulation tasks that demand high accuracy and repeatability while analyzing the poses and liquid levels of objects with varying degrees of transparency.
- Europe > Spain > Galicia > Madrid (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Asia > Russia (0.04)
- (2 more...)
TiltXter: CNN-based Electro-tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes
Cabrera, Miguel Altamirano, Tirado, Jonathan, Fedoseev, Aleksey, Sautenkov, Oleg, Poliakov, Vladimir, Kopanev, Pavel, Tsetserukou, Dzmitry
The shape of deformable objects can change drastically during grasping by robotic grippers, causing an ambiguous perception of their alignment and hence resulting in errors in robot positioning and telemanipulation. Rendering clear tactile patterns is fundamental to increasing users' precision and dexterity through tactile haptic feedback during telemanipulation. Therefore, different methods have to be studied to decode the sensors' data into haptic stimuli. This work presents a telemanipulation system for plastic pipettes that consists of a Force Dimension Omega.7 haptic interface endowed with two electro-stimulation arrays and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on convolutional neural networks (CNN) to detect the tilt of deformable objects. The CNN generates a tactile pattern based on recognized tilt data to render further electro-tactile stimuli provided to the user during the telemanipulation. The study has shown that using the CNN algorithm, tilt recognition by users increased from 23.13% with the downsized data to 57.9%, and the success rate during teleoperation increased from 53.12% using the downsized data to 92.18% using the tactile patterns generated by the CNN.
- North America > United States > Rhode Island > Newport County > Newport (0.04)
- North America > Costa Rica > Heredia Province > Heredia (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- (2 more...)
- Research Report > New Finding (0.96)
- Research Report > Experimental Study (0.71)
- Government > Regional Government (0.47)
- Health & Medicine (0.46)
Pipette: Automatic Fine-grained Large Language Model Training Configurator for Real-World Clusters
Yim, Jinkyu, Song, Jaeyong, Choi, Yerim, Lee, Jaebeen, Jung, Jaewon, Jang, Hongsun, Lee, Jinho
Training large language models (LLMs) is known to be challenging because of the huge computational and memory capacity requirements. To address these issues, it is common to use a cluster of GPUs with 3D parallelism, which splits a model along the data batch, pipeline stage, and intra-layer tensor dimensions. However, the use of 3D parallelism produces the additional challenge of finding the optimal number of ways on each dimension and mapping the split models onto the GPUs. Several previous studies have attempted to automatically find the optimal configuration, but many of these lacked several important aspects. For instance, the heterogeneous nature of the interconnect speeds is often ignored. While the peak bandwidths for the interconnects are usually made equal, the actual attained bandwidth varies per link in real-world clusters. Combined with the critical path modeling that does not properly consider the communication, they easily fall into sub-optimal configurations. In addition, they often fail to consider the memory requirement per GPU, often recommending solutions that could not be executed. To address these challenges, we propose Pipette, which is an automatic fine-grained LLM training configurator for real-world clusters. By devising better performance models along with the memory estimator and fine-grained individual GPU assignment, Pipette achieves faster configurations that satisfy the memory constraints. We evaluated Pipette on large clusters to show that it provides a significant speedup over the prior art. The implementation of Pipette is available at https://github.com/yimjinkyu1/date2024_pipette.
- North America > United States > California > Alameda County > Livermore (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
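The Pipette abstract above searches for a 3D-parallelism configuration (data, pipeline, tensor degrees) that fits each GPU's memory. A minimal sketch of that search space is below; the naive memory estimate (weights split evenly across pipeline stages and tensor ways) is an assumption for illustration, whereas Pipette's actual performance and memory models account for activations, optimizer state, and heterogeneous link bandwidths:

```python
def valid_configs(num_gpus, mem_per_gpu_gb, model_mem_gb):
    """Enumerate (dp, pp, tp) degrees with dp * pp * tp == num_gpus whose
    naive per-GPU memory estimate fits. A coarse sketch of the search
    space a configurator like Pipette prunes; real estimators are richer."""
    configs = []
    for dp in range(1, num_gpus + 1):
        if num_gpus % dp:
            continue
        remaining = num_gpus // dp
        for pp in range(1, remaining + 1):
            if remaining % pp:
                continue
            tp = remaining // pp
            # Naive estimate: model weights are sharded across pipeline
            # stages and tensor-parallel ways (data parallelism replicates).
            est_mem_gb = model_mem_gb / (pp * tp)
            if est_mem_gb <= mem_per_gpu_gb:
                configs.append((dp, pp, tp))
    return configs

# E.g. 8 GPUs with 40 GB each and a 160 GB model: any config with
# pp * tp >= 4 fits under this naive estimate.
print(valid_configs(8, 40, 160))
```

Even this toy version shows why memory-aware filtering matters: several factorizations of the same GPU count (here, anything replicating too much of the model per GPU) are simply not executable, which is exactly the failure mode the abstract attributes to prior configurators.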
Exploring the Role of Electro-Tactile and Kinesthetic Feedback in Telemanipulation Task
Trinitatova, Daria, Cabrera, Miguel Altamirano, Ponomareva, Polina, Fedoseev, Aleksey, Tsetserukou, Dzmitry
Teleoperation of robotic systems for precise and delicate object grasping requires high-fidelity haptic feedback to obtain comprehensive real-time information about the grasp. In such cases, the most common approach is to use kinesthetic feedback. However, a single contact point information is insufficient to detect the dynamically changing shape of soft objects. This paper proposes a novel telemanipulation system that provides kinesthetic and cutaneous stimuli to the user's hand to achieve accurate liquid dispensing by dexterously manipulating the deformable object (i.e., pipette). The experimental results revealed that the proposed approach to provide the user with multimodal haptic feedback considerably improves the quality of dosing with a remote pipette. Compared with pure visual feedback, the relative dosing error decreased by 66% and task execution time decreased by 18% when users manipulated the deformable pipette with a multimodal haptic interface in combination with visual feedback. The proposed technology can be potentially implemented in delicate dosing procedures during the antibody tests for COVID-19, chemical experiments, operation with organic materials, and telesurgery.
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Asia > Russia (0.04)
In The Lab Of The Future, Robots Run Experiments While Scientists Sleep
Ben Miles may be one of the last scientists to handle a pipette. While completing a chemical biology Ph.D. in London two years ago, he read about a California-based company called Transcriptic that had a robotic cloud laboratory. So he signed up and wrote code to run synthetic biology experiments remotely from a coffee shop. He even traveled -- including a trip to Vienna -- while monitoring experiments on a laptop. "I would wake up the next morning and had my results," he says.
- North America > United States > California (0.27)
- Europe > Austria > Vienna (0.27)
Robot reveals the inner workings of brain cells: Automated way to record electrical activity inside neurons in the living brain
But that could soon change: Researchers at MIT and the Georgia Institute of Technology have developed a way to automate the process of finding and recording information from neurons in the living brain. The researchers have shown that a robotic arm guided by a cell-detecting computer algorithm can identify and record from neurons in the living mouse brain with better accuracy and speed than a human experimenter. The new automated process eliminates the need for months of training and provides long-sought information about living cells' activities. Using this technique, scientists could classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how diseased cells differ from normal cells. The project is a collaboration between the labs of Ed Boyden, associate professor of biological engineering and brain and cognitive sciences at MIT, and Craig Forest, an assistant professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech.
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
Robots record brain activity inside neurons
Clamping an electrode to the brain cell of a living animal to record its electrical chatter is a task that demands finesse and patience. Known as 'whole-cell patch-clamping', it is reputedly the "finest art in neuroscience", says neurobiologist Edward Boyden, and one that only a few dozen laboratories around the world specialize in. But researchers are trying to demystify this art by turning it into a streamlined, automated technique that any laboratory could attempt, using robotics and downloadable source code. "Patch-clamping provides a unique view into neural circuits, and it's a very exciting technique but is really underused," says neuroscientist Karel Svoboda at the Howard Hughes Medical Institute's Janelia Research Campus in Ashburn, Virginia. "That's why automation is a really, really exciting direction."
- North America > United States > Virginia > Loudoun County > Ashburn (0.25)
- North America > United States > Washington > King County > Seattle (0.05)
- North America > United States > Texas > Travis County > Austin (0.05)
- (2 more...)