Control Modes of Teleoperated Surgical Robotic System's Tools in Ophthalmic Surgery
Wang, Haoran, Foroutani, Yasamin, Nepo, Matthew, Rodriguez, Mercedes, Ma, Ji, Hubschman, Jean-Pierre, Tsao, Tsu-Chin, Rosen, Jacob
Abstract--The introduction of a teleoperated surgical robotic system designed for minimally invasive procedures enables the emulation of two distinct control modes through a dedicated input device of the surgical console: (1) Inside Control Mode, which emulates tool manipulation near the distal end (i.e., as if the surgeon were holding the tip of the instrument inside the patient's body), and (2) Outside Control Mode, which emulates manipulation near the proximal end (i.e., as if the surgeon were holding the tool externally). The overarching aim of the reported research is to study and compare surgeon performance under these two control modes and various scaling factors in a simulated vitreoretinal surgical setting. The console of the Intraocular Robotic Interventional Surgical System (IRISS) was used, while the surgical robot itself and the human eye anatomy were simulated by a virtual-reality (VR) environment that projected a microscope view of an intraocular setup to a VR headset. Five experienced vitreoretinal surgeons and five subjects with no surgical experience used the system to perform fundamental tool/tissue tasks common to vitreoretinal surgery: (1) touch and reset; (2) grasp and drop; (3) inject; and (4) circular tracking. The results indicate that Inside Control outperforms Outside Control across multiple tasks and performance metrics. Higher scaling factors (20 and 30) generally provided better performance, particularly in reducing trajectory errors and tissue damage. This improvement suggests that larger scaling factors enable more precise control, making them the preferred option for fine manipulation tasks. However, task completion time was not consistently reduced across all conditions, indicating that surgeons may need to balance speed against accuracy and precision depending on the specific surgical requirements.
By optimizing control dynamics and user interface, robotic teleoperation has the potential to reduce complications, enhance surgical dexterity, and expand the accessibility of high-precision procedures to a broader range of practitioners. In Minimally Invasive Surgery (MIS), surgical instruments are introduced into the body through small ports established at the skin surface or, in the case of ophthalmic procedures, through specific ocular tissues such as the sclera, cornea, or conjunctiva. Unlike open surgery, where the surgeon may manipulate the tool from any position along its shaft--including proximally or distally--MIS confines the surgeon's interaction to the proximal end of the tool, which remains external to the patient's body, while the distal end performs the intervention through the fixed port.
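The scaling factors discussed above (e.g., 20 and 30) determine how console hand motion maps to tool-tip motion. A minimal 1-D sketch of such incremental position scaling, where each hand displacement is divided by the factor before being applied to the tool (function and parameter names are hypothetical, not from the IRISS implementation):

```python
import numpy as np

def scaled_tool_path(hand_path_mm, scale, tool_start_mm=0.0):
    """Incremental motion scaling for teleoperation.

    Each hand displacement at the console is divided by `scale`
    before being applied to the tool tip, so a scale of 20 turns a
    20 mm hand movement into a 1 mm tool movement, attenuating
    tremor and improving precision at the cost of range.
    """
    hand = np.asarray(hand_path_mm, dtype=float)
    deltas = np.diff(hand) / scale                   # attenuated increments
    return tool_start_mm + np.concatenate([[0.0], np.cumsum(deltas)])
```

With a scale of 20, a 40 mm hand excursion yields a 2 mm tool excursion; the higher the factor, the finer the achievable tool motion, which is consistent with the reduced trajectory errors reported for factors 20 and 30.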
MOGRAS: Human Motion with Grasping in 3D Scenes
Bhosikar, Kunal, Katageri, Siddharth, Madhavaram, Vivek, Han, Kai, Sharma, Charu
Generating realistic full-body motion interacting with objects is critical for applications in robotics, virtual reality, and human-computer interaction. While existing methods can generate full-body motion within 3D scenes, they often lack the fidelity for fine-grained tasks like object grasping. Conversely, methods that generate precise grasping motions typically ignore the surrounding 3D scene. Closing this gap, i.e., generating full-body grasping motions that are physically plausible within a 3D scene, remains a significant challenge. To address this, we introduce MOGRAS (Human MOtion with GRAsping in 3D Scenes), a large-scale dataset that bridges this gap. MOGRAS provides pre-grasping full-body walking motions and final grasping poses within richly annotated 3D indoor scenes. We leverage MOGRAS to benchmark existing full-body grasping methods and demonstrate their limitations in scene-aware generation. Furthermore, we propose a simple yet effective method to adapt existing approaches to work seamlessly within 3D scenes. Through extensive quantitative and qualitative experiments, we validate the effectiveness of our dataset and highlight the significant improvements our proposed method achieves, paving the way for more realistic human-scene interactions.
Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking
Araujo, Joao Pedro, Ze, Yanjie, Xu, Pei, Wu, Jiajun, Liu, C. Karen
Humanoid motion tracking policies are central to building teleoperation pipelines and hierarchical controllers, yet they face a fundamental challenge: the embodiment gap between humans and humanoid robots. Current approaches address this gap by retargeting human motion data to humanoid embodiments and then training reinforcement learning (RL) policies to imitate these reference trajectories. However, artifacts introduced during retargeting, such as foot sliding, self-penetration, and physically infeasible motion, are often left in the reference trajectories for the RL policy to correct. While prior work has demonstrated motion tracking abilities, these approaches often require extensive reward engineering and domain randomization to succeed. In this paper, we systematically evaluate how retargeting quality affects policy performance when excessive reward tuning is suppressed. To address issues that we identify with existing retargeting methods, we propose a new retargeting method, General Motion Retargeting (GMR). We evaluate GMR alongside two open-source retargeters, PHC and ProtoMotions, as well as a high-quality closed-source dataset from Unitree. Using BeyondMimic for policy training, we isolate retargeting effects without reward tuning. Our experiments on a diverse subset of the LAFAN1 dataset reveal that while most motions can be tracked, artifacts in retargeted data significantly reduce policy robustness, particularly for dynamic or long sequences. GMR consistently outperforms existing open-source methods in both tracking performance and faithfulness to the source motion, achieving perceptual fidelity and policy success rates close to the closed-source baseline. Website: https://jaraujo98.github.io/retargeting_matters. Code: https://github.com/YanjieZe/GMR.
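To make the retargeting artifacts concrete: even the simplest retargeting step, uniformly rescaling human keypoints to the robot's stature, can push points through the floor, producing exactly the penetration artifacts described above. A naive sketch of this step with a floor-penetration fix (this is an illustration of the problem, not GMR's actual algorithm; names are hypothetical):

```python
import numpy as np

def retarget_keypoints(human_xyz, human_height, robot_height, floor_z=0.0):
    """Naively retarget human keypoints to a shorter humanoid.

    Uniform scaling by the stature ratio preserves pose shape but can
    leave keypoints below the floor plane; the correction shifts the
    whole pose upward so no point penetrates the ground.
    """
    pts = np.asarray(human_xyz, dtype=float) * (robot_height / human_height)
    penetration = floor_z - pts[..., 2].min()
    if penetration > 0:          # lift the pose out of the floor
        pts[..., 2] += penetration
    return pts
```

Per-frame fixes like this shift trade one artifact (penetration) for another (vertical jitter across frames, i.e., foot sliding), which is why retargeting quality matters so much for downstream RL tracking.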
Dexonomy: Synthesizing All Dexterous Grasp Types in a Grasp Taxonomy
Chen, Jiayi, Ke, Yubin, Peng, Lin, Wang, He
Generalizable dexterous grasping with suitable grasp types is a fundamental skill for intelligent robots. Developing such skills requires a large-scale and high-quality dataset that covers numerous grasp types (i.e., at least those categorized by the GRASP taxonomy), but collecting such data is extremely challenging. Existing automatic grasp synthesis methods are often limited to specific grasp types or object categories, hindering scalability. This work proposes an efficient pipeline capable of synthesizing contact-rich, penetration-free, and physically plausible grasps for any grasp type, object, and articulated hand. Starting from a single human-annotated template for each hand and grasp type, our pipeline tackles the complicated synthesis problem in two stages: first optimize the object to fit the hand template, and then locally refine the hand to fit the object in simulation. To validate the synthesized grasps, we introduce a contact-aware control strategy that allows the hand to apply the appropriate force to the object at each contact point. These validated grasps can also be used as new grasp templates to facilitate future synthesis. Experiments show that our method significantly outperforms previous type-unaware grasp synthesis baselines in simulation. Using our algorithm, we construct a dataset containing 10.7k objects and 9.5M grasps, covering 31 grasp types in the GRASP taxonomy. Finally, we train a type-conditional generative model that successfully performs the desired grasp type from single-view object point clouds, achieving an 82.3% success rate in real-world experiments. Project page: https://pku-epic.github.io/Dexonomy.
Guiding Diffusion-Based Articulated Object Generation by Partial Point Cloud Alignment and Physical Plausibility Constraints
Kreber, Jens U., Stueckler, Joerg
Articulated objects are an important type of interactable objects in everyday environments. In this paper, we propose PhysNAP, a novel diffusion model-based approach for generating articulated objects that aligns them with partial point clouds and improves their physical plausibility. The model represents part shapes by signed distance functions (SDFs). We guide the reverse diffusion process using a point cloud alignment loss computed using the predicted SDFs. Additionally, we impose non-penetration and mobility constraints based on the part SDFs for guiding the model to generate more physically plausible objects. We also make our diffusion approach category-aware to further improve point cloud alignment if category information is available. We evaluate the generative ability and constraint consistency of samples generated with PhysNAP using the PartNet-Mobility dataset. We also compare it with an unguided baseline diffusion model and demonstrate that PhysNAP can improve constraint consistency and provides a tradeoff with generative ability.
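The two SDF-based guidance terms described above can be illustrated with a closed-form sphere SDF: alignment drives observed points onto a part's zero level set, while non-penetration penalizes points that fall inside a part (SDF < 0). A minimal numpy sketch of these losses (a toy illustration with an analytic SDF, not PhysNAP's learned part SDFs):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface."""
    return np.linalg.norm(points - center, axis=-1) - radius

def alignment_loss(points, center, radius):
    """Observed partial point cloud should lie on the part surface,
    i.e. |SDF(p)| should be near zero for every observed point p."""
    return np.abs(sphere_sdf(points, center, radius)).mean()

def penetration_penalty(points, center, radius):
    """Non-penetration: only points strictly inside the part (SDF < 0)
    contribute, via a hinge on the negative signed distance."""
    return np.maximum(-sphere_sdf(points, center, radius), 0.0).mean()
```

During reverse diffusion, gradients of such losses with respect to the predicted shape parameters would steer samples toward point-cloud consistency and physical plausibility.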
Impacts between multibody systems and deformable structures
The final target of the presented research is bio-inspired mobile robots, especially those able to reproduce the natural mobility of gibbons. The principal mode of their locomotion is called brachiation: swinging from branch to branch over distances of up to 15 m and at speeds of up to 50 km/h (Figure 1). We may refer the reader to several brachiation techniques and constructions presented in the technical literature [1-5]. Given several similarities, we may classify brachiation robots as a branch of walking robots (Fig. 1a). Research on brachiation dynamics is challenging, mainly because of its multitasking nature: the system's number of degrees of freedom varies during the motion (i.e., we need to model a nonlinear time-varying system), unilateral constraints are present (i.e., impact forces can appear) at selected stages of locomotion, and the investigated systems are kinematically or dynamically overactuated.
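As a first-order model, the swing phase of brachiation can be treated as a passive point-mass pendulum pivoting at the grasped branch, with the peak speed at the bottom of the swing following from energy conservation, v = sqrt(2 g L (1 - cos θ)). A minimal sketch of this estimate (a simplification for intuition only; the full system is the nonlinear, time-varying, impacting one described above):

```python
import math

def swing_speed(arm_length_m, start_angle_deg, g=9.81):
    """Peak speed at the bottom of a passive pendulum swing.

    Energy conservation for a point mass released at angle theta from
    the vertical: m g L (1 - cos theta) = (1/2) m v^2, so
    v = sqrt(2 g L (1 - cos theta)).
    """
    theta = math.radians(start_angle_deg)
    return math.sqrt(2.0 * g * arm_length_m * (1.0 - math.cos(theta)))
```

Even a generous 1 m arm released from the horizontal yields only about 4.4 m/s (~16 km/h), so the reported gibbon speeds of up to 50 km/h cannot come from passive swinging alone, which motivates the actuated, multi-degree-of-freedom models the passage describes.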