Robotic Ultrasound-Guided Femoral Artery Reconstruction of Anatomically-Representative Phantoms
Al-Zogbi, Lidia, Raina, Deepak, Pandian, Vinciya, Fleiter, Thorsten, Krieger, Axel
Femoral artery access is essential for numerous clinical procedures, including diagnostic angiography, therapeutic catheterization, and emergency interventions. Despite its critical role, successful vascular access remains challenging due to anatomical variability, overlying adipose tissue, and the need for precise ultrasound (US) guidance. Errors in needle placement can lead to severe complications, restricting the procedure to highly skilled clinicians in controlled hospital settings. While robotic systems have shown promise in addressing these challenges through autonomous scanning and vessel reconstruction, clinical translation remains limited due to reliance on simplified phantom models that fail to capture human anatomical complexity. In this work, we present a method for autonomous robotic US scanning of bifurcated femoral arteries, and validate it on five vascular phantoms created from real patient computed tomography (CT) data. Additionally, we introduce a video-based deep learning US segmentation network tailored for vascular imaging, enabling improved 3D arterial reconstruction. The proposed network achieves a Dice score of 89.21% and an Intersection over Union of 80.54% on a newly developed vascular dataset. The quality of the reconstructed artery centerline is evaluated against ground truth CT data, demonstrating an average L2 deviation of 0.91 ± 0.70 mm, with an average Hausdorff distance of 4.36 ± 1.11 mm. This study is the first to validate an autonomous robotic system for US scanning of the femoral artery on a diverse set of patient-specific phantoms, introducing a more advanced framework for evaluating robotic performance in vascular imaging and intervention.
- North America > United States > Maryland > Baltimore (0.14)
- North America > United States > Pennsylvania (0.14)
- Europe > France (0.14)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
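The Dice score and Intersection over Union reported in the abstract are standard overlap metrics for binary segmentation masks. A minimal sketch of how they are typically computed (not the authors' code) is:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice score and Intersection over Union for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy example: two 16-pixel square masks overlapping in a 3x3 region
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
gt = np.zeros((10, 10)); gt[3:7, 3:7] = 1
dice, iou = dice_and_iou(pred, gt)   # Dice = 18/32, IoU = 9/23
```

Dice weights the intersection twice relative to mask sizes, so it is always at least as large as IoU; a Dice of 89.21% alongside an IoU of 80.54%, as reported above, is consistent with this relationship.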
Phantom: Subject-consistent video generation via cross-modal alignment
Liu, Lijie, Ma, Tianxiang, Li, Bingchuan, Chen, Zhuowei, Liu, Jiawei, He, Qian, Wu, Xinglong
The continuous development of foundational models for video generation is evolving into various applications, with subject-consistent video generation still in the exploratory stage. We refer to this as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent video through textual instructions. We believe that the essence of subject-to-video lies in balancing the dual-modal prompts of text and image, thereby deeply and simultaneously aligning both text and visual content. To this end, we propose Phantom, a unified video generation framework for both single and multi-subject references. Building on existing text-to-video and image-to-video architectures, we redesign the joint text-image injection model and drive it to learn cross-modal alignment via text-image-video triplet data. In particular, we emphasize subject consistency in human generation, covering existing ID-preserving video generation while offering enhanced advantages. The project homepage is available at https://phantom-video.github.io/Phantom/.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.70)
Multi-Stage Fusion Architecture for Small-Drone Localization and Identification Using Passive RF and EO Imagery: A Case Study
Wewelwala, Thakshila Wimalajeewa, Tedesso, Thomas W., Davis, Tony
Reliable detection, localization, and identification of small drones are essential to promote the safe, secure, and privacy-respecting operation of Unmanned Aerial Systems (UAS), or simply, drones. This is an increasingly challenging problem with single-modality sensing alone, especially for detecting and identifying small drones. In this work, a multi-stage fusion architecture using passive radio frequency (RF) and electro-optic (EO) imagery data is developed to leverage the synergies of the modalities to improve the overall tracking and classification capabilities. For detection with EO imagery, supervised deep-learning-based techniques as well as unsupervised foreground/background separation techniques are explored to cope with challenging environments. Using real collected data for Group 1 and 2 drones, the capability of each algorithm is quantified. In order to compensate for any performance gaps in detection with EO imagery alone, as well as to provide a unique device identifier for the drones, passive RF is integrated with EO imagery whenever available. In particular, drone detections in the image plane are combined with passive RF location estimates via detection-to-detection association after 3D-to-2D transformation. Final tracking is performed on the composite detections in the 2D image plane. Each track centroid is given a unique identification obtained via RF fingerprinting. The proposed fusion architecture is tested, and tracking performance is quantified over range to illustrate the effectiveness of the proposed approaches, using passive RF and EO data collected simultaneously at the Air Force Research Laboratory (AFRL) during the ESCAPE-21 (Experiments, Scenarios, Concept of Operations, and Prototype Engineering) data collect.
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > New York > Onondaga County > Syracuse (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- (8 more...)
- Information Technology (1.00)
- Aerospace & Defense (1.00)
- Transportation (0.87)
- (2 more...)
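The fusion step described above — projecting passive RF location estimates from 3D into the 2D image plane and then associating them with EO detections — can be sketched generically as a pinhole projection followed by gated nearest-neighbour matching. The camera parameters and gating threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def project_to_image(points_3d, K, R, t):
    """Pinhole projection of 3D world points into the 2D image plane.
    K: 3x3 camera intrinsics, R/t: world-to-camera rotation and translation."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                  # perspective divide

def associate(eo_dets, rf_proj, gate=50.0):
    """Greedy gated nearest-neighbour detection-to-detection association.
    Returns (EO index, RF index) pairs within `gate` pixels of each other."""
    pairs, used = [], set()
    for i, d in enumerate(eo_dets):
        dists = np.linalg.norm(rf_proj - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Toy example: one RF 3D estimate projected and matched to one EO detection
K = np.array([[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 1.0]])
rf_proj = project_to_image(np.array([[1.0, 2.0, 5.0]]), K, np.eye(3), np.zeros(3))
eo_dets = np.array([[21.0, 39.0]])
pairs = associate(eo_dets, rf_proj)   # composite detection for 2D tracking
```

In the paper's pipeline, the composite detections produced by such an association step feed the 2D tracker, and each track centroid then receives its RF-fingerprint identity.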
A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension
Zhao, Zixiao, Du, Qinghe, Yao, Xiang, Lu, Lei, Zhang, Shijiao
As drones become increasingly prevalent in human life, they also raise security concerns such as unauthorized access and control, as well as collisions and interference with manned aircraft. Therefore, ensuring the ability to accurately detect and distinguish between different drones holds significant implications for coverage extension. Assisted by machine learning, radio frequency (RF) detection can recognize the type and flight mode of drones based on the sampled drone signals. In this paper, we first utilize the Short-Time Fourier Transform (STFT) to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information. Then, we employ a Convolutional Neural Network (CNN) built with a ResNet structure to achieve multi-class classification. Our experimental results show that the proposed ResNet-STFT can achieve higher accuracy and faster convergence on the extended dataset. Additionally, it exhibits balanced performance compared to other baselines on the raw dataset.
- Asia > Mongolia (0.14)
- Asia > China > Shaanxi Province > Xi'an (0.04)
- North America > Canada > Quebec (0.04)
- (3 more...)
- Information Technology > Security & Privacy (1.00)
- Media (0.88)
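The STFT front end described in the abstract turns a 1D RF sample stream into a 2D time-frequency map that a CNN can consume like an image. A minimal numpy-only sketch of that feature extraction (the frame length and hop size are illustrative, not the paper's settings):

```python
import numpy as np

def stft_magnitude(x, n_fft=256, hop=128):
    """Short-Time Fourier Transform magnitude: slices the signal into
    overlapping Hann-windowed frames and takes the FFT of each, yielding
    a 2D (frequency x time) feature map suitable as CNN input."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)   # one spectrum per frame
    return np.abs(spec).T                # shape: (n_fft//2 + 1, n_frames)

# Toy RF sample: a pure 2 kHz tone sampled at 16 kHz for one second
fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 2000 * t)
feat = stft_magnitude(x)
# After log-scaling/normalisation, `feat` would be fed to the ResNet classifier
```

The tone shows up as a single bright row in the feature map (bin 32 here, since 2000 Hz / 62.5 Hz per bin = 32); real drone signals produce richer structure, which is what the ResNet learns to classify.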
Phantom -- A RL-driven multi-agent framework to model complex systems
Ardon, Leo, Vann, Jared, Garg, Deepeka, Spooner, Tom, Ganesh, Sumitra
Agent-based modelling (ABM) is a computational approach to modelling complex systems by specifying the behaviour of autonomous decision-making components or agents in the system and allowing the system dynamics to emerge from their interactions. Recent advances in the field of multi-agent reinforcement learning (MARL) have made it feasible to study the equilibrium of complex environments where multiple agents learn simultaneously. However, most ABM frameworks are not RL-native, in that they do not offer concepts and interfaces that are compatible with the use of MARL to learn agent behaviours. In this paper, we introduce a new open-source framework, Phantom, to bridge the gap between ABM and MARL. Phantom is an RL-driven framework for agent-based modelling of complex multi-agent systems including, but not limited to, economic systems and markets. The framework aims to provide the tools to simplify the ABM specification in a MARL-compatible way, including features to encode dynamic partial observability, agent utility functions, heterogeneity in agent preferences or types, and constraints on the order in which agents can act (e.g. Stackelberg games, or more complex turn-taking environments). In this paper, we present these features and their design rationale, and introduce two new environments leveraging the framework.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Switzerland (0.04)
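The features the abstract enumerates — partial observability, per-agent utility functions, and constraints on acting order — can be made concrete with a small sketch. The class and method names below are purely illustrative and are not Phantom's actual API; this is just what an RL-native ABM interface of the kind described might look like:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Agent:
    """Hypothetical RL-native agent: observation and utility are
    first-class concepts (names are illustrative, not Phantom's API)."""
    name: str
    wealth: float = 0.0

    def observe(self, market_price: float) -> List[float]:
        # Dynamic partial observability: each agent sees only its own slice
        return [market_price, self.wealth]

    def utility(self, reward: float) -> float:
        # Per-agent preferences/types would be encoded here
        return reward

class TurnTakingEnv:
    """Agents act in a fixed order, e.g. a Stackelberg leader before followers."""
    def __init__(self, agents):
        self.agents = agents
        self.price = 10.0

    def step(self, actions: Dict[str, float]) -> Dict[str, float]:
        rewards = {}
        for agent in self.agents:          # enforced acting order
            bid = actions[agent.name]
            reward = self.price - bid      # toy market payoff
            agent.wealth += reward
            rewards[agent.name] = agent.utility(reward)
        return rewards

# Toy episode step: leader acts first, then follower
leader, follower = Agent("leader"), Agent("follower")
env = TurnTakingEnv([leader, follower])
rewards = env.step({"leader": 4.0, "follower": 6.0})
```

The point of such an interface is that a MARL training loop can call `observe`, collect `actions`, and consume per-agent rewards directly, without the adapter layer a non-RL-native ABM framework would require.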
This fragrance uses AI technology to make you feel horny
The tech world – as evidenced by billionaires taking 10-minute holidays to space, and that tiny little car that delivered the football onto the pitch during the Euros – is more advanced than ever before. Even the beauty industry is becoming more technologically minded, with the announcement of the world's first ever "connected fragrance" from Paco Rabanne. Released today, Phantom, a new perfume with an appropriately robot-shaped body, is a world-first from the luxury brand, using artificial intelligence to create a state-of-the-art "Augmented Creativity" process. What that actually means is that the perfume works with the neuroscience of your scent receptors to change how you feel as well as how you smell. The team at Paco Rabanne developed a decidedly Black Mirror-sounding Science of Wellness programme for the release.
- Information Technology > Artificial Intelligence > Robots (0.40)
- Information Technology > Communications > Social Media (0.34)
Phantom by Paco Rabanne: Artificial Intelligence + Human Emotion = Augmented Creativity
Following the big launch of the latest pillar by Paco Rabanne, we received more interesting info about the creation, inspiration, and futuristic direction of the newest fragrance PHANTOM. Paco Rabanne's team developed PHANTOM with the perfumers, scientists, and technicians of the International Flavors and Fragrances (IFF) company, using the company's state-of-the-art Augmented Creativity process. With PHANTOM, every aspect of perfume creation has been reinvented by next-generation technologies developed for IFF. Thanks to neurosciences, algorithmic tools, and artificial intelligence, our perfumers have been able to push back their creative boundaries. How do you use neurosciences in perfumery?
- Health & Medicine > Therapeutic Area > Neurology (0.58)
- Materials > Chemicals (0.39)
Paco Rabanne's latest fragrance has NFC, for some reason
What does the future smell like? That depends on who you ask. PUIG's perfumiers, who produce scents for Paco Rabanne, believe that the future smells sexy, confident and energetic. That's how they're choosing to market Phantom, the fashion house's latest fragrance-cum-piece of retro-futurist art. Phantom comes in a robot-shaped bottle that, when you tap your phone on the NFC tag embedded into its head, welcomes you into its own digital world.
Split-Second 'Phantom' Images Can Fool Tesla's Autopilot
Safety concerns over automated driver-assistance systems like Tesla's usually focus on what the car can't see, like the white side of a truck that one Tesla confused with a bright sky in 2016, leading to the death of a driver. But one group of researchers has been focused on what autonomous driving systems might see that a human driver doesn't--including "phantom" objects and signs that aren't really there, which could wreak havoc on the road. Researchers at Israel's Ben Gurion University of the Negev have spent the last two years experimenting with those "phantom" images to trick semi-autonomous driving systems. They previously revealed that they could use split-second light projections on roads to successfully trick Tesla's driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they've found they can pull off the same trick with just a few frames of a road sign injected on a billboard's video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
- Transportation > Passenger (0.88)
- Transportation > Electric Vehicle (0.88)
Belkin SoundForm Elite Hi-Fi smart speaker review: The case of the missing midrange
My thoughts about the Belkin SoundForm Elite Hi-Fi Smart Speaker Wireless Charging can be distilled in a single word: boring. Listening to a $300 speaker should be exciting. Belkin doesn't have a track record of building great audio equipment, but its partner on this project--the French audiophile company Devialet--most certainly does. The Devialet Phantom blew my mind when I reviewed it five years ago. So, I had high hopes when I learned Belkin had enlisted that company's expertise to develop something more mainstream.