Doubly Hierarchical Geometric Representations for Strand-based Human Hairstyle Generation

Neural Information Processing Systems

We introduce a doubly hierarchical generative representation for strand-based 3D hairstyle geometry that progresses from coarse, low-pass filtered guide hair to densely populated hair strands rich in high-frequency detail. We employ the Discrete Cosine Transform (DCT) to separate low-frequency structural curves from high-frequency curliness and noise, avoiding the Gibbs oscillations that the standard Fourier transform introduces on open curves. Whereas existing methods sample guide hair from scalp UV-map grids and may therefore miss details of the hairstyle, our method selects optimal sparse guide strands as the $k$-medoids cluster centres of the low-pass filtered dense strands, which more faithfully retain the hairstyle's inherent characteristics. The proposed variational-autoencoder-based generation network, with an architecture inspired by geometric deep learning and implicit neural representations, supports flexible, off-the-grid guide-strand modelling and enables the completion of dense strands at any quantity and density. Empirical evaluations confirm the capacity of the model to generate convincing guide hair and dense strands, complete with nuanced high-frequency details.
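The two ingredients the abstract describes can be sketched compactly: a DCT low-pass filter over an open 3D curve (the DCT's even symmetric extension avoids the Gibbs oscillations the FFT would produce at the curve's endpoints), and $k$-medoids clustering over the filtered strands to pick guide strands. This is a minimal illustration, not the paper's implementation; it assumes all strands are resampled to the same point count, and the names `lowpass_strand`, `kmedoids_guides`, and the `keep` cutoff are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def lowpass_strand(strand, keep=8):
    """Low-pass filter an open 3D curve with a per-axis DCT (type II).

    Zeroing high-frequency DCT coefficients removes curliness/noise
    without the endpoint ringing an FFT-based filter would introduce.
    `keep` (illustrative) is the number of retained low-frequency modes.
    """
    coeffs = dct(strand, axis=0, norm="ortho")
    coeffs[keep:] = 0.0
    return idct(coeffs, axis=0, norm="ortho")

def kmedoids_guides(strands, k, iters=20, seed=0):
    """Select k guide strands as k-medoids centres of pairwise distances.

    Assumes distinct strands of shape (n, points, 3), so every medoid
    keeps a non-empty cluster (it is at distance 0 from itself).
    """
    n = len(strands)
    # Frobenius norm over (points, 3) gives a pairwise strand distance.
    D = np.linalg.norm(strands[:, None] - strands[None, :], axis=(2, 3))
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new = []
        for j in range(k):
            cluster = np.where(labels == j)[0]
            sub = D[np.ix_(cluster, cluster)]
            new.append(cluster[np.argmin(sub.sum(axis=1))])  # best in-cluster centre
        new = np.array(new)
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids

# A "curly" strand: smooth arc plus a high-frequency wiggle.
t = np.linspace(0.0, 1.0, 100)
strand = np.stack([t, t**2, 0.05 * np.sin(40 * np.pi * t)], axis=1)
guide = lowpass_strand(strand, keep=8)  # wiggle suppressed, arc retained
```

In this sketch the returned medoid indices identify actual dense strands to keep as guides, mirroring the paper's point that cluster centres of low-pass filtered strands represent the hairstyle better than fixed UV-grid samples.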


How scientists analyze ancient DNA from old bones

Popular Science

Centuries-old genetic material can solve historical mysteries, from lost species to what killed Napoleon's army. A glowing, digital double helix represents the billions of base pairs scientists analyze when sequencing ancient DNA. In 1976, workers excavating a tunnel for the Toronto subway system came across some very old bones. Using radiocarbon dating, researchers determined the partial cranium and fragments of antlers were roughly 12,000 years old.


Supplementary Material for CableInspect-AD: An Expert-Annotated Anomaly Detection Dataset

Neural Information Processing Systems

For more information, please refer to the Distribution and Maintenance subsections of the datasheet provided in J. The annotations are in the COCO format. We provide detailed explanations on how the dataset can be read in the code repository.
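Since the annotations ship in the standard COCO format, a minimal loader needs only the `json` module. The sketch below is a generic COCO reader, not the repository's own loading code; the file path and any category names are placeholders.

```python
import json

def load_coco(path):
    """Load a COCO-format annotation file and index annotations by image.

    Returns the image records and a dict mapping image_id to a list of
    {category, bbox} entries, resolving category ids to names.
    """
    with open(path) as f:
        coco = json.load(f)
    cats = {c["id"]: c["name"] for c in coco.get("categories", [])}
    by_image = {}
    for ann in coco.get("annotations", []):
        by_image.setdefault(ann["image_id"], []).append(
            {"category": cats.get(ann["category_id"]), "bbox": ann.get("bbox")}
        )
    return coco.get("images", []), by_image
```

Any COCO-aware tooling (e.g. `pycocotools.coco.COCO`) reads the same file directly; the sketch just shows the three top-level arrays (`images`, `annotations`, `categories`) the format guarantees.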


DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation

Zhao, Chengyang, Yoo, Uksang, Chaudhury, Arkadeep Narayan, Nam, Giljoo, Francis, Jonathan, Ichnowski, Jeffrey, Oh, Jean

arXiv.org Artificial Intelligence

Abstract-- Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. We introduce a novel dynamics learning paradigm that is suited for volumetric quantities such as hair, relying on an action-conditioned latent state editing mechanism, coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines in capturing local deformation for diverse, unseen hairstyles. DYMO-Hair further outperforms baselines in closed-loop hair styling tasks on unseen hairstyles, with an average of 22% lower final geometric error and 42% higher success rate than the state-of-the-art system. Real-world experiments exhibit zero-shot transferability of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results introduce a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. Hair is central to personal identity and self-esteem [1], [2], yet routine care is difficult for individuals with limited mobility due to reduced coordination, strength, and flexibility [3]. To improve accessibility and autonomy, robot hair care systems have been explored [4]-[7], but existing approaches rely on either handcrafted trajectories or rule-based controllers, restricting generalization across diverse hairstyles and goals. To address these limitations, we propose DYMO-Hair, a model-based robot hair care system.
Our system is capable of generalizable and flexible visual goal-conditioned hair manipulation, across diverse hairstyles and objectives in unconstrained physical environments. Chengyang Zhao, Uksang Yoo, Jonathan Francis (by courtesy), Jeffrey Ichnowski, and Jean Oh are with the Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA. Arkadeep Narayan Chaudhury is with Epic Games, Inc., Pittsburgh, Pennsylvania, USA. Giljoo Nam is with Meta Codec Avatars Lab, Pittsburgh, Pennsylvania, USA. Jonathan Francis is with the Bosch Center for Artificial Intelligence, Pittsburgh, Pennsylvania, USA. Figure 1. We introduce DYMO-Hair, a unified, model-based robot hair care system.