imprint
AI-Driven Development of a Publishing Imprint: Xynapse Traces
Xynapse Traces is an experimental publishing imprint created through a fusion of human and algorithmic methods, built on a configuration-driven architecture and a multi-model AI integration framework. The system reduced time-to-market by 90% (from a typical 6-12 months to 2-4 weeks) and cut costs by 80% compared with traditional imprint development, while publishing 52 books in its first year and maintaining strong quality metrics, including 99% citation accuracy and 100% validation success after initial corrections. Key technical innovations include a continuous ideation pipeline with tournament-style evaluation, a novel codex design for transcriptive meditation practice, comprehensive automation spanning ideation through production and distribution, and publisher personas that define and guide the imprint's mission. The system also pairs automated verification with human oversight, ensuring that gains in speed do not compromise publishing standards. This effort has significant implications for the future of book publishing, suggesting new paradigms for human-AI collaboration that democratize access to sophisticated publishing capabilities and open previously unviable niche markets.
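The "tournament-style evaluation" in the ideation pipeline can be pictured as a single-elimination bracket over candidate book concepts. A minimal sketch, assuming a pairwise `judge` callback and illustrative dictionary fields (none of which are the imprint's actual implementation):

```python
import random

def tournament_select(candidates, judge, seed=0):
    """Single-elimination tournament over candidate concepts.

    `judge(a, b)` returns the preferred candidate of the pair;
    round winners advance until one candidate remains.
    """
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)  # randomize the bracket
    while len(pool) > 1:
        next_round = []
        # Pair off candidates; an odd one out gets a bye.
        for i in range(0, len(pool) - 1, 2):
            next_round.append(judge(pool[i], pool[i + 1]))
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]

# Toy judge: prefer the concept with the higher score field.
ideas = [{"title": f"idea-{i}", "score": s} for i, s in enumerate([3, 9, 1, 7])]
best = tournament_select(ideas, judge=lambda a, b: a if a["score"] >= b["score"] else b)
```

In practice the judge would be an AI evaluation call rather than a score comparison, but the bracket logic is the same.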
- Asia > Middle East > UAE (0.14)
- Asia > South Korea > Jeollabuk-do > Jeonju (0.05)
- Asia > South Korea > Seoul > Seoul (0.04)
- (21 more...)
InterKey: Cross-modal Intersection Keypoints for Global Localization on OpenStreetMap
Tran, Nguyen Hoang Khoi, Berrio, Julie Stephany, Shan, Mao, Worrall, Stewart
Reliable global localization is critical for autonomous vehicles, especially in environments where GNSS is degraded or unavailable, such as urban canyons and tunnels. Although high-definition (HD) maps provide accurate priors, the cost of data collection, map construction, and maintenance limits scalability. OpenStreetMap (OSM) offers a free and globally available alternative, but its coarse abstraction poses challenges for matching with sensor data. We propose InterKey, a cross-modal framework that leverages road intersections as distinctive landmarks for global localization. Our method constructs compact binary descriptors by jointly encoding road and building imprints from point clouds and OSM. To bridge modality gaps, we introduce discrepancy mitigation, orientation determination, and area-equalized sampling strategies, enabling robust cross-modal matching. Experiments on the KITTI dataset demonstrate that InterKey achieves state-of-the-art accuracy, outperforming recent baselines by a large margin. The framework generalizes to sensors that can produce dense structural point clouds, offering a scalable and cost-effective solution for robust vehicle localization.
- Transportation > Ground > Road (0.89)
- Transportation > Infrastructure & Services (0.67)
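InterKey's compact binary descriptors lend themselves to Hamming-distance retrieval. A minimal sketch of that matching step, with toy 16-bit integers standing in for the real jointly encoded road/building descriptors (the keys and bit patterns are made up):

```python
def hamming(a: int, b: int) -> int:
    """Bit-level Hamming distance between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_intersection(query: int, osm_descriptors: dict):
    """Return the (intersection_id, distance) pair whose OSM descriptor
    is closest in Hamming distance to the query descriptor computed
    from the point cloud."""
    best_id, best_d = None, None
    for key, desc in osm_descriptors.items():
        d = hamming(query, desc)
        if best_d is None or d < best_d:
            best_id, best_d = key, d
    return best_id, best_d

# Toy 16-bit descriptors keyed by intersection id.
db = {
    "A": 0b1010101010101010,
    "B": 0b1111000011110000,
    "C": 0b1010101010101011,
}
match = match_intersection(0b1010101010101110, db)
```

The real system would search many more candidates and apply the paper's discrepancy-mitigation and orientation steps before scoring; only the nearest-neighbor lookup is shown here.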
Robust Speech-Workload Estimation for Intelligent Human-Robot Systems
Fortune, Julian, Adams, Julie A., Heard, Jamison
Demanding task environments (e.g., supervising a remotely piloted aircraft) require performing tasks quickly and accurately; however, periods of low and high operator workload can decrease task performance. Intelligently modulating the system's demands and interaction modality in response to changes in operator workload state may increase performance by avoiding undesirable workload states. Such a system requires real-time estimation of each workload component (i.e., cognitive, physical, visual, speech, and auditory) in order to adapt the correct modality. Existing workload systems estimate multiple workload components post-hoc, but few estimate speech workload or function in real time. An algorithm to estimate speech workload and mitigate undesirable workload states in real time is presented, along with an analysis of the algorithm's accuracy and results demonstrating its generalizability across individuals and human-machine teaming paradigms. Real-time speech-workload estimation is a crucial element in developing adaptive human-machine systems.
- North America > United States > Tennessee > Davidson County > Nashville (0.04)
- North America > United States > Hawaii (0.04)
- Europe (0.04)
- Asia > Vietnam > Long An Province > Tân An (0.04)
- Aerospace & Defense > Aircraft (0.66)
- Government > Regional Government > North America Government > United States Government (0.46)
- Health & Medicine > Consumer Health (0.46)
- Health & Medicine > Therapeutic Area (0.46)
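Real-time estimation of this kind typically works on short sliding windows of the audio signal. As a rough illustration only (the paper's actual acoustic features and estimation model are not given here), frame-wise RMS energy over a sliding window might be computed like this:

```python
import math

def sliding_rms(samples, window, hop):
    """Frame-level RMS energy over a sliding window -- a simple
    stand-in for the per-frame acoustic features a real-time
    speech-workload estimator would consume."""
    frames = []
    for start in range(0, len(samples) - window + 1, hop):
        seg = samples[start:start + window]
        frames.append(math.sqrt(sum(x * x for x in seg) / window))
    return frames

# Toy signal: silence followed by speech-like energy.
signal = [0.0] * 8 + [1.0] * 8
feats = sliding_rms(signal, window=4, hop=4)
```

A real estimator would feed such frame features into a trained model per window, which is what makes the "real-time" constraint (bounded work per frame) meaningful.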
From Pixels to Trajectory: Universal Adversarial Example Detection via Temporal Imprints
Gao, Yansong, Peng, Huaibing, Ma, Hua, Dai, Zhiyang, Wang, Shuo, Hu, Hongsheng, Fu, Anmin, Xue, Minhui
For the first time, we unveil discernible temporal (or historical) trajectory imprints resulting from adversarial example (AE) attacks. In contrast to existing studies, which focus on spatial (or static) imprints within the targeted victim models, we present a fresh temporal paradigm for understanding these attacks. A key discovery is that these imprints are encapsulated within a single loss metric, spanning universally across diverse tasks such as classification and regression, and modalities including image, text, and audio. Recognizing the distinct nature of loss between adversarial and clean examples, we exploit this temporal imprint for AE detection by proposing TRAIT (TRaceable Adversarial temporal trajectory ImprinTs). TRAIT operates under minimal assumptions, without prior knowledge of attacks, thereby framing detection as a one-class classification problem. Detection is still complicated, however, by significant overlap between the constructed synthetic losses of adversarial and clean examples, owing to the absence of ground truth for incoming inputs. TRAIT addresses this challenge by converting the synthetic loss into a spectrum signature, using the Fast Fourier Transform to highlight the discrepancies, drawing inspiration from the temporal nature of the imprints, which are analogous to time-series signals. Across 12 AE attacks, including SMACK (USENIX Sec'2023), TRAIT demonstrates consistently outstanding performance across the evaluated modalities, tasks, datasets, and model architectures. In all scenarios, TRAIT achieves an AE detection accuracy exceeding 97%, often around 99%, while maintaining a false rejection rate of 1%. TRAIT remains effective under the formulated strong adaptive attacks.
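TRAIT's core move, treating the per-step loss history as a time-series signal and comparing its frequency-domain signature against clean behavior, can be sketched as follows. The naive DFT, the L1 distance, and the fixed threshold are simplifying assumptions for illustration, not the paper's exact pipeline:

```python
import cmath

def spectrum_signature(losses):
    """Magnitude spectrum of a loss trajectory via a naive DFT,
    mirroring the idea of turning a loss time-series into a
    frequency-domain signature."""
    n = len(losses)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(losses)))
            for k in range(n)]

def detect(losses, clean_reference, threshold):
    """One-class check: flag as adversarial when the spectrum deviates
    from a clean reference signature by more than `threshold` (L1)."""
    sig = spectrum_signature(losses)
    dist = sum(abs(a - b) for a, b in zip(sig, clean_reference))
    return dist > threshold

clean = [0.5, 0.5, 0.5, 0.5]   # flat loss history (clean-like)
adv = [0.5, 2.0, 0.4, 1.9]     # oscillating loss history (AE-like)
ref = spectrum_signature(clean)
```

An oscillating loss trajectory puts energy into nonzero frequency bins, so its signature separates from the flat clean reference even when the raw loss values overlap in range, which is the intuition behind the spectral conversion.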
Artificial Intelligence and Deepfakes: The Growing Problem of Fake Porn Images
In San Francisco, meanwhile, a lawsuit is underway against the operators of a number of nudify apps. In some instances, the complaint identifies the defendants by name, but in the case of Clothoff, the accused is only listed as "Doe," the name frequently used in the U.S. for unknown defendants. According to the website's imprint, Clothoff is operated out of the Argentinian capital Buenos Aires. But the company has concealed the true identities of its operators through the use of shell companies and other methods. For a time, operators even sought to mislead the public with a fake image, presumably generated by AI, of the purported head of Clothoff.
- South America > Argentina > Pampas > Buenos Aires F.D. > Buenos Aires (0.26)
- North America > United States > California > San Francisco County > San Francisco (0.26)
- Information Technology > Security & Privacy (0.74)
- Law > Criminal Law (0.58)
- Leisure & Entertainment > Games > Computer Games (0.36)
Turin Shroud does NOT show the face of Jesus, scientist claims - as virtual simulation shows the imprint on the fabric 'could not have been made by a 3D human body'
The face on the Shroud of Turin could not have come from Jesus' head – and it's doubtful he ever touched it, an explosive new study suggests. Marked with a faint impression of a body and face, the artifact is believed by many to be the actual fabric used to wrap Christ's corpse after his crucifixion. But its documented history only starts in the mid-14th century, and it's been a source of scepticism for almost as long, with many dismissing it as a medieval forgery. Now a new study has found that the impression on the shroud could not have been made by a three-dimensional human body, but was perhaps from a bas-relief – a shallow carving. To reach this conclusion, Cicero Moraes, author of the new study, created a virtual simulation in which a fabric was placed over a body in a bid to replicate the famous shroud.
Is this the real face of Jesus? AI unveils image based on the Turin Shroud - as scientists claim to have new evidence the cloth was used to wrap the body of Christ after his crucifixion
Scientists in Italy hit the headlines this week, after claiming the famous Shroud of Turin dates from Jesus' lifetime around 2,000 years ago. Now, AI has reimagined what the son of God might have actually looked like based on the treasured relic, which is said to feature an imprint of Jesus' face. MailOnline asked the AI tool Merlin: 'Can you generate a realistic image of Jesus Christ based on the face in the Shroud of Turin?' The AI-generated result suggests Christ was white with big blue eyes, a trim beard and thorn marks on his face. So, can you see the similarities with the famous holy imprint? The Shroud of Turin is a 14-foot-long linen cloth with a faint image of a crucified man.
- Europe > Italy > Piedmont > Turin Province > Turin (1.00)
- North America > United States > California (0.06)
- Europe > France (0.06)
SEAL: Systematic Error Analysis for Value ALignment
Revel, Manon, Cargnelutti, Matteo, Eloundou, Tyna, Leppert, Greg
Reinforcement Learning from Human Feedback (RLHF) aims to align language models (LMs) with human values by training reward models (RMs) on binary preferences and using these RMs to fine-tune the base LMs. Despite its importance, the internal mechanisms of RLHF remain poorly understood. This paper introduces new metrics to evaluate the effectiveness of modeling and aligning human values, namely feature imprint, alignment resistance and alignment robustness. We categorize alignment datasets into target features (desired values) and spoiler features (undesired concepts). By regressing RM scores against these features, we quantify the extent to which RMs reward them - a metric we term feature imprint. We define alignment resistance as the proportion of the preference dataset where RMs fail to match human preferences, and we assess alignment robustness by analyzing RM responses to perturbed inputs. Our experiments, utilizing open-source components like the Anthropic/hh-rlhf preference dataset and OpenAssistant RMs, reveal significant imprints of target features and a notable sensitivity to spoiler features. We observed a 26% incidence of alignment resistance in portions of the dataset where LM-labelers disagreed with human preferences. Furthermore, we find that misalignment often arises from ambiguous entries within the alignment dataset. These findings underscore the importance of scrutinizing both RMs and alignment datasets for a deeper understanding of value alignment.
- Law (0.67)
- Health & Medicine (0.67)
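The "feature imprint" metric, obtained by regressing RM scores against feature indicators, reduces in the single-feature case to an ordinary-least-squares slope: how many reward units the RM awards per unit of the feature. A toy sketch under that simplification, with made-up binary labels and scores:

```python
def feature_imprint(scores, feature_vals):
    """OLS slope of reward-model scores on a feature indicator --
    a minimal, single-feature version of the 'feature imprint'
    metric (the paper regresses against many features jointly)."""
    n = len(scores)
    mx = sum(feature_vals) / n
    my = sum(scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(feature_vals, scores))
    var = sum((x - mx) ** 2 for x in feature_vals)
    return cov / var

# Toy data: responses exhibiting a target feature (1) score higher.
has_feature = [1, 1, 0, 0, 1, 0]
rm_scores   = [2.0, 2.2, 0.9, 1.1, 2.1, 1.0]
imprint = feature_imprint(rm_scores, has_feature)
```

A large positive slope for a target feature indicates the RM rewards the desired value; a large slope for a spoiler feature flags sensitivity to an undesired concept.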
Grasping, Part Identification, and Pose Refinement in One Shot with a Tactile Gripper
Lim, Joyce Xin-Yan, Pham, Quang-Cuong
The rise of additive manufacturing brings unique opportunities and challenges. Rapid changes to part design and massive part customization, distinctive to 3D printing (3DP), can be easily achieved. Customized parts that are unique yet share similar features, such as dental moulds, shoe insoles, or engine vanes, could be industrially manufactured with 3DP. However, massive part customization challenges the existing production paradigm of robotics applications, in which part identification and pose refinement rely on repetitive, data-driven, object-dependent approaches. A bottleneck therefore exists in robotics applications for 3DP parts involving massive customization, as feature-based deep learning approaches struggle to distinguish between similar parts such as shoe insoles belonging to different people. We therefore propose a method that augments patterns on 3DP parts so that grasping, part identification, and pose refinement can be executed in one shot with a tactile gripper. We experimentally evaluate our approach from three perspectives, including real insertion tasks that mimic robotic sorting and packing, and achieve excellent classification results, a high insertion success rate of 95%, and sub-millimeter pose-refinement accuracy.
- Asia > Singapore (0.04)
- Africa > Central African Republic > Ombella-M'Poko > Bimbo (0.04)
Precise Robotic Needle-Threading with Tactile Perception and Reinforcement Learning
Yu, Zhenjun, Xu, Wenqiang, Yao, Siqiong, Ren, Jieji, Tang, Tutian, Li, Yutong, Gu, Guoying, Lu, Cewu
This work presents a novel tactile perception-based method, named T-NT, for the needle-threading task, an application of deformable linear object (DLO) manipulation. The task is divided into two main stages: Tail-end Finding and Tail-end Insertion. In the first stage, the agent traces the contour of the thread twice using vision-based tactile sensors mounted on the gripper fingers; the two tracing runs locate the tail-end of the thread. In the second stage, a tactile-guided reinforcement learning (RL) model drives the robot to insert the thread into the target needle eyelet. The RL model is trained in a Unity-based simulated environment that supports tactile rendering, producing realistic tactile images and thread modeling. During insertion, the positions of the poke point and the center of the eyelet are obtained through a pre-trained segmentation model, Grounded-SAM, which predicts masks for both the needle eye and thread imprints. These positions are then fed into the reinforcement learning model, aiding a smoother transition to real-world applications. Extensive experiments on real robots demonstrate the efficacy of our method. More experiments and videos can be found in the supplementary materials and on the website: https://sites.google.com/view/tac-needlethreading.
- Asia > China > Shanghai > Shanghai (0.05)
- North America > United States (0.04)
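The step of turning a predicted segmentation mask into a position for the insertion policy can be illustrated with a simple centroid computation; the 4x4 mask below is a toy stand-in for Grounded-SAM's eyelet output, not the system's actual representation:

```python
def mask_centroid(mask):
    """Centroid (row, col) of a binary mask, e.g. the eyelet or thread
    imprint mask predicted by a segmentation model, used as a target
    position for the insertion policy."""
    pts = [(r, c)
           for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Toy eyelet mask: a 2x2 blob in the middle of a 4x4 image.
eyelet_mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
center = mask_centroid(eyelet_mask)
```

Pixel centroids like this would still need camera calibration to become robot-frame coordinates before the RL model could act on them.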