Two police officers killed in explosion in Moscow
Three people - including two police officers - have been killed in an explosion in Moscow, Russian authorities have said. Two traffic police officers saw a suspicious individual near a police car on the city's Yeletskaya Street, and when they approached the suspect to detain him, an explosive device was detonated, Russia's Investigative Committee has said. The two police officers died from their injuries, along with another individual who was standing nearby. The attack comes two days after a senior Russian general was killed in a car bombing in the capital on Monday. Lt Gen Fanil Sarvarov died after an explosive device - which had been planted under a car - was detonated.
- Asia > Russia (1.00)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.34)
- North America > United States (0.17)
- (16 more...)
AdLift: Lifting Adversarial Perturbations to Safeguard 3D Gaussian Splatting Assets Against Instruction-Driven Editing
Hong, Ziming, Huang, Tianyu, Chen, Runnan, Ye, Shanshan, Gong, Mingming, Han, Bo, Liu, Tongliang
Recent studies have extended diffusion-based, instruction-driven 2D image editing pipelines to 3D Gaussian Splatting (3DGS), enabling faithful manipulation of 3DGS assets and greatly advancing 3DGS content creation. However, this capability also exposes these assets to serious risks of unauthorized editing and malicious tampering. Although imperceptible adversarial perturbations against diffusion models have proven effective for protecting 2D images, applying them to 3DGS raises two major challenges: view-generalizable protection and balancing invisibility with protection capability. In this work, we propose the first editing safeguard for 3DGS, termed AdLift, which prevents instruction-driven editing across arbitrary views and dimensions by lifting strictly bounded 2D adversarial perturbations into a 3D Gaussian-represented safeguard. To ensure both the effectiveness and invisibility of the adversarial perturbations, these safeguard Gaussians are progressively optimized across training views using a tailored Lifted PGD, which first performs gradient truncation during back-propagation from the editing model at the rendered image and applies projected gradients to strictly constrain the image-level perturbation; the resulting perturbation is then back-propagated to the safeguard Gaussian parameters via an image-to-Gaussian fitting operation. Alternating between gradient truncation and image-to-Gaussian fitting yields consistent adversarial protection across different viewpoints that generalizes to novel views. Empirically, qualitative and quantitative results demonstrate that AdLift effectively protects against state-of-the-art instruction-driven 2D image and 3DGS editing.
- Education > Curriculum > Subject-Specific Education (0.54)
- Information Technology > Security & Privacy (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Sensing and Signal Processing > Image Processing (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.67)
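The image-level projection at the heart of Lifted PGD can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, step size, and the L-infinity budget `eps` are illustrative assumptions; only the pattern the abstract describes (a signed-gradient step followed by projection back onto an eps-ball around the clean rendering) is shown.

```python
import numpy as np

def lifted_pgd_step(image, grad, step_size=2 / 255, eps=8 / 255):
    """One projected-gradient step that strictly bounds the image-level
    perturbation, as a hedged sketch of the projection idea in the abstract.

    `image` is the clean rendered view, `grad` the gradient back-propagated
    from the editing model at that view. Values for `step_size` and `eps`
    are illustrative, not taken from the paper.
    """
    stepped = image + step_size * np.sign(grad)   # ascend the editing loss
    delta = np.clip(stepped - image, -eps, eps)   # project onto the eps-ball
    return np.clip(image + delta, 0.0, 1.0)       # keep a valid pixel range
```

In the full method, the bounded image-level perturbation produced here would then be fitted back into the safeguard Gaussian parameters (the image-to-Gaussian step), and the two operations alternated across training views.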
Easter Island mystery is SOLVED: Scientists finally pinpoint who built the iconic stone heads 900 years ago
One of the biggest mysteries surrounding Easter Island may finally be solved, as scientists pinpoint who built the iconic stone heads more than 900 years ago. In the past, researchers assumed that the 12- to 80-ton statues would have required the combined efforts of hundreds of labourers to build and move. However, new archaeological evidence shows that the statues, known as moai, were not carved by a single powerful chiefdom.
Instead, each moai was carved by a small clan or an individual family, with as few as four to six people working on a single statue. Using a new 3D model of the island's main moai quarry, archaeologists identified 30 distinct 'workshops' where the statues were produced.
- North America > United States > Maryland (0.24)
- North America > Canada > Alberta (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- (13 more...)
- Personal > Obituary (0.46)
- Research Report > New Finding (0.34)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (6 more...)
"Monuments," Reviewed: The Confederacy Surrenders to a Truer American Past
As the Trump Administration tries to rescue symbols of the Lost Cause, an exhibition in Los Angeles, led by Kara Walker, finds meaning in their desecration. Kara Walker's "Unmanned Drone" (2023) transforms a Stonewall Jackson statue. The first thing you see is a horse's ass, protruding, upside down, from the thorax of a monster. A man's arm descends from the beast's stomach, his gloved hand clutching the blade of a fallen sabre. Every part of the work comes from a statue of the Confederate general Stonewall Jackson that was removed from Charlottesville, Virginia, in 2021.
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- North America > United States > Virginia > Albemarle County > Charlottesville (0.24)
- North America > United States > New York (0.06)
- (9 more...)
How Easter Island's famed heads 'walked'
The mystery of how the roughly 130,000-pound statues traveled from quarry to resting place may be solved. Rollers, wooden carts, and even alien life are just a few of the theories of how people moved the iconic moai statues of Easter Island (also called Rapa Nui).
- North America > United States > Wisconsin (0.05)
- North America > United States > New York > Broome County > Binghamton (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- (3 more...)
The search for Cleopatra's long-lost tomb leads to sunken seaport
A new documentary explores this 2,000-year-old mystery and a connection to the RMS Titanic. She's among the most famous leaders in world history, yet archaeologists still don't know the location of Egyptian Queen Cleopatra's tomb. Now, National Geographic Explorer and archaeologist Dr. Kathleen Martínez and her team have uncovered a major clue in their 20-year hunt: the remains of a port off Egypt's Mediterranean coast. The previously unknown ancient port could have been used to keep the Egyptian queen's remains out of Roman hands.
- North America > United States > Wisconsin (0.05)
- Europe (0.05)
- Atlantic Ocean > Mediterranean Sea (0.05)
- (2 more...)
- Health & Medicine > Therapeutic Area (0.31)
- Media (0.30)
NovelHopQA: Diagnosing Multi-Hop Reasoning Failures in Long Narrative Contexts
Gupta, Abhay, Lu, Michael, Zhu, Kevin, O'Brien, Sean, Sharma, Vasu
Current large language models (LLMs) struggle to answer questions that span tens of thousands of tokens, especially when multi-hop reasoning is involved. While prior benchmarks explore long-context comprehension or multi-hop reasoning in isolation, none jointly vary context length and reasoning depth in natural narrative settings. We introduce NovelHopQA, the first benchmark to evaluate 1-4 hop QA over 64k-128k-token excerpts from 83 full-length public-domain novels. A keyword-guided pipeline builds hop-separated chains grounded in coherent storylines. We evaluate seven state-of-the-art models and apply oracle-context filtering to ensure all questions are genuinely answerable. Human annotators validate both alignment and hop depth. We additionally present retrieval-augmented generation (RAG) evaluations to test model performance when only selected passages are provided instead of the full context. We observe consistent accuracy drops as both hop count and context length increase, even for frontier models, revealing that sheer scale does not guarantee robust reasoning. Failure-mode analysis highlights common breakdowns such as missed final-hop integration and long-range drift. NovelHopQA offers a controlled diagnostic setting to test multi-hop reasoning at scale. All code and datasets are available at https://novelhopqa.github.io.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > India (0.04)
VIST-GPT: Ushering in the Era of Visual Storytelling with LLMs?
Gado, Mohamed, Taliee, Towhid, Memon, Muhammad, Ignatov, Dmitry, Timofte, Radu
Visual storytelling is an interdisciplinary field combining computer vision and natural language processing to generate cohesive narratives from sequences of images. This paper presents a novel approach that leverages recent advancements in multimodal models, specifically adapting transformer-based architectures and large multimodal models, for the visual storytelling task. Using the large-scale Visual Storytelling (VIST) dataset, our VIST-GPT model produces visually grounded, contextually appropriate narratives. We address the limitations of traditional evaluation metrics, such as BLEU, METEOR, ROUGE, and CIDEr, which are not suitable for this task. Instead, we utilize RoViST and GROOVIST, novel reference-free metrics designed to assess visual storytelling, focusing on visual grounding, coherence, and non-redundancy. These metrics provide a more nuanced evaluation of narrative quality, aligning closely with human judgment.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Oceania > Australia (0.04)
- Europe > United Kingdom (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
Tiny Lidars for Manipulator Self-Awareness: Sensor Characterization and Initial Localization Experiments
Caroleo, Giammarco, Albini, Alessandro, De Martini, Daniele, Barfoot, Timothy D., Maiolino, Perla
For several tasks, ranging from manipulation to inspection, it is beneficial for robots to localize a target object in their surroundings. In this paper, we propose an approach that utilizes coarse point clouds obtained from miniaturized VL53L5CX Time-of-Flight (ToF) sensors (tiny lidars) to localize a target object in the robot's workspace. We first conduct an experimental campaign to calibrate the dependency of sensor readings on relative range and orientation to targets. We then propose a probabilistic sensor model that is validated in an object pose estimation task using a Particle Filter (PF). The results show that the proposed sensor model improves the performance of the localization of the target object with respect to two baselines: one that assumes measurements are free from uncertainty and one in which the confidence is provided by the sensor datasheet.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Middle East > Cyprus (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
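The particle-filter measurement update the abstract describes can be sketched as follows. This is a hedged sketch, not the authors' code: `gaussian_range_model` stands in for the calibrated probabilistic sensor model, and its noise parameter `sigma` is an assumed value, not one from the paper or the VL53L5CX datasheet.

```python
import numpy as np

def pf_update(particles, weights, z, sensor_model):
    """One measurement update of a particle filter: reweight each particle
    by the likelihood of the observation `z` under the sensor model, then
    renormalize. `sensor_model(z, x)` returns p(z | particle state x)."""
    likelihoods = np.array([sensor_model(z, x) for x in particles])
    weights = weights * likelihoods
    total = weights.sum()
    if total == 0:  # every particle implausible: fall back to uniform
        return np.full(len(particles), 1.0 / len(particles))
    return weights / total

def gaussian_range_model(z, x, sigma=0.02):
    """Illustrative Gaussian range likelihood; `sigma` (in metres) would
    come from the range/orientation calibration campaign in practice."""
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)
```

Replacing `gaussian_range_model` with a likelihood whose variance varies with range and incidence angle is what distinguishes a calibrated model like the paper's from the two baselines (no uncertainty, or datasheet-only confidence).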
EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents
Cheng, Zhili, Tu, Yuge, Li, Ran, Dai, Shiqi, Hu, Jinyi, Hu, Shengding, Li, Jiahao, Shi, Yang, Yu, Tianyu, Chen, Weize, Shi, Lei, Sun, Maosong
Multimodal Large Language Models (MLLMs) have shown significant advancements, providing a promising future for embodied agents. Existing benchmarks for evaluating MLLMs primarily utilize static images or videos, limiting assessments to non-interactive scenarios. Meanwhile, existing embodied AI benchmarks are task-specific and insufficiently diverse, and thus do not adequately evaluate the embodied capabilities of MLLMs. To address this, we propose EmbodiedEval, a comprehensive and interactive evaluation benchmark for MLLMs with embodied tasks. EmbodiedEval features 328 distinct tasks within 125 varied 3D scenes, each of which is rigorously selected and annotated. It covers a broad spectrum of existing embodied AI tasks with significantly enhanced diversity, all within a unified simulation and evaluation framework tailored for MLLMs. The tasks are organized into five categories: navigation, object interaction, social interaction, attribute question answering, and spatial question answering, assessing different capabilities of the agents. We evaluated state-of-the-art MLLMs on EmbodiedEval and found that they fall significantly short of human-level performance on embodied tasks. Our analysis demonstrates the limitations of existing MLLMs in embodied capabilities, providing insights for their future development. We open-source all evaluation data and the simulation framework at https://github.com/thunlp/EmbodiedEval.
- Workflow (0.45)
- Research Report (0.40)