Navy
Drones hit 'Freedom Flotilla' Gaza aid ship in international waters
A ship carrying aid to Gaza in a bid to break Israel's blockade has been hit by drones in international waters off Malta, according to the Freedom Flotilla Coalition (FFC), the group that organised the mission. The FFC said in a statement on Friday that the vessel, now located 14 nautical miles (25km) from Malta, was the target of two drone strikes while on its way to Gaza. The ship had been seeking to deliver aid to the besieged enclave, where aid groups warn people are struggling to survive following a two-month total blockade by Israel. "Armed drones attacked the front of an unarmed civilian vessel twice, causing a fire and a substantial breach in the hull," the group said. The statement did not directly accuse Israel of carrying out the attack.
REVEALED: The UFO sightings taken seriously by the US government
A 'flame in the sky,' eerie red glowing objects and swarms of UFOs over military bases are just some of the many sightings that have gravely concerned the US government. There are dozens of unsolved cases going back to the 1960s that occurred over nuclear missile installations, Navy ships and a desert in New Mexico. The FBI, CIA, and other government branches have spent years looking into these reports, but have yet to determine what the objects were or where they came from. One report in 2019 detailed how 'drones' appeared over Colorado, Nebraska, Wyoming, and Kansas as locals reported spying a mothership hanging in the sky. In just the last few months, the skies over New Jersey were filled with unidentified aircraft and drones that required a formal response from both the Biden and Trump administrations.
Intrinsic Dimension Estimation for Robust Detection of AI-Generated Texts
The rapidly increasing quality of AI-generated content makes it difficult to distinguish between human-written and AI-generated texts, which may lead to undesirable consequences for society. It is therefore increasingly important to find properties of human texts that are invariant across text domains and levels of writer proficiency, can be easily calculated for any language, and can robustly separate natural and AI-generated texts regardless of the generation model and sampling method. In this work, we propose such an invariant for human-written texts, namely the intrinsic dimensionality of the manifold underlying the set of embeddings for a given text sample. We show that the average intrinsic dimensionality of fluent texts in a natural language hovers around 9 for several alphabet-based languages and around 7 for Chinese, while the average intrinsic dimensionality of AI-generated texts for each language is 1.5 lower, with a clear statistical separation between the human-generated and AI-generated distributions. This property allows us to build a score-based artificial text detector. The proposed detector's accuracy is stable across text domains, generator models, and human writer proficiency levels, outperforming SOTA detectors in model-agnostic and cross-domain scenarios by a significant margin.
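To make the idea concrete, below is a minimal, hedged sketch of how such a score-based detector could be assembled: it estimates the intrinsic dimension of a text's token-embedding cloud with the TwoNN maximum-likelihood estimator and compares it against a threshold. The estimator choice, the threshold value, and all function names are illustrative assumptions, not necessarily the paper's exact method.

```python
# Illustrative sketch only: intrinsic-dimension-based text scoring.
# Assumes token embeddings come from some encoder, shape (n_tokens, dim).
import numpy as np

def two_nn_intrinsic_dimension(points: np.ndarray) -> float:
    """MLE intrinsic-dimension estimate from ratios of 1st/2nd nearest-neighbour
    distances (TwoNN estimator, used here as an illustrative choice)."""
    # Pairwise Euclidean distances via dot products (memory-friendly).
    sq = (points ** 2).sum(axis=1)
    dists = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * points @ points.T, 0.0))
    np.fill_diagonal(dists, np.inf)                 # ignore self-distances
    nearest_two = np.sort(dists, axis=1)[:, :2]     # 1st and 2nd neighbour distances
    r1, r2 = nearest_two[:, 0], nearest_two[:, 1]
    mu = r2 / np.clip(r1, 1e-12, None)              # neighbour-distance ratios
    mu = mu[mu > 1.0]                               # drop degenerate duplicates
    return len(mu) / np.sum(np.log(mu))             # Pareto MLE for the dimension

def looks_human_written(token_embeddings: np.ndarray, threshold: float = 8.0) -> bool:
    """Score-based decision: higher intrinsic dimension suggests human-written text.
    The 8.0 threshold is a placeholder between the ~9 (human) and ~7.5 (AI) averages
    reported for alphabet-based languages; a real detector would calibrate it."""
    return two_nn_intrinsic_dimension(token_embeddings) >= threshold

# Usage with synthetic data: an approximately 9-dimensional cloud embedded in 768-d space.
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(300, 9)) @ rng.normal(size=(9, 768))
print(looks_human_written(fake_embeddings))
```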
The illusory reality of WWI dazzle camouflage, re-examined
During World War I, Allied navies started implementing shocking, cubist-inspired "dazzle" paint jobs on ships. The now-iconic geometric designs were intended to throw off the visual perception of German U-boat crews and prevent them from accurately targeting ships with torpedoes. Conventional wisdom claims the bizarre camouflage pattern worked and helped turn the tide of Great War naval battles. But new research reevaluating one of the only rigorous studies testing that hypothesis suggests those conclusions were probably overblown. Researchers now claim another phenomenon, known as the "horizon effect," may have actually done more to throw off submarine gunners than the wacky aesthetic.
Uncertainty Expression for Human-Robot Task Communication
Porfirio, David, Roberts, Mark, Hiatt, Laura M.
An underlying assumption of many existing approaches to human-robot task communication is that the robot possesses a sufficient amount of environmental domain knowledge, including the locations of task-critical objects. This assumption is unrealistic if the locations of known objects change or have not yet been discovered by the robot. In this work, our key insight is that in many scenarios, robot end users possess more scene insight than the robot and need ways to express it. Presently, there is a lack of research on how solutions for collecting end-user scene insight should be designed. We thereby created an Uncertainty Expression System (UES) to investigate how best to elicit end-user scene insight. The UES allows end users to convey their knowledge of object uncertainty using one of three interfaces: (1) a precision interface that allows meticulous expression of scene insight; (2) a painting interface by which users create a heat map of possible object locations; and (3) a ranking interface by which end users express object locations via an ordered list. We then conducted a user study to compare the effectiveness of these approaches based on the accuracy of the scene insight conveyed to the robot, the efficiency with which end users are able to express this scene insight, and both usability and task load. Results indicate that the ranking interface is more user-friendly and efficient than the precision interface, and that the painting interface is the least accurate.
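As an illustration of how the three interface outputs might be consumed uniformly by a robot, here is a small, hypothetical sketch that maps each input style to a common probability distribution over candidate object locations. None of these names or conversions come from the UES itself; they are assumptions made purely for the example.

```python
# Hypothetical sketch (not the UES codebase): unify the three interface outputs
# into one belief representation a downstream planner could consume.
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneBelief:
    locations: list[str]      # candidate object locations, e.g. room or surface names
    probs: np.ndarray         # belief that the object sits at each location

def from_precision(chosen: str, locations: list[str]) -> SceneBelief:
    """Precision-style input: the user pins the object to one exact location."""
    probs = np.array([1.0 if loc == chosen else 0.0 for loc in locations])
    return SceneBelief(locations, probs)

def from_painting(heat: dict[str, float], locations: list[str]) -> SceneBelief:
    """Painting-style input: per-location 'heat' values, normalised into a distribution."""
    raw = np.array([heat.get(loc, 0.0) for loc in locations])
    return SceneBelief(locations, raw / max(raw.sum(), 1e-12))

def from_ranking(ranked: list[str], locations: list[str]) -> SceneBelief:
    """Ranking-style input: an ordered list, mapped to decaying weights (1, 1/2, 1/3, ...)."""
    weights = {loc: 1.0 / (i + 1) for i, loc in enumerate(ranked)}
    raw = np.array([weights.get(loc, 0.0) for loc in locations])
    return SceneBelief(locations, raw / max(raw.sum(), 1e-12))

# Usage: the same planner can consume any of the three.
rooms = ["kitchen", "office", "hallway"]
print(from_ranking(["office", "kitchen"], rooms).probs)   # kitchen=0.33, office=0.67, hallway=0.0
```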
Training a Generally Curious Agent
Tajwar, Fahim, Jiang, Yiding, Thankaraj, Abitha, Rahman, Sumaita Sadia, Kolter, J Zico, Schneider, Jeff, Salakhutdinov, Ruslan
Efficient exploration is essential for intelligent systems interacting with their environment, but existing language models often fall short in scenarios that require strategic information gathering. In this paper, we present PAPRIKA, a fine-tuning approach that enables language models to develop general decision-making capabilities that are not confined to particular environments. By training on synthetic interaction data from different tasks that require diverse strategies, PAPRIKA teaches models to explore and adapt their behavior to a new task in-context, based on environment feedback, without further gradient updates. Experimental results show that models fine-tuned with PAPRIKA can effectively transfer their learned decision-making capabilities to entirely unseen tasks without additional training. Unlike traditional training, our approach's primary bottleneck is sampling useful interaction data rather than performing model updates. To improve sample efficiency, we propose a curriculum learning strategy that prioritizes sampling trajectories from tasks with high learning potential. These results suggest a promising path towards AI systems that can autonomously solve novel sequential decision-making problems that require interaction with the external world.
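The curriculum idea above lends itself to a simple sketch: preferentially collect interaction data from tasks whose outcomes are most informative. In the snippet below, "learning potential" is approximated by the Bernoulli variance of a task's recent success rate, which is an assumption made for illustration; PAPRIKA's actual prioritization criterion may differ, and all names are hypothetical.

```python
# Hedged sketch of curriculum-style task sampling; the variance proxy and all
# names are illustrative assumptions, not PAPRIKA's exact procedure.
import random

def sample_task(success_history: dict[str, list[int]], temperature: float = 1.0) -> str:
    """Pick the next task to collect trajectories from, weighting tasks whose outcomes
    are most variable (neither already solved nor currently hopeless)."""
    def potential(outcomes: list[int]) -> float:
        p = sum(outcomes) / max(len(outcomes), 1)
        return p * (1.0 - p)                      # Bernoulli variance, peaks at p = 0.5
    tasks = list(success_history)
    weights = [potential(success_history[t]) ** (1.0 / temperature) + 1e-6 for t in tasks]
    return random.choices(tasks, weights=weights, k=1)[0]

# Usage: history maps task ids to 0/1 outcomes of previously sampled trajectories.
history = {
    "twenty_questions": [1, 0, 1, 0],   # ~50% success -> highest estimated learning potential
    "wordle":           [0, 0, 0, 0],   # never solved yet -> low potential under this proxy
    "bandit_best_arm":  [1, 1, 1, 1],   # already mastered -> low potential
}
print(sample_task(history))
```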