leak
Meet Scotland's Whisky-Sniffing Robot Dog
Inside Dewar's cavernous whisky warehouses, man's best mechanical friend--a Boston Dynamics robot dog with an ethanol sensor for a nose--is on the hunt for leaky barrels. Wooden barrels are what make the magic happen in your favorite bottle of whisky. At Bacardi Limited, the world's largest privately held spirits company, barrel leakage is a massive headache. Consider the company's Dewar's blended Scotch whisky brand (just one of the dozens it owns). At any given time, Dewar's will have over 100 warehouses full of aging whisky, with 25,000 casks in each one.
- North America > United States > California (0.04)
- Europe > United Kingdom > Scotland > Highland > Nairn (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Materials (0.83)
- Media (0.71)
- Leisure & Entertainment > Sports (0.70)
Regularized Behavior Cloning for Blocking the Leakage of Past Action Information
For partially observable environments, imitation learning with observation histories (ILOH) assumes that control-relevant information is sufficiently captured in the observation histories for imitating the expert actions. In the offline setting, where the agent is required to learn to imitate without interaction with the environment, behavior cloning (BC) has been shown to be a simple yet effective method for imitation learning. However, when the information about the actions executed in past timesteps leaks into the observation histories, ILOH via BC often ends up imitating its own past actions. In this paper, we address this catastrophic failure by proposing a principled regularization for BC, which we name Past Action Leakage Regularization (PALR). The main idea behind our approach is to leverage the classical notion of conditional independence to mitigate the leakage. We compare different instances of our framework with natural choices of conditional independence metric and its estimator. The result of our comparison advocates the use of a particular kernel-based estimator for the conditional independence metric. We conduct an extensive set of experiments on benchmark datasets in order to assess the effectiveness of our regularization method. The experimental results show that our method significantly outperforms prior related approaches, highlighting its potential to successfully imitate expert actions when the past action information leaks into the observation histories.
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Asia > Russia (0.04)
- Media (1.00)
- Leisure & Entertainment > Sports (1.00)
- Information Technology > Security & Privacy (1.00)
- (3 more...)
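The abstract above describes penalizing statistical dependence between the policy's outputs and past actions. A minimal sketch of that idea, using an unconditional HSIC (Hilbert-Schmidt Independence Criterion) penalty as a simplified stand-in for the paper's kernel-based conditional-independence estimator (the actual PALR estimator conditions on the observation history and may differ substantially):

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Pairwise RBF kernel matrix for a batch of vectors (n, d) -> (n, n).
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC estimate: near zero when x and y are independent,
    # larger when they are statistically dependent.
    n = x.shape[0]
    k, l = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

# Illustrative training objective (names are hypothetical):
#   total_loss = bc_loss(policy(obs_hist), expert_action) \
#              + lam * hsic(policy(obs_hist), past_action)
```

The penalty term pushes the policy to produce actions carrying no information about the previously executed action, which is the failure mode the paper targets.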
Enhanced Water Leak Detection with Convolutional Neural Networks and One-Class Support Vector Machine
Leonzio, Daniele Ugo, Bestagini, Paolo, Marcon, Marco, Tubaro, Stefano
Water is a critical resource that must be managed efficiently. However, a substantial amount of water is lost each year due to leaks in Water Distribution Networks (WDNs). This underscores the need for reliable and effective leak detection and localization systems. In recent years, various solutions have been proposed, with data-driven approaches gaining increasing attention due to their superior performance. In this paper, we propose a new method for leak detection. The method is based on water pressure measurements acquired at a series of nodes of a WDN. Our technique is a fully data-driven solution that makes use only of knowledge of the WDN topology and a series of pressure data acquisitions obtained in the absence of leaks. The proposed solution is based on a feature extractor and a one-class Support Vector Machine (SVM) trained on no-leak data, so that leaks are detected as anomalies. The results achieved on a simulated dataset using the Modena WDN demonstrate that the proposed solution outperforms recent methods for leak detection.
- Asia > Vietnam > Hanoi > Hanoi (0.05)
- Europe > Italy > Lombardy > Milan (0.05)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
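The core pattern in the abstract above — fit a model only on no-leak pressure data, then flag deviations as leaks — can be sketched with a simple Mahalanobis-distance detector. This is a deliberately lightweight stand-in for the paper's CNN feature extractor plus one-class SVM; the class name, threshold quantile, and overall pipeline here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class NoLeakAnomalyDetector:
    """Fit on pressure readings recorded with no leaks; flag large
    deviations as possible leaks. A Mahalanobis-distance proxy for the
    paper's feature extractor + one-class SVM."""

    def fit(self, pressures, quantile=0.99):
        # pressures: (n_samples, n_nodes) no-leak pressure readings.
        self.mean = pressures.mean(axis=0)
        cov = np.cov(pressures, rowvar=False)
        self.prec = np.linalg.pinv(cov)  # precision matrix
        # Threshold at a high quantile of the training distances.
        self.threshold = np.quantile(self._dist(pressures), quantile)
        return self

    def _dist(self, x):
        diff = x - self.mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.prec, diff))

    def predict(self, pressures):
        # True -> anomalous reading (possible leak).
        return self._dist(pressures) > self.threshold
```

Training only on the no-leak regime is what lets the detector treat any sufficiently unusual pressure pattern as a candidate leak, without ever seeing labeled leak examples.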
Leak@$k$: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Reisizadeh, Hadi, Ruan, Jiajun, Chen, Yiwei, Pal, Soumyadeep, Liu, Sijia, Hong, Mingyi
Unlearning in large language models (LLMs) is critical for regulatory compliance and for building ethical generative AI systems that avoid producing private, toxic, illegal, or copyrighted content. Despite rapid progress, in this work we show that \textit{almost all} existing unlearning methods fail to achieve true forgetting in practice. Specifically, while evaluations of these `unlearned' models under deterministic (greedy) decoding often suggest successful knowledge removal using standard benchmarks (as has been done in the literature), we show that sensitive information reliably resurfaces when models are sampled with standard probabilistic decoding. To rigorously capture this vulnerability, we introduce \texttt{leak@$k$}, a new meta-evaluation metric that quantifies the likelihood of forgotten knowledge reappearing when generating $k$ samples from the model under realistic decoding strategies. Using three widely adopted benchmarks, TOFU, MUSE, and WMDP, we conduct the first large-scale, systematic study of unlearning reliability using our newly defined \texttt{leak@$k$} metric. Our findings demonstrate that knowledge leakage persists across methods and tasks, underscoring that current state-of-the-art unlearning techniques provide only limited forgetting and highlighting the urgent need for more robust approaches to LLM unlearning.
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.04)
- Europe > Montenegro (0.04)
- Europe > Poland (0.04)
- (10 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.47)
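The leak@$k$ metric described above asks: if we draw $k$ samples from the "unlearned" model, how likely is the forgotten knowledge to resurface at least once? A natural unbiased estimator mirrors the well-known pass@$k$ estimator from code generation — the exact estimator in the paper may differ, so treat this as a hedged sketch:

```python
from math import comb

def leak_at_k(n, c, k):
    """Estimate P(at least one of k samples leaks), given that c out of
    n drawn samples leaked. Mirrors the pass@k combinatorial estimator:
    1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # cannot draw k non-leaking samples
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, if half of the drawn samples leak, a single draw leaks with probability 0.5, and the estimate rises quickly with $k$ — which is why greedy (one-shot, deterministic) evaluation can make unlearning look far more successful than it is under realistic sampling.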
Covert Quantum Learning: Privately and Verifiably Learning from Quantum Data
Anand, Abhishek, Caro, Matthias C., Karchmer, Ari, Mutreja, Saachi
Quantum learning from remotely accessed quantum compute and data must address two key challenges: verifying the correctness of data and ensuring the privacy of the learner's data-collection strategies and resulting conclusions. The covert (verifiable) learning model of Canetti and Karchmer (TCC 2021) provides a framework for endowing classical learning algorithms with such guarantees. In this work, we propose models of covert verifiable learning in quantum learning theory and realize them without computational hardness assumptions for remote data access scenarios motivated by established quantum data advantages. We consider two privacy notions: (i) strategy-covertness, where the eavesdropper does not gain information about the learner's strategy; and (ii) target-covertness, where the eavesdropper does not gain information about the unknown object being learned. We show: Strategy-covert algorithms for making quantum statistical queries via classical shadows; Target-covert algorithms for learning quadratic functions from public quantum examples and private quantum statistical queries, for Pauli shadow tomography and stabilizer state learning from public multi-copy and private single-copy quantum measurements, and for solving Forrelation and Simon's problem from public quantum queries and private classical queries, where the adversary is a unidirectional or i.i.d. ancilla-free eavesdropper. The lattermost results in particular establish that the exponential separation between classical and quantum queries for Forrelation and Simon's problem survives under covertness constraints. Along the way, we design covert verifiable protocols for quantum data acquisition from public quantum queries which may be of independent interest. Overall, our models and corresponding algorithms demonstrate that quantum advantages are privately and verifiably achievable even with untrusted, remote data.
- Europe > Germany (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (6 more...)
Massive Leak Shows How a Chinese Company Is Exporting the Great Firewall to the World
Geedge Networks, a company with ties to the founder of China's mass censorship infrastructure, is selling its censorship and surveillance systems to at least four other countries in Asia and Africa. A leak of more than 100,000 documents shows that a little-known Chinese company has been quietly selling censorship systems seemingly modeled on the Great Firewall to governments around the world. Geedge Networks, a company founded in 2018 that counts the "father" of China's massive censorship infrastructure as one of its investors, styles itself as a network-monitoring provider, offering business-grade cybersecurity tools to "gain comprehensive visibility and minimize security risks" for its customers, the documents show. In fact, researchers found that it has been operating a sophisticated system that allows users to monitor online information, block certain websites and VPN tools, and spy on specific individuals. Researchers who reviewed the leaked material found that the company is able to package advanced surveillance capabilities into what amounts to a commercialized version of the Great Firewall--a wholesale solution with both hardware that can be installed in any telecom data center and software operated by local government officers.
- Asia > China (0.60)
- North America > Canada (0.14)
- Asia > Russia (0.14)
- (15 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (0.89)