Training Set Reconstruction from Differentially Private Forests: How Effective is DP?
Gorgé, Alice, Ferry, Julien, Gambs, Sébastien, Vidal, Thibaut
Recent research has shown that machine learning models are vulnerable to privacy attacks targeting their training data. Differential privacy (DP) has become a widely adopted countermeasure, as it offers rigorous privacy protections. In this paper, we introduce a reconstruction attack targeting state-of-the-art $\varepsilon$-DP random forests. By leveraging a constraint programming model that incorporates knowledge of the forest's structure and DP mechanism characteristics, our approach formally reconstructs the most likely dataset that could have produced a given forest. Through extensive computational experiments, we examine the interplay between model utility, privacy guarantees, and reconstruction accuracy across various configurations. Our results reveal that random forests trained with meaningful DP guarantees can still leak substantial portions of their training data. Specifically, while DP reduces the success of reconstruction attacks, the only forests fully robust to our attack exhibit predictive performance no better than a constant classifier. Building on these insights, we provide practical recommendations for the construction of DP random forests that are more resilient to reconstruction attacks and maintain non-trivial predictive performance.
- North America > Canada > Quebec > Montreal (0.04)
- North America > United States (0.04)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- Asia > Taiwan (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Constraint-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.76)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
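The leakage the abstract above describes can be illustrated with a minimal, hypothetical sketch: a common way to make tree leaves epsilon-DP is to add Laplace noise to per-class leaf counts, and for moderate epsilon the noisy counts still reveal each leaf's majority class, one signal a reconstruction attack can exploit. The function name and parameters below are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_leaf_counts(counts, epsilon):
    # Laplace mechanism: adding or removing one training point changes
    # one count by at most 1, so sensitivity is 1 and noise scale is 1/epsilon.
    return counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)

true_counts = np.array([40.0, 2.0])            # class counts in one leaf
noisy = dp_leaf_counts(true_counts, epsilon=1.0)
# With scale-1 noise, the gap of 38 between the two classes almost
# always survives, so the leaf's majority label remains recoverable.
```

This is why the abstract's finding is plausible: DP perturbs the released statistics, but at epsilon values that preserve utility, much of the structure an attacker needs can survive the noise.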
Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior
Block, Adam, Jadbabaie, Ali, Pfrommer, Daniel, Simchowitz, Max, Tedrake, Russ
Training dynamic agents from datasets of expert examples, known as imitation learning, promises to take advantage of the plentiful demonstrations available in the modern data environment, analogously to the recent successes of language models conducting unsupervised learning on enormous corpora of text [68, 71]. Imitation learning is especially exciting in robotics, where mass stores of pre-recorded demonstrations on YouTube [1] or cheaply collected simulated trajectories [43, 20] can be converted into learned robotic policies. For imitation learning to be a viable path toward generalist robotic behavior, it needs to be able to both represent and execute the complex behaviors exhibited in the demonstration data. An approach that has shown tremendous promise is generative behavior cloning: fitting generative models, such as diffusion models [2, 19, 34], to expert demonstrations with pure supervised learning. In this paper, we ask: Under what conditions can generative behavior cloning imitate arbitrarily complex expert behavior? In particular, we are interested in how algorithmic choices interface with the dynamics of the agent's environment to render imitation possible. The key challenge separating imitation learning from vanilla supervised learning is compounding error: when the learner executes the trained behavior in its environment, small mistakes can accumulate into larger ones; this in turn may bring the agent to regions of state space not seen during training, leading to still larger deviations from intended trajectories.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
- (3 more...)
- Information Technology (0.46)
- Energy (0.45)
Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation
Fu, Tsu-Jui, Yu, Licheng, Zhang, Ning, Fu, Cheng-Yang, Su, Jong-Chyi, Wang, William Yang, Bell, Sean
Generating a video given the first several static frames is challenging, as the model must anticipate reasonable future frames with temporal coherence. Besides video prediction, the ability to rewind from the last frame or to infill between the head and tail is also crucial, but these settings have rarely been explored for video completion. Since just a few hint frames admit many different outcomes, a system that can follow natural language to perform video completion may significantly improve controllability. Inspired by this, we introduce a novel task, text-guided video completion (TVC), which requires the model to generate a video from partial frames guided by an instruction. We then propose Multimodal Masked Video Generation (MMVG) to address this TVC task. During training, MMVG discretizes the video frames into visual tokens and masks most of them to perform video completion from any time point. At inference time, a single MMVG model can address all three cases of TVC, including video prediction, rewind, and infilling, by applying the corresponding masking conditions. We evaluate MMVG in various video scenarios, including egocentric, animation, and gaming. Extensive experimental results indicate that MMVG is effective in generating high-quality visual appearances with text guidance for TVC.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
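The three TVC cases in the abstract above differ only in which frame positions are given as hints and which are masked for generation. A minimal sketch of the idea, assuming a sequence of t frame tokens (the function name, mask encoding, and parameters are illustrative, not MMVG's actual API):

```python
def tvc_mask(case, t=8, head=2, tail=2):
    """Return a visibility mask over t frame positions
    (1 = frame given as a hint, 0 = frame to be generated)."""
    keep = [0] * t
    if case == "prediction":       # first `head` frames given, future masked
        keep[:head] = [1] * head
    elif case == "rewind":         # last `tail` frames given, past masked
        keep[t - tail:] = [1] * tail
    elif case == "infilling":      # head and tail given, middle masked
        keep[:head] = [1] * head
        keep[t - tail:] = [1] * tail
    else:
        raise ValueError(f"unknown case: {case}")
    return keep

print(tvc_mask("infilling"))       # [1, 1, 0, 0, 0, 0, 1, 1]
```

Because all three layouts are instances of one masking scheme, a single model trained to fill arbitrary masked positions can serve prediction, rewind, and infilling at inference time.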
Google staff call out treatment of temp workers in 'historic' show of solidarity
More than 900 Google workers have signed a letter objecting to the tech giant's treatment of temporary contractors, in what organizers are calling "an historical coalition" between Google's full-time employees [FTEs] and temps, vendors and contractors [TVCs]. In March, Google abruptly shortened the contracts of 34 temp workers on the "personality" team for Google Assistant – the Alexa-like digital assistant that reads you the weather, manages your calendar, sends a text message, or calls you an Uber through your phone or smart speaker. The cuts, which affected contractors around the globe, reinvigorated the debate over Google's extensive use of TVCs, amid a growing labor movement within the company. In recent months, Google FTEs and TVCs have been increasingly vocal in protesting both their working conditions and the ethics of their employer. "For years, Google has boasted of its ability to scale up and down very quickly, and [sic] vocal in its ability to 'navigate changes with agility,'" the letter reads.
- North America > United States > New York (0.05)
- Asia > South Korea > Seoul > Seoul (0.05)
Inside Google's shadow workforce of contract laborers
They eat in Google's cafeterias, ride its commuter shuttles and work alongside its celebrated geeks. They aren't entitled to stock and can't enter certain offices. Many don't have health insurance. Before each weekly Google all-hands meeting, trays of hors d'oeuvres and, sometimes, kegs of beer are carted into an auditorium and satellite offices around the globe for employees, who wear white badges. Those without white badges are asked to return to their desks. Google's Alphabet Inc. employs hordes of these red-badged contract workers in addition to its full-fledged staff. They serve meals and clean offices.
- North America > United States > New York (0.04)
- North America > United States > Iowa (0.04)
- North America > United States > California > Santa Clara County > San Jose (0.04)
- (5 more...)
- Law (1.00)
- Information Technology > Services (1.00)
- Government > Regional Government > North America Government > United States Government (0.69)