owen
'I don't take no for an answer': how a small group of women changed the law on deepfake porn
Charlotte Owen: 'The Lords were blown away by these brilliant women.'

For Jodie*, watching the conviction of her best friend, and knowing she helped secure it, felt at first like a kind of victory. It was certainly more than most survivors of deepfake image-based abuse could expect. They had met as students and bonded over their shared love of music. In the years since graduation, he'd also become her support system, the friend she reached for each time she learned that her images and personal details had been posted online without her consent.
- North America > United States (0.29)
- Europe > United Kingdom > Scotland (0.05)
- Europe > United Kingdom > Northern Ireland (0.05)
- (3 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government > Regional Government (1.00)
- (2 more...)
Owen Sampling Accelerates Contribution Estimation in Federated Learning
KhademSohi, Hossein, Hemmati, Hadi, Zhou, Jiayu, Drew, Steve
Federated Learning (FL) aggregates information from multiple clients to train a shared global model without exposing raw data. Accurately estimating each client's contribution is essential not just for fair rewards, but for selecting the most useful clients so the global model converges faster. The Shapley value is a principled choice, yet exact computation scales exponentially with the number of clients, making it infeasible for large federations. We propose FedOwen, an efficient framework that uses Owen sampling to approximate Shapley values under the same total evaluation budget as existing methods while keeping the approximation error small. In addition, FedOwen uses an adaptive client selection strategy that balances exploiting high-value clients with exploring under-sampled ones, reducing bias and uncovering rare but informative data. Under a fixed valuation cost, FedOwen achieves up to 23 percent higher final accuracy within the same number of communication rounds compared to state-of-the-art baselines on non-IID benchmarks.
- North America > Canada > Ontario > Toronto (0.14)
- North America > Canada > Alberta > Census Division No. 6 > Calgary Metropolitan Region > Calgary (0.14)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
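The Owen-sampling idea behind FedOwen can be illustrated on a toy cooperative game. The sketch below is not the paper's implementation; the game, the probability grid, and the sample counts are invented for illustration. It estimates Shapley values through the multilinear extension: for each inclusion probability q, every other player joins the coalition independently with probability q, and the target player's marginal contribution is averaged.

```python
import random

def owen_shapley(players, value, q_levels=11, samples_per_level=64, seed=0):
    """Owen sampling: estimate each player's Shapley value via the
    multilinear extension. For a grid of inclusion probabilities q, draw
    random coalitions in which every other player appears independently
    with probability q, and average the player's marginal contribution."""
    rng = random.Random(seed)
    qs = [k / (q_levels - 1) for k in range(q_levels)]
    phi = {}
    for p in players:
        others = [o for o in players if o != p]
        total = 0.0
        for q in qs:
            for _ in range(samples_per_level):
                coalition = frozenset(o for o in others if rng.random() < q)
                total += value(coalition | {p}) - value(coalition)
        phi[p] = total / (q_levels * samples_per_level)
    return phi

# Additive toy game: "clients" contribute independent utilities, so the
# true Shapley value of each client is exactly its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
shapley = owen_shapley(list(weights), lambda S: sum(weights[p] for p in S))
```

On this additive game every marginal contribution is constant, so the estimate recovers the weights exactly; the sampling budget only matters for games with interactions.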
Neural Low-Discrepancy Sequences
Van Huffel, Michael Etienne, Kirk, Nathan, Chahine, Makram, Rus, Daniela, Rusch, T. Konstantin
Low-discrepancy points are designed to efficiently fill the space in a uniform manner. This uniformity is highly advantageous in many problems in science and engineering, including in numerical integration, computer vision, machine perception, computer graphics, machine learning, and simulation. Whereas most previous low-discrepancy constructions rely on abstract algebra and number theory, Message-Passing Monte Carlo (MPMC) was recently introduced to exploit machine learning methods for generating point sets with lower discrepancy than previously possible. However, MPMC is limited to generating point sets and cannot be extended to low-discrepancy sequences (LDS), i.e., sequences of points in which every prefix has low discrepancy, a property essential for many applications. To address this limitation, we introduce Neural Low-Discrepancy Sequences ($NeuroLDS$), the first machine learning-based framework for generating LDS. Drawing inspiration from classical LDS, we train a neural network to map indices to points such that the resulting sequences exhibit minimal discrepancy across all prefixes. To this end, we deploy a two-stage learning process: supervised approximation of classical constructions followed by unsupervised fine-tuning to minimize prefix discrepancies. We demonstrate that $NeuroLDS$ outperforms all previous LDS constructions by a significant margin with respect to discrepancy measures. Moreover, we demonstrate the effectiveness of $NeuroLDS$ across diverse applications, including numerical integration, robot motion planning, and scientific machine learning. These results highlight the promise and broad significance of Neural Low-Discrepancy Sequences. Our code can be found at https://github.com/camail-official/neuro-lds.
- North America > United States > Illinois (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (4 more...)
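As a baseline for what NeuroLDS is trained to beat, here is a classical LDS construction: the base-2 van der Corput sequence, together with the exact one-dimensional star discrepancy used to evaluate every prefix. This is a minimal sketch; the paper works with multi-dimensional discrepancy measures and learned mappings, neither of which appears here.

```python
def van_der_corput(n, base=2):
    """n-th point of the van der Corput sequence: reverse the base-b
    digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy of a finite point set (Niederreiter's
    closed form over the sorted sample)."""
    xs = sorted(points)
    N = len(xs)
    return max(max(abs((i + 1) / N - x), abs(x - i / N))
               for i, x in enumerate(xs))

# Every power-of-two prefix of the sequence halves the discrepancy.
prefix_disc = [star_discrepancy_1d([van_der_corput(i) for i in range(n)])
               for n in (2, 4, 8)]
```

The halving pattern in `prefix_disc` is exactly the "low discrepancy across all prefixes" property that distinguishes a sequence from a fixed point set.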
DEL-ToM: Inference-Time Scaling for Theory-of-Mind Reasoning via Dynamic Epistemic Logic
Wu, Yuheng, Xie, Jianwen, Zhang, Denghui, Xu, Zhaozhuo
Theory-of-Mind (ToM) tasks pose a unique challenge for large language models (LLMs), which often lack the capability for dynamic logical reasoning. In this work, we propose DEL-ToM, a framework that improves verifiable ToM reasoning through inference-time scaling rather than architectural changes. Our approach decomposes ToM tasks into a sequence of belief updates grounded in Dynamic Epistemic Logic (DEL), enabling structured and verifiable dynamic logical reasoning. We use data generated automatically via a DEL simulator to train a verifier, which we call the Process Belief Model (PBM), to score each belief update step. During inference, the PBM evaluates candidate belief traces from the LLM and selects the highest-scoring one. This allows LLMs to allocate extra inference-time compute to yield more transparent reasoning. Experiments across model scales and benchmarks show that DEL-ToM consistently improves performance, demonstrating that verifiable belief supervision significantly enhances LLMs' ToM capabilities without retraining. Code is available at https://github.com/joel-wu/DEL-ToM.
- Asia > Middle East > Jordan (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
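The inference-time selection step can be sketched independently of any particular LLM or verifier: given N candidate belief traces and a per-step scorer (a stand-in for the Process Belief Model; the scorer below is a made-up heuristic, not the trained PBM), keep the trace whose summed log-score is highest.

```python
import math

def select_best_trace(traces, step_scorer):
    """Best-of-N inference-time scaling: score every belief-update step of
    each candidate trace and return the trace with the highest total
    log-score, mirroring how a process verifier reranks candidates."""
    return max(traces, key=lambda t: sum(math.log(step_scorer(s)) for s in t))

# Toy stand-in scorer: rewards steps phrased as explicit belief updates.
scorer = lambda step: 0.9 if step.startswith("believes(") else 0.1
candidates = [
    ["Anne moved the ball", "Bob was outside"],
    ["believes(Bob, ball in basket)", "believes(Anne, ball in box)"],
]
best = select_best_trace(candidates, scorer)
```

Extra compute enters only through the number of candidates scored, which is what makes the scaling knob purely an inference-time choice.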
AI, bot farms and innocent indie victims: how music streaming became a hotbed of fraud and fakery
There is a battle gripping the music business today around the manipulation of streaming services – and innocent indie artists are the collateral damage. Fraudsters are flooding Spotify, Apple Music and the rest with AI-generated tracks, to try and hoover up the royalties generated by people listening to them. These tracks are cheap, quick and easy to make, with Deezer estimating in April that over 20,000 fully AI-created tracks – that's 18% of new tracks – were being ingested into its platform daily, almost double the number in January. The fraudsters often then use bots, AI or humans to endlessly listen to these fake songs and generate revenue, while others are exploiting upload services to get fake songs put on real artists' pages and siphon off royalties that way. Spotify fines the worst offenders and says it puts "significant engineering resources and research into detecting, mitigating, and removing artificial streaming activity", while Apple Music claims "less than 1% of all streams are manipulated" on its service.
- Europe > Germany (0.06)
- South America > Brazil (0.05)
- North America > Canada (0.05)
- (5 more...)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
Letters from Our Readers
I have a Salvadoran godson who has been in the black hole of President Nayib Bukele's jails for more than three years. He has not been allowed to send letters or to receive visits from anybody, including lawyers. My family and I have tried, through multiple channels, to get him out, without success. This grotesque mockery of justice is what President Trump is trying to normalize, and it is hard for me to understand how we as a country have fallen so far so fast. How does emulating a dictatorship, like that in El Salvador, prove that America is "great"?
- North America > El Salvador (0.27)
- North America > United States > Oregon > Multnomah County > Portland (0.06)
- North America > United States > Ohio (0.06)
- North America > United States > District of Columbia > Washington (0.06)
- Government (0.37)
- Health & Medicine > Therapeutic Area (0.34)
Randomized Quasi-Monte Carlo Features for Kernel Approximation
We investigate the application of randomized quasi-Monte Carlo (RQMC) methods in random feature approximations for kernel-based learning. Compared to the classical Monte Carlo (MC) approach \citep{rahimi2007random}, RQMC improves the deterministic approximation error bound from $O_P(1/\sqrt{M})$ to $O(1/M)$ (up to logarithmic factors), where $M$ is the number of features, matching the rate achieved by quasi-Monte Carlo (QMC) methods \citep{huangquasi}. Beyond the deterministic error bound guarantee, we further establish additional average error bounds for RQMC features: some requiring weaker assumptions and others significantly reducing the exponent of the logarithmic factor. In the context of kernel ridge regression, we show that RQMC features offer computational advantages over MC features while preserving the same statistical error rate. Empirical results further show that RQMC methods maintain stable performance in both low and moderately high-dimensional settings, unlike QMC methods, which suffer from significant performance degradation as dimension increases.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
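The RQMC feature recipe can be illustrated in one dimension with only the standard library: a randomly shifted van der Corput sequence (a Cranley-Patterson rotation, one standard randomization) is pushed through the Gaussian inverse CDF to obtain frequencies for Fourier features approximating the Gaussian kernel. This is a sketch of the general construction under those choices, not the paper's exact estimator.

```python
import math
import random
from statistics import NormalDist

def van_der_corput(n, base=2):
    """n-th point of the van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def rqmc_frequencies(m, seed=0):
    """Randomized QMC draws from a 1-D Gaussian spectral density: shift the
    QMC points by a single uniform rotation, then invert the normal CDF."""
    shift = random.Random(seed).random()
    inv = NormalDist().inv_cdf
    return [inv((van_der_corput(i) + shift) % 1.0) for i in range(m)]

def features(x, freqs):
    """Paired cos/sin Fourier features for the Gaussian kernel."""
    scale = 1.0 / math.sqrt(len(freqs))
    return ([scale * math.cos(w * x) for w in freqs]
            + [scale * math.sin(w * x) for w in freqs])

def kernel_estimate(x, y, freqs):
    """Inner product of feature maps approximates exp(-(x - y)**2 / 2)."""
    return sum(a * b for a, b in zip(features(x, freqs), features(y, freqs)))

freqs = rqmc_frequencies(256)
approx = kernel_estimate(0.0, 1.0, freqs)  # target: exp(-0.5)
```

With paired cos/sin features the diagonal is reproduced exactly, and the off-diagonal error reflects only the quadrature error of the shifted sequence.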
Actually, the New Zelda Is About Ethics in Journalism
In The Legend of Zelda, Hyrule is a land constantly imperiled by maleficent lords of shadow, cataclysmic volcanic eruptions, and an intangible sense of paranormal gloom that sucks the will to live out of every man, Zora, and Goron. Its nations are stratified across the land, and all of them live under the muzzling bounds of an autocratic royal bloodline. In other words, the people of Zelda need a free press, and in the newest game of the franchise, Tears of the Kingdom, Hylians have discovered that occasionally, the pen is mightier than the sword. Those who embark on the adventure will discover ancient vistas, glorious ruins, and, most surprisingly, a proud celebration of the power of journalism. At last, Link is asking the tough questions.
- Information Technology > Artificial Intelligence > Games (0.36)
- Information Technology > Communications (0.32)
August AMA Transcript
As usual, we are holding this AMA on a regular basis with our CEO, Mr Owen Tao. We have some deliverables, and we would like to use this AMA to announce what we have achieved, particularly on the topics that most interest our community members. In the second part we will move on to this month's question-and-answer section. We have over 90 questions from our community. I think one reason we have this many questions is that there were some newcomers who raised many questions whose answers they could find on our website, but I have still selected some of them to be asked in this AMA. As usual, these questions are for our CEO Owen to address. So let's start with the first session, which is about the progress we have made according to our roadmap. Owen, are you ready to start?
Hyperparameter Tuning with Python: Boost your machine learning model's performance via hyperparameter tuning: Owen, Louis: 9781803235875: Amazon.com: Books
You'll start with an introduction to hyperparameter tuning and understand why it's important. Next, you'll learn the best methods for hyperparameter tuning for a variety of use cases and specific algorithm types. This book will not only cover the usual grid or random search but also other powerful underdog methods. Individual chapters are also dedicated to the four main groups of hyperparameter tuning methods: exhaustive search, heuristic search, Bayesian optimization, and multi-fidelity optimization. Later, you will learn about top frameworks like scikit-learn, Hyperopt, Optuna, NNI, and DEAP to implement hyperparameter tuning.
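For a feel of the simplest of these methods before reaching for a framework, here is a dependency-free random search over a toy objective. The objective, parameter ranges, and trial count are invented for illustration; in practice the objective would be something like a cross-validation score.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimal random-search tuner: draw each hyperparameter uniformly
    from its range and keep the best-scoring trial (higher is better)."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its optimum at lr=0.1, reg=1.0 (a stand-in for
# validation accuracy; higher is better, maximum is 0).
objective = lambda p: -((p["lr"] - 0.1) ** 2 + (p["reg"] - 1.0) ** 2)
best, score = random_search(objective, {"lr": (0.0, 1.0), "reg": (0.0, 2.0)})
```

Random search is the usual baseline that the heuristic, Bayesian, and multi-fidelity methods in the later chapters aim to beat under the same trial budget.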