A Kid With a Fake Mustache Tricked an Online Age-Verification Tool

WIRED

Meta is beefing up its age-verification mechanisms with an AI system that analyzes images and videos on Instagram and Facebook for "visual cues," such as height and bone structure, to identify and delete accounts of users under the age of 13. The company announced the move amid a wave of cases in which hundreds of children have managed to evade social-network age restrictions, sometimes through tricks as simple as drawing on a mustache. The new approach is part of a series of measures in Meta's AI-based safety strategy, designed to correct the limitations of traditional methods that rely heavily on self-reported age. With the change, the company aims to make it harder for minors to access platforms that are, in theory, restricted to them.




Conditional-$t^3$VAE: Equitable Latent Space Allocation for Fair Generation

Bouayed, Aymene Mohammed, Deslauriers-Gauthier, Samuel, Iaccovelli, Adrian, Naccache, David

arXiv.org Machine Learning

Variational Autoencoders (VAEs) with global priors mirror the training set's class frequencies in latent space, underrepresenting tail classes and reducing generative fairness on imbalanced datasets. While $t^3$VAE improves robustness via heavy-tailed Student's t-distribution priors, it still allocates latent volume in proportion to class frequency. In this work, we address this issue by explicitly enforcing equitable latent space allocation across classes. To this end, we propose Conditional-$t^3$VAE, which defines a per-class Student's t joint prior over latent and output variables, preventing dominance by majority classes. Our model is optimized using a closed-form objective derived from the $\gamma$-power divergence. Moreover, for class-balanced generation, we derive an equal-weight latent mixture of Student's t-distributions. On SVHN-LT, CIFAR100-LT, and CelebA, Conditional-$t^3$VAE consistently achieves lower FID scores than both $t^3$VAE and Gaussian-based VAE baselines, particularly under severe class imbalance. In per-class F1 evaluations, Conditional-$t^3$VAE also outperforms the conditional Gaussian VAE across all highly imbalanced settings. While Gaussian-based models remain competitive under mild imbalance ratios ($\rho \lesssim 3$), our approach substantially improves generative fairness and diversity in more extreme regimes.
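The "equal-weight latent mixture" idea can be sketched in a few lines: pick a class uniformly at random (rather than by training-set frequency), draw a heavy-tailed Student's t latent, and decode. The decoder stand-ins and the degrees-of-freedom value below are illustrative assumptions, not the paper's learned components.

```python
import numpy as np

def sample_balanced(decoders, latent_dim, n_samples, df=5.0, rng=None):
    """Class-balanced sampling from an equal-weight mixture of per-class
    Student's t latent priors (sketch; the paper's priors are learned)."""
    rng = np.random.default_rng(rng)
    n_classes = len(decoders)
    samples = []
    for _ in range(n_samples):
        # Equal-weight mixture: each class chosen with probability 1/K,
        # regardless of its frequency in the training data.
        c = rng.integers(n_classes)
        # Heavy-tailed latent draw (standard Student's t with `df` dof).
        z = rng.standard_t(df, size=latent_dim)
        samples.append(decoders[c](z))
    return samples

# Toy "decoders": identity plus a class offset, for illustration only.
decoders = [lambda z, k=k: z + k for k in range(3)]
imgs = sample_balanced(decoders, latent_dim=4, n_samples=6, rng=0)
```

Sampling the class uniformly is what decouples generation frequency from training-set frequency; everything else is a standard mixture draw.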


The Law Professor Flying Surveillance Drones in Ukraine

The New Yorker

Vasyl Bilous's last name means "white mustache." His actual mustache is dark brown with a hint of gray. He's worn one since high school. In a picture that he took on the first day of Russia's full-scale invasion of Ukraine, Vasyl has a chevron mustache, a neat barbershop cut: close on the sides, paintbrush-thick on top. At the time, he was an assistant professor of forensics at the National Law University, in Kharkiv, and a lawyer in private practice.


MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

Tolomei, Gabriele, Takanen, Lorenzo, Pinelli, Fabio

arXiv.org Artificial Intelligence

In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed like existing policies. We formulate the page request prediction problem as a categorical time series forecasting task. Then, our method queries the learned page request forecaster to obtain the next $k$ predicted page memory references to better approximate the optimal Bélády replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%, respectively.
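The core eviction idea can be sketched directly: Bélády's optimal policy evicts the page whose next use lies farthest in the future, and MUSTACHE approximates that future with the forecaster's next-$k$ predicted references. A minimal sketch of that decision rule, with function and variable names invented here for illustration:

```python
def evict_belady(cache, predicted_refs):
    """Pick a victim page the way Bélády's policy would, but using a
    forecaster's next-k predicted references instead of the true future.
    (Sketch only; MUSTACHE's simulator integration is more involved.)"""
    # Pages never appearing in the predicted window are the safest victims.
    for page in cache:
        if page not in predicted_refs:
            return page
    # Otherwise evict the page whose next predicted use is farthest away.
    return max(cache, key=lambda p: predicted_refs.index(p))

cache = ["A", "B", "C"]
preds = ["C", "A", "C", "A"]          # forecaster's next-4 page references
victim = evict_belady(cache, preds)   # "B" never reappears, so it is evicted
```

The quality of this approximation depends entirely on the forecaster: a mispredicted window degrades gracefully toward heuristic behavior rather than failing outright.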


Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

Kim, Hyemi, Shin, Seungjae, Jang, JoonHo, Song, Kyungwoo, Joo, Weonyoung, Kang, Wanmo, Moon, Il-Chul

arXiv.org Artificial Intelligence

The problem of fair classification can be mitigated if we develop a method to remove the embedded sensitive information from the classification features. This line of work separates the sensitive information through causal inference, which enables counterfactual generation to contrast the what-if case of the opposite sensitive attribute. Alongside this causal separation, a frequent assumption in deep latent causal models is a single latent variable that absorbs the entire exogenous uncertainty of the causal graph. However, we claim that such a structure cannot distinguish 1) information caused by the intervention (i.e., the sensitive variable) from 2) information merely correlated with the intervention in the data. Therefore, this paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: one 1) independent of interventions and one 2) correlated with interventions without causality. In particular, our disentangling approach preserves the latent variable correlated with interventions when generating counterfactual examples. We show that our method estimates the total effect and the counterfactual effect without a complete causal graph. By adding a fairness regularization, DCEVAE generates a counterfactually fair dataset while losing less of the original information. DCEVAE also generates natural counterfactual images by flipping only the sensitive information. Additionally, we theoretically show the differences in the covariance structures of DCEVAE and prior works from the perspective of latent disentanglement.
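The counterfactual-generation step the abstract describes — flip the sensitive attribute, keep the exogenous latents fixed — can be illustrated on a toy linear structural model. Everything below (the weights, the linear form, the variable names) is an assumption made for illustration; it is not DCEVAE itself.

```python
# Toy linear structural model: a feature is generated from a sensitive
# attribute `a` and two exogenous latents, one independent of the
# intervention (z_ind) and one correlated with it (z_corr).
W_A, W_IND, W_CORR = 2.0, 1.0, 0.5

def generate(a, z_ind, z_corr):
    return W_A * a + W_IND * z_ind + W_CORR * z_corr

def counterfactual(a, z_ind, z_corr):
    # Intervene on the sensitive attribute; preserve both exogenous
    # latents, including the one correlated with the intervention.
    return generate(1 - a, z_ind, z_corr)

x = generate(a=1, z_ind=0.3, z_corr=-0.2)             # factual outcome
x_cf = counterfactual(a=1, z_ind=0.3, z_corr=-0.2)    # what-if under a=0
```

The point of the sketch is the invariant: only `a` changes between the factual and counterfactual outcomes, so any difference between `x` and `x_cf` is attributable to the sensitive attribute.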


Causal Adversarial Network for Learning Conditional and Interventional Distributions

Moraffah, Raha, Moraffah, Bahman, Karami, Mansooreh, Raglin, Adrienne, Liu, Huan

arXiv.org Machine Learning

We propose a generative Causal Adversarial Network (CAN) for learning and sampling from conditional and interventional distributions. In contrast to the existing CausalGAN, which requires the causal graph to be given, our proposed framework learns the causal relations from the data and generates samples accordingly. The proposed CAN comprises a two-fold process, namely a Label Generation Network (LGN) and a Conditional Image Generation Network (CIGN). The LGN is a GAN-based architecture which learns and samples from the causal model over labels. The sampled labels are then fed to the CIGN, a conditional GAN architecture, which learns the relationships between labels and pixels, and among the pixels themselves, and generates samples based on them. The framework is equipped with an intervention mechanism which enables the model to generate samples from interventional distributions. We quantitatively and qualitatively assess the performance of CAN and empirically show that our model is able to generate both interventional and conditional samples without access to the causal graph, for the application of face generation on the CelebA data.


How deep learning could revolutionize broadcasting

#artificialintelligence

Max Kalmykov is the VP of Media and Entertainment at DataArt. Broadcasters and movie studios alike are starting to explore the huge potential of modern technologies to bring a new generation of filmed entertainment to our TV sets and cinemas. Artificial intelligence, machine learning, and deep learning are the buzzwords that excite video executives with promises of revolutionary new abilities for video creation and editing. Deep learning, in particular, is the new frontier for the video industry, allowing video professionals to do things automatically that would have taken weeks of work in the past, as well as some things that wouldn't have been possible at all. How is deep learning different from other machine learning algorithms?


The Data Driven Partier: Movie Mustache – Towards Data Science

#artificialintelligence

The concept behind 'Movie Mustache' is simple, but revolutionary. This game was foreign to me until a few weeks ago, when I got to experience it watching the Adam Sandler classic, The Waterboy. As amazing as it was to watch every single character wear the handlebar mustaches that were taped to the TV, that paled in comparison to the problem statement that followed: how can we place the mustaches to maximize our drinking as a group? As always, the code seen here can be viewed in its entirety on my GitHub. Thanks to the availability of facial recognition packages, this problem can be solved in less than a day with the right approach.
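The geometry behind taping a mustache over an on-screen face is easy to sketch: a face detector (e.g., OpenCV's Haar cascades) yields face bounding boxes, and the mustache goes in a smaller box between nose and mouth. The fractions and example coordinates below are rough guesses for illustration, not values from the original post.

```python
def mustache_box(face_box, width_frac=0.6, height_frac=0.15):
    """Given a detected face bounding box (x, y, w, h), return a box for
    a mustache overlay just above the mouth. Fractions are assumptions."""
    x, y, w, h = face_box
    mw = int(w * width_frac)
    mh = int(h * height_frac)
    mx = x + (w - mw) // 2        # centered horizontally on the face
    my = y + int(h * 0.62)        # roughly between nose and mouth
    return (mx, my, mw, mh)

# A face detector would supply boxes like these (hypothetical values):
faces = [(100, 50, 80, 80), (300, 60, 100, 100)]
overlays = [mustache_box(f) for f in faces]
```

With the overlay boxes in hand, the remaining work is compositing a transparent mustache PNG into each box frame by frame.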