

M3gan 2.0 review – hit-and-miss sequel replaces horror with action comedy

The Guardian

As the very first image of devil-doll sequel M3gan 2.0 emerges on screen, of a desert with the words "somewhere on the Turkish-Iranian border" popping up like it's a Bond movie, you'd be forgiven for double-checking that you're in the right cinema. The original, a grabby artificial intelligence (AI) riff on Child's Play and Annabelle, was a brisk, by-the-numbers domestic horror, released on the first weekend of 2023, a slot usually given to the very worst genre films. M3gan was smarter than most, often sly and frequently funny, and it introduced what's now become a rarity, an almost instant non-IP pop culture icon, whose virality exploded the film into a surprise smash (raking in over $180m from a $12m budget). Like the films it was inspired by, a franchise was inevitable, although where we're taken in M3gan 2.0 was far less of a given. For the follow-up, writer-director Gerard Johnstone has swerved from horror to action while retaining and tweaking the comedy, with a release date upgraded to summer blockbuster territory. It doesn't always work – a two-hour runtime that's a little too long, world-saving stakes that are a little too big, funny lines that are a little too unfunny – but it's a mostly watchable second-tier event movie that, in a world of inconsequential sequels that fail to justify their existence, will do.


"M3GAN 2.0" Is a Victim of Inflation

The New Yorker

At least it shows its symptoms clearly: inflammation and swelling. In the first film, Gemma (Allison Williams), a robotics engineer, becomes the guardian of her orphaned niece, Cady (Violet McGraw), and tests a new invention, the titular A.I.-powered robot-doll, on her. Cady grows attached to the responsive doll, which is programmed to protect the child and takes to the mission with mechanical perfection, slaughtering anyone who expresses hostility – and doing so with snarky pride in her absolute power. At its core, though, "M3GAN" (like the sequel, directed by Gerard Johnstone) is a family melodrama centered on Gemma's struggles with parenting and Cady's need to bond – plus the robot's quick embrace of human cruelty. The film's failures are painful because its setup is fruitful.


M3GAN

The New Yorker

The essence of genre is effects without causes – things showing up to fulfill expectations rather than dramatic necessities. "M3GAN," a science-fiction-based horror caper, provides a clever batch of these effects in its gleefully knowing twist on the "Frankenstein" theme, and its director, Gerard Johnstone, seems to be laughing up his sleeve throughout. It's that very knowingness, the deftness with which the film gets a rise from viewers, that makes a good time feel hollow. There's a different, far more substantial movie lurking within, yet the virtues of efficiency, clarity, surprise, and wit that enliven the one that's actually onscreen leave its merely implied substance tantalizingly unformed. Allison Williams plays Gemma, a type-A robotics engineer at a big toy company in Seattle, Funki, that prospers by selling cheesily interactive furry toys called PurrPetual Petz.


Distributed Nonparametric Function Estimation: Optimal Rate of Convergence and Cost of Adaptation

Cai, T. Tony, Wei, Hongji

arXiv.org Machine Learning

Distributed minimax estimation and distributed adaptive estimation under communication constraints are studied for the Gaussian sequence model and the white noise model. The minimax rate of convergence for distributed estimation over a given Besov class, which serves as a benchmark for the cost of adaptation, is established. We then quantify the exact communication cost of adaptation and construct an optimally adaptive procedure for distributed estimation over a range of Besov classes. The results demonstrate significant differences between nonparametric function estimation in the distributed setting and in the conventional centralized setting: for global estimation, adaptation in general cannot be achieved for free in the distributed setting. The new technical tools developed to obtain the exact characterization of the cost of adaptation may be of independent interest.
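The distributed setting the abstract refers to can be pictured with a toy example: each machine observes noisy data about an unknown parameter but may only transmit a few bits to a central server, which aggregates the messages. The sketch below is a minimal illustration of this communication-constrained regime for a scalar Gaussian location model, not the paper's construction; the number of machines, bit budget, noise level, and quantization grid are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3          # unknown parameter each machine observes noisily
m, b = 200, 4        # m machines, each allowed b bits of communication

# Each machine holds one sample X_i = theta + noise and may send only b bits,
# here realized by rounding its sample to the nearest of 2**b grid points on [-1, 1].
levels = np.linspace(-1.0, 1.0, 2**b)
samples = theta + 0.5 * rng.normal(size=m)
messages = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

# The server averages the quantized messages; quantization and clipping at the
# grid edges are the price of the communication constraint.
estimate = messages.mean()
print(f"estimate = {estimate:.3f} (true theta = {theta})")
```

With a coarser grid (smaller b) the quantization error grows, which is the elementary version of the trade-off between communication budget and estimation accuracy that the paper analyzes at minimax rates over Besov classes.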


What needles do sparse neural networks find in nonlinear haystacks

Sardy, Sylvain, Hengartner, Nicolas W, Bonenko, Nikolai, Lin, Yen Ting

arXiv.org Machine Learning

Using a sparsity-inducing penalty in artificial neural networks (ANNs) avoids over-fitting, especially in situations where noise is high and the training set is small in comparison to the number of features. For linear models, such an approach provably also recovers the important features with high probability for a well-chosen penalty parameter. The typical way of setting the penalty parameter is by splitting the data set and performing cross-validation, which is (1) computationally expensive and (2) undesirable when the data set is already too small to be further split (for example, whole-genome sequence data). In this study, we establish the theoretical foundation for selecting the penalty parameter without cross-validation, based on bounding with high probability the infinity norm of the gradient of the loss function at zero under the zero-feature assumption. Our approach generalizes the universal threshold of Donoho and Johnstone (1994) to nonlinear ANN learning. We perform a set of comprehensive Monte Carlo simulations on a simple model, and the numerical results show the effectiveness of the proposed approach.
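The universal threshold the abstract generalizes can be sketched in its classical linear (Gaussian sequence) setting: with n noisy coefficients at noise level σ, soft-thresholding at λ = σ√(2 log n) kills almost all pure-noise coefficients with high probability while keeping the large "needles". The code below is a toy illustration of that Donoho–Johnstone rule, not the paper's ANN procedure; the signal size, sparsity pattern, and seed are arbitrary choices for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    # Soft-thresholding operator: shrink toward zero by lam; entries with
    # |x| <= lam are set exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
n = 1000
signal = np.zeros(n)
signal[:10] = 5.0                        # 10 "needles": large nonzero coefficients
sigma = 1.0
y = signal + sigma * rng.normal(size=n)  # noisy observations

# Universal threshold (Donoho & Johnstone, 1994): under the zero-feature
# (pure-noise) assumption, lam bounds max |noise| with high probability.
lam = sigma * np.sqrt(2 * np.log(n))
estimate = soft_threshold(y, lam)

# Most of the 990 null coefficients are set exactly to zero (few false positives).
print("nonzero nulls:", np.count_nonzero(estimate[10:]))
```

The paper's contribution is the analogous calibration for nonlinear ANNs: instead of thresholding coefficients directly, the penalty parameter is chosen to dominate, with high probability, the sup-norm of the loss gradient at the zero network under the same zero-feature assumption.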