The Parameterized Complexity of Cascading Portfolio Scheduling

Neural Information Processing Systems

Cascading portfolio scheduling is a static algorithm selection strategy which uses a sample of test instances to compute an optimal ordering (a cascading schedule) of a portfolio of available algorithms. The algorithms are then applied to each future instance according to this cascading schedule, until some algorithm in the schedule succeeds. Cascading scheduling has proven to be effective in several applications, including QBF solving and generation of ImageNet classification models. It is known that the computation of an optimal cascading schedule in the offline phase is NP-hard. In this paper we study the parameterized complexity of this problem and establish its fixed-parameter tractability by utilizing structural properties of the success relation between algorithms and test instances. Our findings are significant as they reveal that in spite of the intractability of the problem in its general form, one can indeed exploit sparseness or density of the success relation to obtain non-trivial runtime guarantees for finding an optimal cascading schedule.
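The abstract gives no pseudocode, but the mechanics it describes can be sketched directly: evaluate a candidate ordering against the sample instances, charging each instance the cost of every algorithm tried until one succeeds. The fixed per-run cost model, the boolean success relation, and the brute-force search below are illustrative simplifications (the paper's point is precisely that the general problem is NP-hard, so exhaustive search is only viable for tiny portfolios):

```python
from itertools import permutations

def schedule_cost(schedule, success, cost):
    # success[a][i]: True if algorithm a solves sample instance i
    # cost[a]: assumed fixed cost of one run of algorithm a
    total = 0
    n_instances = len(success[schedule[0]])
    for i in range(n_instances):
        for a in schedule:
            total += cost[a]          # pay for this attempt
            if success[a][i]:
                break                 # cascade stops at first success
    return total

def best_schedule(algos, success, cost):
    # Brute force over all orderings: exponential, so only for
    # tiny portfolios; the offline problem is NP-hard in general.
    return min(permutations(algos),
               key=lambda s: schedule_cost(s, success, cost))
```

Running the cheap-but-weak algorithm first wins whenever its successes amortize the extra cost it adds to the instances it fails on, which is exactly the trade-off the optimal ordering balances.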


Man who posted deepfake images of prominent Australian women could face $450,000 penalty

The Guardian

The online safety regulator wants a $450,000 maximum penalty imposed on a man who posted deepfake images of prominent Australian women to a website, in the first case of its kind heard in an Australian court. The eSafety commissioner has launched proceedings against Anthony Rotondo over his failure to remove "intimate images" of several prominent Australian women from a deepfake pornography website. The federal court has kept the names of the women confidential. Rotondo initially refused to comply with the order while he was based in the Philippines, the court heard, but the commissioner launched the case once he returned to Australia. Rotondo posted the images to the MrDeepFakes website, which has since been shut down.


Watch: Humanoid robots fight in Chinese kick-boxing competition

BBC News

Two humanoid robots traded punches while fans watched on, in a competition held in Hangzhou, China, on Sunday. The fight was part of the China Media Group World Robot Competition and featured robots developed by Unitree Robotics. The event included both fighting demonstrations and matches, marking a world-first combat sports event featuring humanoid robots.


Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks

Neural Information Processing Systems

Real-world image recognition is often challenged by the variability of visual styles including object textures, lighting conditions, filter effects, etc. Although these variations have been deemed to be implicitly handled by more training data and deeper networks, recent advances in image style transfer suggest that it is also possible to explicitly manipulate the style information. Extending this idea to general visual recognition problems, we present Batch-Instance Normalization (BIN) to explicitly normalize unnecessary styles from images. Considering certain style features play an essential role in discriminative tasks, BIN learns to selectively normalize only disturbing styles while preserving useful styles. The proposed normalization module is easily incorporated into existing network architectures such as Residual Networks, and surprisingly improves the recognition performance in various scenarios. Furthermore, experiments verify that BIN effectively adapts to completely different tasks like object classification and style transfer, by controlling the tradeoff between preserving and removing style variations. BIN can be implemented with only a few lines of code using popular deep learning frameworks.
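The closing sentence notes that BIN takes only a few lines in popular frameworks. A minimal NumPy sketch of the forward pass, following the gated combination of batch and instance statistics the abstract describes, might look as follows (the gate `rho`, which the paper learns per channel, and the `eps` constant are shown here as plain inputs; the learnable affine scale and shift of standard normalization layers are omitted for brevity):

```python
import numpy as np

def batch_instance_norm(x, rho, eps=1e-5):
    """Gated mix of batch-normalized and instance-normalized activations.

    x:   activations of shape (N, C, H, W)
    rho: per-channel gate in [0, 1]; 1 = pure batch norm, 0 = pure instance norm
    """
    # Batch-norm statistics: shared across the batch, per channel.
    mu_b = x.mean(axis=(0, 2, 3), keepdims=True)
    var_b = x.var(axis=(0, 2, 3), keepdims=True)
    x_bn = (x - mu_b) / np.sqrt(var_b + eps)
    # Instance-norm statistics: per sample and channel (removes style).
    mu_i = x.mean(axis=(2, 3), keepdims=True)
    var_i = x.var(axis=(2, 3), keepdims=True)
    x_in = (x - mu_i) / np.sqrt(var_i + eps)
    # Clip the gate and blend: preserve styles where rho is high,
    # normalize them away where rho is low.
    rho = np.clip(rho, 0.0, 1.0).reshape(1, -1, 1, 1)
    return rho * x_bn + (1.0 - rho) * x_in
```

The gate makes the style trade-off explicit: channels whose style content helps discrimination can keep batch statistics, while channels dominated by nuisance style variation fall back to instance normalization.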



Xin Li

Neural Information Processing Systems

The need to analyze graphs is ubiquitous across various fields, from social networks to biological research and recommendation systems. Therefore, enabling the ability of large language models (LLMs) to process graphs is an important step toward more advanced general intelligence. However, current LLM benchmarks on graph analysis require models to directly reason over the prompts describing graph topology, and are thus limited to small graphs with only a few dozens of nodes. In contrast, human experts typically write programs based on popular libraries for task solving, and can thus handle graphs with different scales. To this end, a question naturally arises: can LLMs analyze graphs like professionals?
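The abstract contrasts prompt-based reasoning, which caps out at a few dozen nodes, with the expert practice of writing programs against graph libraries. As a stand-in illustration of that program-based style (pure Python here rather than any particular library, with a hypothetical helper name and adjacency-list format), a shortest-path query scales to graphs far larger than anything that fits in a prompt:

```python
from collections import deque

def shortest_path_length(adj, src, dst):
    # Breadth-first search over an adjacency-list graph.
    # Returns the number of edges on a shortest src -> dst path,
    # or None if dst is unreachable.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None
```

The point of the benchmark question is whether an LLM can produce code like this on demand, rather than simulating the traversal token by token inside the prompt.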



Oracle-Efficient Differentially Private Learning with Public Data

Mark Bun

Neural Information Processing Systems

Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms. In this model, algorithms must always guarantee differential privacy with respect to the private samples while also ensuring learning guarantees when the private data distribution is sufficiently close to that of the public data. Previous work has demonstrated that when sufficient public, unlabelled data is available, private learning can be made statistically tractable, but the resulting algorithms have all been computationally inefficient. In this work, we present the first computationally efficient algorithms to provably leverage public data to learn privately whenever a function class is learnable non-privately, where our notion of computational efficiency is with respect to the number of calls to an optimization oracle for the function class. In addition to this general result, we provide specialized algorithms with improved sample complexities in the special cases when the function class is convex or when the task is binary classification.



The Finale of "The Rehearsal" Is Outlandish and Sublime

The New Yorker

Nathan Fielder, like Andy Kaufman before him, makes performance-art comedy that not only pokes fun at the world but experimentally perturbs it, and he plies this trade in the buffer zone between reality and artifice. He presents himself as something of a Kaspar Hauser figure for the age of artificial intelligence, a foundling raised not by wolves but by an advanced and affectless race of extraterrestrial anthropologists. His object is to isolate and mimic the rudiments of human sociability. Fielder's intuition is that many putatively normal people share his own bewildered dread of everyday interactions, which are at once governed by established, if opaque, social norms and subject to unnerving unpredictability. Children learn to tame uncertainty through repetition: they replay interactions in an effort to interpret and control the varied challenges of their environment.