Robust low-rank training via approximate orthonormal constraints

Neural Information Processing Systems

Low-rank factorizations of the weight matrices can substantially reduce training and inference costs, but they tend to compromise robustness against adversarial perturbations. By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices.
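To make the idea concrete, here is a minimal PyTorch sketch (not the paper's implementation; the penalty weight and stand-in task loss are assumptions): a soft penalty on how far each low-rank factor is from having orthonormal columns keeps the factors' singular values near one, which in turn keeps the condition number of the composed weight W = U Vᵀ from exploding.

```python
import torch

def orthonormal_penalty(factor: torch.Tensor) -> torch.Tensor:
    """||F^T F - I||_F^2: zero exactly when the columns of F are orthonormal."""
    k = factor.shape[1]
    gram = factor.T @ factor
    return ((gram - torch.eye(k)) ** 2).sum()

def condition_number(weight: torch.Tensor, rank: int) -> float:
    """Condition number restricted to the rank-k range: sigma_1 / sigma_k."""
    s = torch.linalg.svdvals(weight)
    return (s[0] / s[rank - 1]).item()

torch.manual_seed(0)
m, n, rank = 64, 64, 8
U = torch.randn(m, rank, requires_grad=True)
V = torch.randn(n, rank, requires_grad=True)
target = torch.randn(m, n)          # stand-in regression target
opt = torch.optim.SGD([U, V], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    task = ((U @ V.T - target) ** 2).mean()   # stand-in training loss
    reg = orthonormal_penalty(U) + orthonormal_penalty(V)
    (task + 0.1 * reg).backward()             # 0.1 is a hypothetical weight
    opt.step()

print("kappa(U V^T) =", condition_number(U @ V.T, rank))
```

With the penalty switched off, the same fit typically drives the smallest retained singular value toward zero and the measured condition number up, which is the failure mode the abstract describes.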



Less is More: Data-Efficient Adaptation for Controllable Text-to-Video Generation

Cheng, Shihan, Kulkarni, Nilesh, Hyde, David, Smirnov, Dmitriy

arXiv.org Artificial Intelligence

Fine-tuning large-scale text-to-video diffusion models to add new generative controls, such as those over physical camera parameters (e.g., shutter speed or aperture), typically requires vast, high-fidelity datasets that are difficult to acquire. In this work, we propose a data-efficient fine-tuning strategy that learns these controls from sparse, low-quality synthetic data. We show that not only does fine-tuning on such simple data enable the desired controls, it actually yields superior results to models fine-tuned on photorealistic "real" data. Beyond demonstrating these results, we provide a framework that justifies this phenomenon both intuitively and quantitatively.
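The abstract does not specify how the control is wired in; one common pattern for this kind of fine-tuning, shown here purely as a hypothetical PyTorch sketch (the adapter name, dimensions, and zero-initialization are all assumptions, not the paper's method), is to embed the scalar camera parameter and add it to the backbone's timestep embedding, training only the new adapter:

```python
import torch
import torch.nn as nn

class CameraAdapter(nn.Module):
    """Hypothetical adapter: maps a scalar camera parameter (e.g., aperture)
    into the conditioning space and adds it to the timestep embedding."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, embed_dim), nn.SiLU(), nn.Linear(embed_dim, embed_dim)
        )
        # Zero-init the last layer so fine-tuning starts exactly from the
        # pretrained model's behavior.
        nn.init.zeros_(self.mlp[-1].weight)
        nn.init.zeros_(self.mlp[-1].bias)

    def forward(self, t_emb: torch.Tensor, aperture: torch.Tensor) -> torch.Tensor:
        return t_emb + self.mlp(aperture.unsqueeze(-1))

adapter = CameraAdapter(embed_dim=256)
t_emb = torch.randn(4, 256)                    # timestep embeddings from the backbone
aperture = torch.tensor([1.4, 2.8, 5.6, 8.0])  # f-numbers for a batch of 4 clips
print(adapter(t_emb, aperture).shape)          # torch.Size([4, 256])
```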


Who is Afraid of Minimal Revision?

Baccini, Edoardo, Christoff, Zoé, Gierasimczuk, Nina, Verbrugge, Rineke

arXiv.org Artificial Intelligence

The principle of minimal change in belief revision theory requires that, when accepting new information, one keeps one's belief state as close to the initial belief state as possible. This is precisely what the method known as minimal revision does. However, unlike less conservative belief revision methods, minimal revision falls short in learning power: It cannot learn everything that can be learned by other learning methods. We begin by showing that, despite this limitation, minimal revision is still a successful learning method in a wide range of situations. Firstly, it can learn any problem that is finitely identifiable. Secondly, it can learn with positive and negative data, as long as one considers finitely many possibilities. We then characterize the prior plausibility assignments (over finitely many possibilities) that enable one to learn via minimal revision, and do the same for conditioning and lexicographic upgrade. Finally, we show that not all of our results still hold when learning from possibly erroneous information.
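To make the contrast between the three methods concrete, here is a small Python sketch under one standard encoding (a belief state as a strict plausibility order over finitely many worlds, most plausible first); the two-proposition worlds below are a made-up example, not from the paper:

```python
def minimal_revision(order, phi):
    """Conservative upgrade: only the most plausible phi-world moves to
    the top; every other world keeps its relative position."""
    best = next(w for w in order if w in phi)
    return [best] + [w for w in order if w != best]

def lexicographic_upgrade(order, phi):
    """All phi-worlds become more plausible than all non-phi-worlds,
    preserving the old order within each group."""
    return [w for w in order if w in phi] + [w for w in order if w not in phi]

def conditioning(order, phi):
    """Hard update: non-phi-worlds are eliminated outright."""
    return [w for w in order if w in phi]

# Worlds over propositions p, q; a hypothetical prior plausibility order.
prior = ["pq", "p~q", "~pq", "~p~q"]
phi = {"~pq", "~p~q"}  # new information: not-p

print(minimal_revision(prior, phi))       # ['~pq', 'pq', 'p~q', '~p~q']
print(lexicographic_upgrade(prior, phi))  # ['~pq', '~p~q', 'pq', 'p~q']
print(conditioning(prior, phi))           # ['~pq', '~p~q']
```

Note how minimal revision perturbs the prior order as little as possible, which is exactly the conservatism that the abstract identifies as the source of its weaker learning power.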


…discussion or experiments with small problems to illustrate the limitations of the approach, which we agree is a good…

Neural Information Processing Systems

We thank the reviewers for their comments. We will address all style suggestions and minor points. We encoded the T-maze domain in RDDL and show the results in the table. The RDDL-to-pomdpx translator failed at this problem size. The results show that SNAP can solve the T-maze problems. However, we can also illustrate the tradeoff with two other simple domains.