Orthogonium: A Unified, Efficient Library of Orthogonal and 1-Lipschitz Building Blocks

Boissin, Thibaut, Mamalet, Franck, Lafargue, Valentin, Serrurier, Mathieu

arXiv.org Machine Learning

Orthogonal and 1-Lipschitz neural network layers are essential building blocks in robust deep learning architectures, crucial for certified adversarial robustness, stable generative models, and reliable recurrent networks. Despite significant advancements, existing implementations remain fragmented, limited, and computationally demanding. To address these issues, we introduce Orthogonium, a unified, efficient, and comprehensive PyTorch library providing orthogonal and 1-Lipschitz layers. Orthogonium provides access to standard convolution features, including support for strides, dilation, grouping, and transposed convolutions, while maintaining strict mathematical guarantees. Its optimized implementations reduce overhead on large-scale benchmarks such as ImageNet. Moreover, rigorous testing within the library has uncovered critical errors in existing implementations, emphasizing the importance of standardized and reliable tools. Orthogonium thus significantly lowers adoption barriers, enabling scalable experimentation and integration across diverse applications requiring orthogonality and robust Lipschitz constraints. Orthogonium is available at https://github.com/deel-ai/orthogonium.
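The core guarantee such layers provide can be illustrated independently of the library. The following sketch (plain NumPy, not the Orthogonium API) builds an orthogonal weight matrix via QR decomposition and checks that the resulting linear map preserves distances, and is therefore 1-Lipschitz:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an orthogonal matrix Q from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))

# Orthogonality: Q^T Q = I (up to floating-point error).
assert np.allclose(Q.T @ Q, np.eye(64), atol=1e-10)

# Norm preservation implies 1-Lipschitz: ||Qx - Qy|| = ||x - y||.
x, y = rng.standard_normal(64), rng.standard_normal(64)
lhs = np.linalg.norm(Q @ x - Q @ y)
rhs = np.linalg.norm(x - y)
assert np.isclose(lhs, rhs)
```

For trained layers the difficulty, which libraries like this one address, is maintaining this property under gradient-based optimization and for structured operators such as strided or dilated convolutions, not just dense matrices.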


The Station: An Open-World Environment for AI-Driven Discovery

Chung, Stephen, Du, Wenyu

arXiv.org Artificial Intelligence

We introduce the STATION, an open-world multi-agent environment for autonomous scientific discovery. The Station simulates a complete scientific ecosystem, where agents can engage in long scientific journeys that include reading papers from peers, formulating hypotheses, collaborating with peers, submitting experiments, and publishing results. Importantly, there is no centralized system coordinating their activities. Utilizing their long context, agents are free to choose their own actions and develop their own narratives within the Station. Experiments demonstrate that AI agents in the Station achieve new state-of-the-art performance on a wide range of benchmarks, spanning mathematics, computational biology, and machine learning, notably surpassing AlphaEvolve in circle packing. A rich tapestry of unscripted narratives emerges, such as agents collaborating and analyzing other works rather than pursuing myopic optimization. From these emergent narratives, novel methods arise organically, such as a new density-adaptive algorithm for scRNA-seq batch integration that borrows concepts from another domain. The Station marks a first step towards autonomous scientific discovery driven by emergent behavior in an open-world environment, representing a new paradigm that moves beyond rigid pipelines.




Spatially Sparse Inference for Generative Image Editing, Supplementary Material: A. Additional Implementation Details

Neural Information Processing Systems

We omit the element-wise operations for simplicity and follow the notations in Section 3. As mentioned in Section 3.2, we fuse the operations; note that the pre-computation is cheap and only needs to be done once for each resolution. We provide more details on how we build the synthetic editing dataset. Figure 7(a) shows some examples of our synthetic editing on LSUN Church. The detailed distribution is shown in Figure 8a.




A Background on unbalanced optimal transport

Neural Information Processing Systems

The conic formulation detailed in Section A.3 is obtained by performing the optimal transport on (x, 0). The proofs are detailed in Liero et al. [2015]. We first start with the existence of minimizers stated in Proposition 1; thus it suffices to have relative compactness of the set of minimizers. There exists a Borel measurable bijection between the measures' supports. It is the same proof as in the main body. We present in this section the proofs of the properties mentioned in Section 2, and we refer to Section 2 for the statements. In this section we frequently use the notion of marginal for measures. We present in this section concepts and properties which are necessary for the proof of Theorem 1.


Vision GNN: An Image is Worth Graph of Nodes

Han, Kai, Wang, Yunhe

Neural Information Processing Systems

Given an FFN module, the diversity γ(FFN(X)) of its output features satisfies γ(FFN(X)) ≤ λγ(X), (2) where λ is the Lipschitz constant of FFN with respect to the p-norm for p ∈ [1, ∞].
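The inequality rests on the FFN's Lipschitz constant. The paper's diversity measure γ is not reproduced here, but the underlying Lipschitz bound can be checked numerically: for a two-layer ReLU FFN, the product of the spectral norms of the weight matrices upper-bounds λ in the 2-norm, since ReLU is itself 1-Lipschitz. A sketch (names `W1`, `W2`, `ffn` are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer FFN: x -> W2 @ relu(W1 @ x). ReLU is 1-Lipschitz, so
# ||W2||_2 * ||W1||_2 (product of spectral norms) bounds the FFN's
# Lipschitz constant in the 2-norm.
W1 = rng.standard_normal((32, 16))
W2 = rng.standard_normal((16, 32))

def ffn(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

lam = np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2)

# Check ||FFN(x) - FFN(y)|| <= lam * ||x - y|| on random pairs.
for _ in range(1000):
    x, y = rng.standard_normal(16), rng.standard_normal(16)
    assert np.linalg.norm(ffn(x) - ffn(y)) <= lam * np.linalg.norm(x - y) + 1e-9
```

This product bound can be loose; the tight constant for a given FFN is generally hard to compute exactly, which is why such layer-wise bounds are the standard tool in this kind of analysis.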