- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (4 more...)
- Research Report > Experimental Study (0.93)
- Research Report > Promising Solution (0.67)
- Health & Medicine (0.68)
- Information Technology (0.67)
- Education (0.46)
- Banking & Finance (0.46)
Algorithm 1: Haar wavelet transformation pseudocode, PyTorch-like
D, demonstrating that our FreGAN is frequency-aware and can indeed produce realistic frequency signals. Broader impact. For HFD, we aggregate the high-frequency components by adding LH, HL, and HH, and then employ additional downsampling and convolutional layers to compute the output scores. They are ideal for verifying the quality of the generation in low-shot scenarios. The BrecaHAD dataset contains 162 images for breast cancer histopathological annotation and diagnosis. We evaluate the performance of our FreGAN and baseline models on more datasets with limited data amounts in Tab. 1, namely Medici, Temple, Bridge, and Wuzhen, all of which contain only 100 training images.
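Since Algorithm 1 is only referenced here, the following NumPy sketch illustrates what a single-level 2D Haar wavelet transform and an HFD-style aggregation of the LH, HL, HH subbands could look like; the function names and scaling convention are illustrative assumptions, not the paper's code:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar wavelet transform of an H x W array
    (H, W even). Returns the four subbands LL, LH, HL, HH, each
    of shape (H//2, W//2), using a 1/2 normalization."""
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-low: local average (scaled)
    lh = (a - b + c - d) / 2.0  # horizontal high-frequency detail
    hl = (a + b - c - d) / 2.0  # vertical high-frequency detail
    hh = (a - b - c + d) / 2.0  # diagonal high-frequency detail
    return ll, lh, hl, hh

def aggregate_high_freq(lh, hl, hh):
    """Sum the three high-frequency subbands, as described for HFD."""
    return lh + hl + hh
```

For a constant image the three detail subbands are exactly zero, so only LL carries energy; this is the sanity check usually applied to a Haar implementation.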
- Asia > China (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
The left panel shows the energy profile for a rotation around an O-C-C-C dihedral angle. In the right panel of Figure 4, we show energy predictions along a minimum energy path of an intramolecular hydrogen transfer reaction.
A.2.2 3BPA Dataset
The 3BPA dataset contains DFT train/test splits of a flexible drug-like organic molecule, sampled from molecular dynamics trajectories at different temperatures [33]. The first step of the algorithm is to contract the generalized Clebsch-Gordan coefficients with the weights of the product basis. Then, the last dimension of cν is contracted with the Ai-features' last dimension, resulting in the a-tensor with correlation order ν − 1.
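The two contraction steps described above can be sketched with einsum. The shapes below (toy path, basis, and atom counts) and the variable names are illustrative assumptions, not the paper's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes (assumptions for illustration only):
# C : generalized Clebsch-Gordan coefficients, (n_paths, n_basis)
# W : product-basis weights, (n_basis,)
# A : per-atom A_i-features, (n_atoms, n_paths)
C = rng.standard_normal((5, 7))
W = rng.standard_normal(7)
A = rng.standard_normal((3, 5))

# Step 1: contract the Clebsch-Gordan coefficients with the
# product-basis weights.
c_nu = np.einsum("pb,b->p", C, W)        # shape (n_paths,)

# Step 2: contract the last dimension of c_nu with the last
# dimension of the A_i-features, lowering the correlation order.
a_tensor = np.einsum("np,p->n", A, c_nu)  # one value per atom
```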
A Method
As computing the inverse of the second-order derivatives is the most computation-intensive operation, we focus on it. In Section 3.1, we use the least-squares trick to compute it; we can also leverage the Neumann series to compute the matrix inverse.
B.1 Proof of the Approximation by Implicit Gradients
Here, we provide the proof for J.
B.2 Proof of Theorem 3.1
Before we prove our main theorem, we prove several essential lemmas below. Using Assumptions 3.4 and 3.5 directly leads to r. By Assumption 3.4, we have r. By Lemma B.1 and Lemma B.2, we have r. If Assumptions 3.4 and 3.5 hold, then the linear model we use is a matrix that maps the input data into a vector. The LeNet model is a convolutional neural network with 4 convolutional layers and 1 fully connected layer.
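The Neumann-series trick mentioned above can be sketched in a few lines of NumPy. This is a generic illustration of the identity A⁻¹ = Σₖ (I − A)ᵏ, which holds when the spectral radius of I − A is below 1, not the paper's implementation:

```python
import numpy as np

def neumann_inverse(A, num_terms=50):
    """Approximate A^{-1} with the truncated Neumann series
    sum_{k=0}^{K} (I - A)^k. Converges when the spectral radius
    of (I - A) is strictly less than 1; each extra term costs
    one matrix-matrix product instead of a full inversion."""
    n = A.shape[0]
    I = np.eye(n)
    term = I.copy()    # current power (I - A)^k
    approx = I.copy()  # running partial sum
    for _ in range(num_terms):
        term = term @ (I - A)
        approx += term
    return approx
```

In practice a matrix is often pre-scaled so that the convergence condition holds; the truncation depth then trades accuracy against compute.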
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
Li, Xiaoya, Sun, Xiaofei, Wang, Albert, Li, Jiwei, Shum, Chris
The exponential growth in demand for GPU computing resources has created an urgent need for automated CUDA optimization strategies. While recent advances in LLMs show promise for code generation, current SOTA models achieve low success rates in improving CUDA speed. In this paper, we introduce CUDA-L1, an automated reinforcement learning framework for CUDA optimization that employs a novel contrastive RL algorithm. CUDA-L1 achieves significant performance improvements on the CUDA optimization task: trained on A100, it delivers an average speedup of x3.12 with a median speedup of x1.42 against default baselines across all 250 CUDA kernels of KernelBench, with peak speedups reaching x120. In addition to the default baseline provided by KernelBench, CUDA-L1 demonstrates a x2.77 speedup over Torch Compile, x2.88 over Torch Compile with reduce-overhead mode, x2.81 over CUDA Graph implementations, and, remarkably, x7.72 over cuDNN libraries. Furthermore, the model also demonstrates portability across different GPU architectures. Beyond these benchmark results, CUDA-L1 demonstrates several notable properties: it 1) discovers a variety of CUDA optimization techniques and learns to combine them strategically to achieve optimal performance; 2) uncovers fundamental principles of CUDA optimization, such as the multiplicative nature of optimizations; and 3) identifies non-obvious performance bottlenecks and rejects seemingly beneficial optimizations that actually harm performance. These capabilities demonstrate that RL can transform an initially poor-performing LLM into an effective CUDA optimizer through speedup-based reward signals alone, without human expertise or domain knowledge. This paradigm opens possibilities for the automated optimization of CUDA operations, and holds promise to substantially improve GPU efficiency and alleviate the rising pressure on GPU computing resources.
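The "multiplicative nature of optimizations" mentioned in the abstract can be illustrated with a toy helper; the numbers below are hypothetical examples, not results from the paper:

```python
def combined_speedup(individual_speedups):
    """Toy illustration of multiplicative composition: if independent
    optimizations each cut runtime by their own factor, the overall
    speedup is approximately the product of the individual speedups."""
    total = 1.0
    for s in individual_speedups:
        total *= s
    return total

# e.g. a hypothetical x1.8 memory-coalescing gain combined with a
# x1.5 loop-unrolling gain yields roughly a x2.7 overall speedup
```

This is why a handful of modest, independent kernel optimizations can compound into a large end-to-end speedup, while a single regression can drag the whole product down.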
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Fidel-TS: A High-Fidelity Benchmark for Multimodal Time Series Forecasting
Xu, Zhijian, Cai, Wanxu, Dai, Xilin, Deng, Zhaorong, Xu, Qiang
The evaluation of time series forecasting models is hindered by a critical lack of high-quality benchmarks, leading to a potential illusion of progress. Existing datasets suffer from issues ranging from pre-training data contamination in the age of LLMs to the causal and description leakage prevalent in early multimodal designs. To address this, we formalize the core principles of high-fidelity benchmarking, focusing on data sourcing integrity, strict causal soundness, and structural clarity. We introduce Fidel-TS, a new large-scale benchmark built from the ground up on these principles by sourcing data from live APIs. Our extensive experiments validate this approach by exposing the critical biases and design limitations of prior benchmarks. Furthermore, we conclusively demonstrate that the causal relevance of textual information is the key factor in unlocking genuine performance gains in multimodal forecasting.
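Strict causal soundness, as invoked above, can be checked mechanically: every textual covariate must predate the forecast window. The helper below is a hypothetical sketch (not Fidel-TS code) of such a check:

```python
from datetime import datetime

def is_causally_sound(text_timestamps, forecast_start):
    """Return True only if every textual input is timestamped strictly
    before the forecast window begins, so no description of the target
    period can leak into the model's inputs."""
    return all(t < forecast_start for t in text_timestamps)
```

A benchmark enforcing this invariant at construction time rules out the description leakage that inflates multimodal forecasting scores.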
- North America > Canada > Alberta > Census Division No. 6 > Calgary Metropolitan Region > Calgary (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia > China > Hong Kong (0.04)
- (4 more...)
- Energy > Power Industry (1.00)
- Energy > Renewable > Solar (0.46)