Appendix for "Residual Alignment: Uncovering the Mechanisms of Residual Networks"

Neural Information Processing Systems

We start by motivating the unconstrained-Jacobians problem introduced in the main text, then proceed by contradiction. [Figures: fully-connected ResNet34 (Type 1 model) trained on MNIST and FashionMNIST.]




Research on Brain Tumor Classification Method Based on Improved ResNet34 Network

Li, Yufeng, Zhao, Wenchao, Dang, Bo, Wang, Weimin

arXiv.org Artificial Intelligence

Previously, image interpretation in radiology relied heavily on manual methods. However, manual classification of brain tumor medical images is time-consuming and labor-intensive, and even shallow convolutional neural network models do not achieve ideal accuracy. To improve the efficiency and accuracy of brain tumor image classification, this paper proposes a classification model based on an improved ResNet34 network. The model uses the ResNet34 residual network as the backbone and incorporates multi-scale feature extraction: a multi-scale input module serves as the first layer of the network, and an Inception v2 module serves as the residual downsampling layer. Furthermore, a channel attention mechanism assigns different weights to different channels of the image from a channel-domain perspective, emphasizing the more important feature information. Five-fold cross-validation shows that the average classification accuracy of the improved model is approximately 98.8%, which is 1% higher than ResNet34 while using only 80% of the original model's parameters. The improved network therefore not only improves accuracy but also reduces redundancy, achieving better classification with fewer parameters.
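The channel attention mechanism described above can be sketched in the squeeze-and-excitation style: pool each channel to a scalar, pass the pooled vector through a small bottleneck network, and rescale the channels by the resulting sigmoid weights. The sketch below is a minimal pure-Python illustration, not the paper's implementation; the weight matrices `w1` and `w2` are hypothetical stand-ins for the learned excitation layers.

```python
import math

def channel_attention(feature_maps, w1, w2):
    """SE-style channel attention sketch.

    feature_maps: list of channels, each a flat list of activations.
    w1, w2: hypothetical learned weight matrices (lists of rows) for the
    two-layer excitation bottleneck.
    """
    # Squeeze: global average pool each channel to a single scalar.
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid over the channel descriptors.
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    weights = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    # Scale: reweight every activation in a channel by its attention weight.
    return [[v * w for v in ch] for ch, w in zip(feature_maps, weights)]
```

Channels whose pooled descriptor produces a large logit are passed through nearly unchanged, while low-scoring channels are suppressed, which is how the mechanism emphasizes the more informative feature maps.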


CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation (Supplementary Material)

Neural Information Processing Systems

The supplementary material consists of the following: results reported in Tables 2 and 3, and a discussion of limitations and societal impacts. The network architecture is similar to [2]. We perform all experiments on an Nvidia Titan X GPU and use the data splits released by [1] for experimentation.



DYNAMIX: RL-based Adaptive Batch Size Optimization in Distributed Machine Learning Systems

Dai, Yuanjun, He, Keqiang, Wang, An

arXiv.org Artificial Intelligence

Abstract--Existing batch size selection approaches in distributed machine learning rely on static allocation or simplistic heuristics that fail to adapt to heterogeneous, dynamic computing environments. We present DYNAMIX, a reinforcement learning framework that formulates batch size optimization as a sequential decision-making problem using Proximal Policy Optimization (PPO). Our approach employs a multi-dimensional state representation encompassing network-level metrics, system-level resource utilization, and training statistical-efficiency indicators to enable informed decision-making across diverse computational resources, and it eliminates the need for explicit system modeling while integrating seamlessly with existing distributed training frameworks. Through evaluations across diverse workloads, hardware configurations, and network conditions, DYNAMIX achieves up to a 6.3% improvement in final model accuracy and a 46% reduction in total training time. Our scalability experiments demonstrate that DYNAMIX maintains the best performance as cluster size increases to 32 nodes, while policy-transfer experiments show that learned policies generalize effectively across related model architectures. Distributed machine learning (DML) has emerged as the predominant paradigm for training increasingly complex models on expansive datasets. As model architectures grow in parameter count and computational demands, practitioners increasingly rely on distributed training across multiple computational nodes to maintain feasible training timelines. Within this paradigm, batch size selection is a critical hyperparameter that significantly influences both training efficiency and model convergence properties. While larger batch sizes generally improve hardware utilization through increased parallelism, they may adversely affect statistical efficiency, potentially degrading convergence rates and generalization performance [19], [32].
The optimization complexity intensifies substantially in heterogeneous distributed environments, characterized by variance in computational capabilities, network characteristics, and hardware specifications across training nodes. These heterogeneous configurations arise from several practical considerations: cost optimization through spot instance utilization [12], consolidation of diverse hardware generations within organizational clusters [13], and workload deployment in multi-tenant infrastructure [15]. Under such conditions, the conventional approach of uniform batch size allocation frequently leads to suboptimal resource utilization, as demonstrated by Jia et al. [16], who observed significant throughput degradation due to synchronization barriers in heterogeneous clusters. Existing approaches to batch size optimization in distributed environments fall into several distinct categories, each exhibiting particular limitations.
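The sequential decision-making formulation above can be sketched as a state-action-reward interface: a state vector built from network, system, and statistical signals; a discrete action space of candidate batch sizes; and a reward trading throughput against convergence. This is a minimal illustration under assumed names, not DYNAMIX itself; the field names, the batch-size list, and the reward weighting `alpha` are all hypothetical, and the PPO-trained policy is stubbed as any callable returning an action index.

```python
from dataclasses import dataclass

# Hypothetical discrete action space of candidate batch sizes.
BATCH_SIZES = [32, 64, 128, 256, 512]

@dataclass
class TrainingState:
    """Multi-dimensional state: network-, system-, and statistics-level signals."""
    network_bandwidth: float   # network-level metric (normalized)
    gpu_utilization: float     # system-level resource utilization
    grad_noise_scale: float    # statistical-efficiency indicator

def reward(throughput, loss_delta, alpha=0.5):
    """Hypothetical reward: reward throughput, penalize worsening loss."""
    return alpha * throughput + (1.0 - alpha) * (-loss_delta)

def select_batch_size(state, policy):
    """Map the observed state to a batch size via a policy.

    In the paper the policy is PPO-trained; here any callable that maps
    the state vector to an action index works for the sketch.
    """
    action = policy([state.network_bandwidth,
                     state.gpu_utilization,
                     state.grad_noise_scale])
    return BATCH_SIZES[action % len(BATCH_SIZES)]
```

Framing the problem this way is what removes the need for explicit system modeling: the policy only ever sees observed metrics and rewards, never an analytical performance model of the cluster.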