Feed-Forward Neural Networks as a Mixed-Integer Program
Navid Aftabi, Nima Moradi, Fatemeh Mahroo
arXiv.org Artificial Intelligence
Deep neural networks (DNNs) are widely studied across applications. A DNN consists of layers of neurons that compute affine combinations of their inputs, apply nonlinear operations, and produce the corresponding activations. The rectified linear unit (ReLU) is a typical nonlinear operator, outputting the maximum of its input and zero. Because ReLU, like related operators such as max pooling (which returns the maximum over several inputs), is piecewise linear, a DNN with fixed parameters can be modeled as a mixed-integer program (MIP). This formulation, with continuous variables representing unit outputs and binary variables indicating whether each ReLU is active, finds applications across diverse domains. This study explores the formulation of trained ReLU neurons as MIPs and applies MIP models to the training of neural networks (NNs). Specifically, it investigates interactions between MIP techniques and various NN architectures, including binary DNNs (employing step activation functions) and binarized DNNs (with weights and activations restricted to $\{-1,0,+1\}$). The proposed approaches are trained and evaluated through experiments on handwritten digit classification. A comparative study assesses the performance of the trained ReLU NNs, shedding light on the effectiveness of MIP formulations in improving NN training.
Feb-8-2024
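As a concrete illustration of the encoding described in the abstract, the sketch below models a single trained ReLU neuron $y = \max(0, w^\top x + b)$ with a continuous output variable, one binary activation indicator, and standard big-M constraints. This is a minimal sketch, not the paper's exact formulation: the solver (PuLP with CBC), the weights, the input bounds, and the big-M value are all illustrative assumptions.

```python
# Minimal sketch: big-M MIP encoding of one trained ReLU neuron
# y = max(0, w^T x + b). PuLP/CBC, the parameters w and b, the input
# bounds, and M are illustrative assumptions, not taken from the paper.
import pulp

w, b = [1.5, -2.0], 0.5   # hypothetical fixed (trained) weights and bias
M = 10.0                  # any valid upper bound on |w^T x + b| over the box

prob = pulp.LpProblem("relu_neuron", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(len(w))]
y = pulp.LpVariable("y", lowBound=0)     # continuous unit output
z = pulp.LpVariable("z", cat="Binary")   # 1 iff the neuron is active

pre = pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b  # affine pre-activation

prob += y >= pre                # y cannot undercut the pre-activation
prob += y <= pre + M * (1 - z)  # ties y to pre when z = 1
prob += y <= M * z              # forces y = 0 when z = 0

prob += y  # toy objective: find the input maximizing the neuron's output
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y), pulp.value(z))  # expected: 4.0 1.0
```

Stacking such constraints layer by layer yields the network-as-MIP model the abstract alludes to; replacing the single M with tighter, neuron-specific bounds is known to help solvers considerably.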