Empirical Bounds on Linear Regions of Deep Rectifier Networks

Thiago Serra, Srikumar Ramalingam

arXiv.org Artificial Intelligence 

One form of characterizing the expressiveness of a piecewise linear neural network is by the number of linear regions, or pieces, of the function modeled. We have observed substantial progress in this topic through lower and upper bounds on the maximum number of linear regions and a counting procedure. However, these bounds only account for the dimensions of the network, and the exact counting may take a prohibitive amount of time, making it infeasible to benchmark the expressiveness of networks. In this work, we approximate the number of linear regions with an algorithm for probabilistic lower bounds. In addition, we present a tighter upper bound that leverages network coefficients. We test both on trained networks. The algorithm for probabilistic lower bounds is several orders of magnitude faster than exact counting, and the values reach similar orders of magnitude, hence making our approach a viable method to compare the expressiveness of such networks. The refined upper bound is particularly stronger on networks with narrow layers.

Neural networks with piecewise linear activations have become increasingly common over the past decade, in particular since Nair & Hinton (2010) and Glorot et al. (2011). The simplest and most commonly used among such forms of activation is the Rectified Linear Unit (ReLU), which outputs the maximum of 0 and its input (Hahnloser et al., 2000; LeCun et al., 2015). In the functions modeled by these networks, each part of the domain on which the network behaves as an affine function can be associated with a particular set of units having positive outputs.
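The correspondence between linear regions and sets of positively activated units suggests a simple way to obtain a sampling-based lower bound on the number of regions: every input that yields a new activation pattern must lie in a region not seen before. The sketch below illustrates this idea on a hypothetical toy network with random weights; it is not the paper's exact algorithm for probabilistic lower bounds, only a minimal Monte-Carlo illustration of why counting distinct activation patterns bounds the region count from below.

```python
# Minimal sketch (not the paper's exact procedure): estimate a lower bound on the
# number of linear regions of a small ReLU network by sampling inputs and counting
# distinct activation patterns. Each linear region induces exactly one pattern of
# units with positive pre-activation, and distinct observed patterns come from
# disjoint regions, so the number of distinct patterns seen is a valid lower bound.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy architecture 2 -> 8 -> 8 -> 1 with random weights (assumption).
layer_sizes = [2, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(m) for m in layer_sizes[1:]]

def activation_pattern(x):
    """Return the tuple of ReLU on/off indicators over all hidden units at input x."""
    pattern = []
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):  # hidden layers only
        z = W @ h + b
        pattern.extend((z > 0).astype(int).tolist())
        h = np.maximum(z, 0.0)  # ReLU activation
    return tuple(pattern)

# Sample inputs from a box and count distinct patterns: a probabilistic,
# sampling-based lower bound on the number of linear regions within that box.
samples = rng.uniform(-1.0, 1.0, size=(20000, layer_sizes[0]))
patterns = {activation_pattern(x) for x in samples}
print(f"Distinct activation patterns observed (lower bound on regions): {len(patterns)}")
```

As with any Monte-Carlo estimate, the bound only improves with more samples and never exceeds the true count, which is why such sampling can be orders of magnitude faster than exact region counting while still producing values useful for comparing networks.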
