Supplemental: Training Neural Networks is NP-Hard in Fixed Dimension
A Detailed Proof of NP-Hardness for Two Dimensions

Neural Information Processing Systems 

In this section we provide the omitted details to prove Theorem 1. We start by describing the precise positions of the data points in the selection gadget. For this, we need a small ε > 0, to be chosen later in a global context. With the precise description of the selection gadget at hand, we can proceed to proving Lemma 4.

Proof of Lemma 4. First, we focus on the three vertical lines h. For the following argument, compare Figure 5. Observe that f restricted to any one of the three lines is a one-dimensional, continuous, piecewise linear function with at most four breakpoints. Note that the exact locations of these breakpoints and the slopes of the sloped segments are not determined by the nine data points considered so far.
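To make the object in this step concrete, the following is a minimal sketch (not from the paper) of such a one-dimensional, continuous, piecewise linear function given by its breakpoints, as one obtains by restricting the network output f to a vertical line. The breakpoint positions and values below are hypothetical, chosen only for illustration; for simplicity the sketch treats the function as constant outside the breakpoint range.

```python
def piecewise_linear(breakpoints, x):
    """Evaluate the continuous piecewise linear function interpolating
    the sorted (position, value) breakpoints; constant outside the range
    (a simplifying assumption for this illustration)."""
    ts = [t for t, _ in breakpoints]
    vs = [v for _, v in breakpoints]
    if x <= ts[0]:
        return vs[0]
    if x >= ts[-1]:
        return vs[-1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= x <= t1:
            # linear interpolation on the sloped segment between t0 and t1
            return v0 + (v1 - v0) * (x - t0) / (t1 - t0)

# Hypothetical example with four breakpoints (the maximum in the lemma):
bps = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0), (3.0, 0.0)]
print(piecewise_linear(bps, 0.5))  # on a sloped segment -> 0.5
print(piecewise_linear(bps, 1.5))  # on the flat middle segment -> 1.0
```

The point made in the proof is exactly that the nine data points pin down the values of such a function at finitely many positions, but not the breakpoint locations or the slopes between them.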