id3
9edcc1391c208ba0b503fe9a22574251-Supplemental.pdf
Let $m \ge 2$ and $a, b \in A$ such that $a$ is ordered before $b$ in tie-breaking. BR steps will therefore alternate between agents represented in $\mathrm{Id}(a)(P)$ and agents represented in $\mathrm{Id}(b)(P)$. Agents from the former set best-respond with rankings whose top preference is $a$, changing the winner to $a$, whereas agents from the latter set best-respond with rankings whose top preference is $b$, changing the winner back to $b$. Inverse reasoning holds if $a$ and $b$ differ by one in initial plurality score and $s_P(a) = s_P(b) - 1$, implying $r(P^0) = b$. Without loss of generality let $W = \{1, 2\}$ and suppose $u_2 > u_m$, since the case where $u_2 = u_m$ is covered in [Brânzei et al., 2013]. We believe this proof is challenging due to the dependence among agents' rankings once we condition on profiles that satisfy two-way ties (i.e., profiles in which exactly two candidates are tied for the top plurality score).
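To make the alternation concrete, here is a toy simulation of plurality best-response steps under lexicographic tie-breaking. This is an illustrative sketch only: the `plurality_winner` helper and the specific six-voter profile are my own constructions, not taken from the paper.

```python
from collections import Counter

def plurality_winner(votes, tiebreak):
    # Highest plurality score wins; ties broken by position in `tiebreak`.
    scores = Counter(votes)
    best = max(scores.values())
    return next(c for c in tiebreak if scores.get(c, 0) == best)

tiebreak = ["a", "b", "c"]                 # a is ordered before b

# Toy profile: a, b, c each receive two votes, so a wins by tie-breaking.
votes = ["a", "a", "b", "b", "c", "c"]
print(plurality_winner(votes, tiebreak))   # -> a

# A b-supporter currently voting "c" best-responds: b now beats a outright.
votes[4] = "b"
print(plurality_winner(votes, tiebreak))   # -> b

# An a-supporter currently voting "c" best-responds: restoring the tie
# hands the win back to a, and the winner has alternated once more.
votes[5] = "a"
print(plurality_winner(votes, tiebreak))   # -> a
```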
On the Optimality of Trees Generated by ID3
Alon Brutzkus, Amit Daniely, Eran Malach
Since its inception in the 1980s, ID3 has become one of the most successful and widely used algorithms for learning decision trees. However, its theoretical properties remain poorly understood. In this work, we analyze the heuristic of growing a decision tree with ID3 for a limited number of iterations $t$, assuming that nodes are split with exact information-gain and probability computations. In several settings, we provide theoretical and empirical evidence that the TopDown variant of ID3, introduced by Kearns and Mansour (1996), produces trees with optimal or near-optimal test error among all trees with $t$ internal nodes. We prove optimality in the case of learning conjunctions under product distributions and learning read-once DNFs with 2 terms under the uniform distribution. Using efficient dynamic programming algorithms, we empirically show that TopDown generates trees that are near-optimal ($\sim 1\%$ difference from optimal test error) in a large number of settings for learning read-once DNFs under product distributions.
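For intuition, the following is a minimal sketch of the greedy scheme the abstract describes: at each of $t$ steps, split the (leaf, feature) pair with the largest sample-weighted information gain, i.e., ID3's entropy criterion plugged into Kearns and Mansour's TopDown template. Binary features, empirical (rather than exact) gain estimates, and all function names are my own simplifications, not the paper's implementation.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (base 2) of a list of class labels.
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_gain(rows, f):
    # Information gain from splitting `rows` (pairs (x, y), x a 0/1 tuple) on feature f.
    left = [y for x, y in rows if x[f] == 0]
    right = [y for x, y in rows if x[f] == 1]
    total = [y for _, y in rows]
    n = len(rows)
    return entropy(total) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

def topdown(rows, num_features, t):
    # Grow a tree with at most t internal nodes. Each leaf is represented by
    # the sample subset reaching it; the chosen split maximizes the
    # sample-weighted information gain over all (leaf, feature) pairs.
    n = len(rows)
    leaves = [rows]
    for _ in range(t):
        candidates = [((len(leaf) / n) * split_gain(leaf, f), i, f)
                      for i, leaf in enumerate(leaves) if leaf
                      for f in range(num_features)]
        gain, i, f = max(candidates)
        if gain <= 0:
            break                      # no split reduces the entropy criterion
        leaf = leaves.pop(i)
        leaves.append([(x, y) for x, y in leaf if x[f] == 0])
        leaves.append([(x, y) for x, y in leaf if x[f] == 1])
    # Prediction would label each leaf by its majority class
    # (tree-structure bookkeeping omitted for brevity).
    return leaves
```

On a small dataset of 0/1 feature vectors, `topdown(rows, d, t)` returns the leaf subsets of the greedily grown tree; the paper's dynamic-programming baselines are then what one would compare these greedy trees against.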