Generalization Abilities of Cascade Network Architecture

Littmann, E., Ritter, H.

Neural Information Processing Systems 

In [5], a new incremental cascade network architecture was presented. This paper discusses the properties of such cascade networks and investigates their generalization abilities under the particular constraint of small data sets. The evaluation is done for cascade networks consisting of local linear maps, using the Mackey-Glass time series prediction task as a benchmark. Our results indicate that, to bring the potential of large networks to bear on the problem of extracting information from small data sets without running the risk of overfitting, deeply cascaded network architectures are more favorable than shallow broad architectures containing the same number of nodes.

1 Introduction

For many real-world applications, a major constraint on successful learning from examples is the limited number of examples available. Thus, methods are required that can learn from small data sets. This constraint makes the problem of generalization particularly hard.
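The Mackey-Glass benchmark mentioned above is a chaotic delay-differential equation, dx/dt = βx(t−τ)/(1 + x(t−τ)^n) − γx(t). The following is a minimal sketch of generating such a series by Euler integration; the parameter values (τ = 17, β = 0.2, γ = 0.1, n = 10) are the conventional choices in the benchmark literature and are an assumption here, since the abstract does not specify them.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass time series via simple Euler integration.

    Assumed conventional parameters: tau=17 (chaotic regime), beta=0.2,
    gamma=0.1, n=10. The first `delay` samples serve as constant history.
    """
    delay = int(tau / dt)
    x = np.full(n_steps + delay, x0)
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]  # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[delay:]

# Example: 1000 samples for a one-step-ahead prediction task
series = mackey_glass(1000)
```

A prediction data set is then formed by sliding a window over `series`, mapping past samples to a future value; with small window counts this reproduces the small-data-set regime the paper studies.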
