In this paper, we propose a nonlinearity generation method to speed up and stabilize the training of deep convolutional neural networks. The proposed method modifies a family of activation functions into nonlinearity generators (NGs). NGs make the activation functions initially linear and symmetric in their inputs, which lowers model capacity, and automatically introduce nonlinearity during training to enhance the capacity of the model. The proposed method can be considered an unusual form of regularization: the model parameters are obtained by training a relatively low-capacity model, which is relatively easy to optimize at the beginning, for only a few iterations, and these parameters are then reused to initialize a higher-capacity model. We derive upper and lower bounds on the variance of the weight variation, and show that the initial symmetric structure of NGs helps stabilize training. We evaluate the proposed method on different convolutional neural network architectures over two object recognition benchmark tasks (CIFAR-10 and CIFAR-100). Experimental results show that the proposed method allows us to (1) speed up the convergence of training, (2) use less careful weight initialization, (3) improve or at least maintain the performance of the model at negligible extra computational cost, and (4) easily train very deep models.
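The idea of an activation that starts linear and acquires nonlinearity during training can be illustrated with a minimal sketch. The class name and the specific PReLU-like form `f(x) = max(x, a*x)` are assumptions for illustration, not the paper's exact formulation: at `a = 1` the unit is the identity (linear in its input, low capacity), and as training moves `a` away from 1 a nonlinearity emerges.

```python
import numpy as np

class NonlinearityGenerator:
    """Hypothetical NG-style unit: f(x) = max(x, a * x).

    With a = 1 the unit is exactly the identity, so the network starts
    as an easy-to-optimize linear model; as training drives a below 1,
    the unit becomes a leaky-ReLU-style nonlinearity, raising capacity.
    """

    def __init__(self, a=1.0):
        self.a = a  # trainable in practice; initialized so the unit is linear

    def forward(self, x):
        return np.maximum(x, self.a * x)

x = np.array([-2.0, 0.0, 3.0])

ng = NonlinearityGenerator(a=1.0)
print(ng.forward(x))  # identity at initialization

ng.a = 0.1  # stand-in for the value learned during training
print(ng.forward(x))  # negative inputs are now attenuated: nonlinear unit
```

The parameters learned in the early, near-linear phase are kept, so the higher-capacity nonlinear model effectively inherits its initialization from the easier optimization problem.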
Emerging Semantic Web standards promise the automated discovery, composition, and invocation of Web Services. Unfortunately, this vision requires that services describe themselves with large amounts of handcrafted semantic metadata. We are investigating the use of machine learning techniques for semi-automatically classifying Web Services and their messages into ontologies. From such semantically enriched WSDL descriptions, it is straightforward to generate significant parts of a service's description in OWL-S or a similar language. In this paper, we first introduce an application for annotating Web Services that is currently under development.
One Sentence Summary: The meaning of concepts resides in relationships across encompassing systems that each provide a window on a shared reality. Abstract: A typical person can correctly recognize and name thousands of objects. However, it remains unclear what mechanism makes this feat possible. Concept induction requires the extraction and naming of concepts from noisy perceptual experience. For supervised approaches, as the number of concepts grows, so does the number of required training examples. Philosophers, psychologists, and computer scientists have long recognized that children can learn to label objects without being explicitly taught. In a series of computational experiments, we highlight how information in the environment can be used to build and align conceptual systems. Unlike supervised learning, the learning problem becomes easier the more concepts and systems there are to master. The key insight is that each concept has a unique signature within one conceptual system (e.g., images) that is recapitulated in other systems (e.g., text or audio). As predicted, children's early concepts form readily aligned systems.
Classification learning algorithms in general, and text classification methods in particular, tend to focus on features of individual training examples rather than on the relationships between the examples. However, in many situations a set of items contains more information than just the feature values of individual items. For example, taking into account the articles that are cited by or cite an article in question would increase our chances of correct classification. We propose to recognize and put to use generalized features (or set features), which describe a training example but depend on the dataset as a whole, with the goal of achieving better classification accuracy. Although the idea of generalized features is consistent with the objectives of relational learning (ILP), we feel that instead of using the computationally heavy and conceptually general ILP methods, there may be a benefit in approaches that exploit specific relations between texts, and in particular between emails.
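The contrast between per-item features and set features can be sketched in a few lines. The function name, the dictionary layout, and the "incoming references" feature are illustrative assumptions, not the paper's actual representation: the point is only that the added feature is computed over the whole dataset, so it cannot be derived from any single example in isolation.

```python
from collections import Counter

def add_generalized_feature(dataset):
    """Illustrative set feature: augment each item's own features with a
    count of how many OTHER items in the dataset reference it (citations
    between articles, or replies within an email thread).

    The value depends on the dataset as a whole, unlike ordinary
    per-example features such as document length.
    """
    ref_counts = Counter(ref for item in dataset for ref in item["refs"])
    return [
        {**item["features"], "in_refs": ref_counts[item["id"]]}
        for item in dataset
    ]

docs = [
    {"id": "a", "refs": ["b"], "features": {"len": 120}},
    {"id": "b", "refs": [],    "features": {"len": 80}},
    {"id": "c", "refs": ["b"], "features": {"len": 200}},
]

print(add_generalized_feature(docs))
# "b" gains in_refs = 2 -- information unavailable from "b"'s own text alone
```

A standard classifier can then be trained on the augmented feature vectors unchanged, which is what makes this cheaper than a fully general ILP formulation.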