
Absolute convergence and error thresholds in non-active adaptive sampling

Manuel Vilares Ferro, Victor M. Darriba Bilbao, Jesús Vilares Ferro

arXiv.org Artificial Intelligence

In this sense, the operating principle of adaptive sampling is simple: begin with an initial number of examples and then iteratively learn the model, evaluate it and acquire additional observations if necessary. Accordingly, two questions must be considered: determining the training data to be acquired at each cycle, and defining a halting condition to terminate the loop once the learner has achieved a certain degree of performance. These tasks make the formalization of scheduling and stopping criteria research issues in their own right (John and Langley, 1996). The former has been researched for decades in terms of fixed (John and Langley, 1996; Provost et al., 1999) or adaptive (Provost et al., 1999) sequencing, and is not our objective. As regards halting criteria, they are independent of the scheduling and mostly start from the hypothesis that learning curves are well-behaved, comprising an initial steeply sloping portion, a more gently sloping middle one and a final balanced zone (Meek et al., 2002). The purpose is then to identify the moment at which such a curve reaches a plateau, namely when adding more data instances no longer improves accuracy, although this condition rarely holds strictly: extra learning effort almost always yields modest gains. This justifies the interest in having a proximity condition, understood as a measure of the degree of convergence attained from a given iteration, rather than a mere stopping one. In short, this makes it possible to select the level of reliability in predicting a learner's performance, in terms of both accuracy and computational cost.
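The loop described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the schedule, thresholds and the hypothetical `train_and_score(n)` callback (train on the first `n` examples, return accuracy) are all illustrative assumptions.

```python
def adaptive_sampling(train_and_score, n_init=100, step=100, n_max=2000,
                      proximity=1e-3, window=3):
    """Grow the training set until the learning curve flattens.

    Rather than waiting for a strict plateau, this stops when the mean
    accuracy gain per cycle over the last `window` cycles drops below
    `proximity`, playing the role of the proximity condition above.
    All parameter values are illustrative defaults.
    """
    n = n_init
    history = [train_and_score(n)]          # accuracy after each cycle
    while n + step <= n_max:
        n += step                           # fixed (arithmetic) schedule
        history.append(train_and_score(n))
        if len(history) > window:
            gains = [history[i] - history[i - 1]
                     for i in range(-window, 0)]
            if sum(gains) / window < proximity:
                break                       # convergence attained
    return n, history
```

With a synthetic well-behaved learning curve such as `1 - n ** -0.5`, the loop halts once the per-cycle gains fall below the chosen proximity threshold, well before exhausting the data budget.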


Adaptive scheduling for adaptive sampling in POS taggers construction

Manuel Vilares Ferro, Victor M. Darriba Bilbao, Jesús Vilares Ferro

arXiv.org Artificial Intelligence

However, managing large amounts of information is an expensive, time-consuming and non-trivial activity, especially when expert knowledge is needed. Furthermore, having access to vast databases does not imply that ml algorithms must use them all; a subset is therefore preferred, provided it does not reduce the quality of the mined knowledge. Such a subset then supplies the same learning power at far less computational cost and allows the training process to be sped up, although its nature and optimal size are rarely obvious. This justifies the interest in developing efficient sampling techniques, which involves anticipating the link between performance and experience regarding the accuracy of the system we are generating. At this point, correctness with respect to the working hypotheses and robustness against changes to them should be guaranteed in order to supply a practical solution. The former ensures the effectiveness of the proposed strategy in the framework considered, while the latter enables fluctuations in the learning conditions to be assimilated without compromising correctness, thus lending reliability to our calculations. An area of work that is particularly sensitive to these inconveniences is natural language processing (nlp), the components of which are increasingly based on ml [3, 50].


Early stopping by correlating online indicators in neural networks

Manuel Vilares Ferro, Yerai Doval Mosquera, Francisco J. Ribadas Pena, Victor M. Darriba Bilbao

arXiv.org Artificial Intelligence

In order to minimize the generalization error in neural networks, a novel technique for identifying overfitting phenomena during training is formally introduced. This supports a reliable and trustworthy early stopping condition, thus improving the predictive power of that type of modeling. Our proposal exploits the correlation over time in a collection of online indicators, namely characteristic functions indicating whether a set of hypotheses is met, associated with a range of independent stopping conditions built from a canary judgment to evaluate the presence of overfitting. In that way, we provide a formal basis for the decision to interrupt the learning process. As opposed to previous approaches focused on a single criterion, we take advantage of subsidiarities between independent assessments, thus seeking both a wider operating range and greater diagnostic reliability. With a view to illustrating the effectiveness of the halting condition described, we choose to work in the sphere of natural language processing, an operational continuum increasingly based on machine learning. As a case study, we focus on parser generation, one of the most demanding and complex tasks in the domain. The selection of cross-validation as a canary function enables a fair comparison with the most representative early stopping conditions based on overfitting identification, pointing to a promising start toward optimal control of bias and variance.
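The core idea, halting only when several independent online indicators agree that the canary signal shows overfitting, can be sketched as below. This is a simplified illustration, not the authors' formulation: the three indicators loosely follow classic early-stopping criteria (generalization loss, consecutive loss increases, patience), and all thresholds and the quorum rule are illustrative assumptions.

```python
def generalization_loss(val_losses, threshold=0.02):
    """Fires when current validation loss exceeds the best so far
    by more than `threshold` in relative terms."""
    best = min(val_losses)
    return val_losses[-1] / best - 1.0 > threshold

def consecutive_increases(val_losses, strips=3):
    """Fires when validation loss rose in each of the last `strips` steps."""
    if len(val_losses) <= strips:
        return False
    recent = val_losses[-(strips + 1):]
    return all(b > a for a, b in zip(recent, recent[1:]))

def no_improvement(val_losses, patience=5):
    """Fires when the best loss was seen more than `patience` steps ago."""
    return len(val_losses) - 1 - val_losses.index(min(val_losses)) > patience

# Each indicator is a characteristic function over the canary stream
# (here, a validation-loss history standing in for cross-validation).
INDICATORS = [generalization_loss, consecutive_increases, no_improvement]

def should_stop(val_losses, quorum=2):
    """Halt training only when at least `quorum` indicators fire at once,
    trading the fragility of any single criterion for agreement."""
    return sum(ind(val_losses) for ind in INDICATORS) >= quorum
```

Requiring a quorum rather than a single firing indicator is what widens the operating range: an indicator that triggers spuriously on a noisy loss curve is outvoted unless others corroborate it.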