
Collaborating Authors



Absolute convergence and error thresholds in non-active adaptive sampling

Ferro, Manuel Vilares, Bilbao, Victor M. Darriba, Ferro, Jesús Vilares

arXiv.org Artificial Intelligence

In this sense, the operating principle of adaptive sampling is simple: begin with an initial number of examples, then iteratively learn the model, evaluate it and acquire additional observations if necessary. Accordingly, two questions must be considered: determining the training data to be acquired at each cycle, and defining a halting condition that terminates the loop once the learner has achieved a certain degree of performance. Both tasks make the formalization of scheduling and stopping criteria (John and Langley, 1996) research issues in their own right. The former has been studied for decades in terms of fixed (John and Langley, 1996; Provost et al., 1999) or adaptive (Provost et al., 1999) sequencing, and is not our objective. Halting criteria, by contrast, are independent of the scheduling and mostly start from the hypothesis that learning curves are well-behaved, including an initial steeply sloping portion, a more gently sloping middle one and a final balanced zone (Meek et al., 2002). The purpose is then to identify the moment at which such a curve reaches a plateau, namely when adding more data instances no longer improves accuracy, although this condition rarely holds strictly: extra learning effort almost always yields modest gains. This justifies the interest in having a proximity condition, understood as a measure of the degree of convergence attained from a given iteration, rather than a stopping one. In short, this makes it possible to select the level of reliability in predicting a learner's performance, both in terms of accuracy and computational cost.
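The loop described in the abstract — grow the sample, retrain, evaluate, and check a proximity condition on the accuracy gain — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `train` and `evaluate` callables, and the `step`, `epsilon` and `patience` parameters, are hypothetical names, and fixed sequencing is used for simplicity.

```python
def adaptive_sampling(train, evaluate, pool, step=1000, epsilon=0.02, patience=2):
    """Grow the training set by `step` examples per cycle until the
    accuracy gain stays below `epsilon` for `patience` consecutive
    cycles (a proximity condition rather than a hard stopping rule)."""
    model, acc = None, 0.0
    prev_acc, stable = 0.0, 0
    n = step
    while n <= len(pool):
        model = train(pool[:n])        # learn on the current sample
        acc = evaluate(model)          # measure current performance
        if acc - prev_acc < epsilon:   # gain below the threshold?
            stable += 1
            if stable >= patience:     # plateau reached: converge here
                return model, n, acc
        else:
            stable = 0                 # curve still climbing; reset
        prev_acc = acc
        n += step                      # fixed scheduling for simplicity
    return model, min(n, len(pool)), acc   # pool exhausted first
```

With a saturating toy accuracy function, the loop halts once successive gains fall under the threshold rather than consuming the whole pool, which is the point of a proximity condition.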


Adaptive scheduling for adaptive sampling in POS taggers construction

Ferro, Manuel Vilares, Bilbao, Victor M. Darriba, Ferro, Jesús Vilares

arXiv.org Artificial Intelligence

However, managing large amounts of information is an expensive, time-consuming and non-trivial activity, especially when expert knowledge is needed. Furthermore, having access to vast databases does not imply that ml algorithms must use them all; a subset is therefore preferred, provided it does not reduce the quality of the mined knowledge. Such observations then supply the same learning power at far less computational cost and allow the training process to be sped up, although their nature and optimal size are rarely obvious. This justifies the interest in developing efficient sampling techniques, which involves anticipating the link between performance and experience in terms of the accuracy of the system being generated. At this point, correctness with respect to the working hypotheses and robustness against changes to them should be guaranteed in order to supply a practical solution. The former ensures the effectiveness of the proposed strategy in the framework considered, while the latter enables fluctuations in the learning conditions to be assimilated without compromising correctness, thus making our calculations reliable. An area of work that is particularly sensitive to these issues is natural language processing (nlp), whose components are increasingly based on ml [3, 50].


Modeling of learning curves with applications to pos tagging

Ferro, Manuel Vilares, Bilbao, Victor M. Darriba, Pena, Francisco J. Ribadas

arXiv.org Artificial Intelligence

An algorithm is introduced to estimate the evolution of learning curves over an entire training database from the results obtained on a portion of it, using a functional strategy. We iteratively approximate the sought value at the desired time, independently of the learning technique used, once a point in the process called the prediction level has been passed. The proposal proves to be formally correct with respect to our working hypotheses and includes a reliable proximity condition. This allows the user to fix a convergence threshold with respect to the accuracy finally achievable, which extends the concept of a stopping criterion and appears effective even in the presence of distorting observations. Our aim is to evaluate the training effort, supporting decision making so as to reduce the need for both human and computational resources during the learning process. The proposal is of interest in at least three operational procedures. The first is the anticipation of accuracy gain, with the purpose of measuring how much work is needed to achieve a certain degree of performance. The second relates to comparing the efficiency of systems at training time, with the objective of completing this task only for the one that best suits our requirements. The third, prediction of accuracy, is also valuable for customizing systems, since we can estimate in advance the impact of settings on both performance and development costs. Using the generation of part-of-speech taggers as an example application, the experimental results are consistent with our expectations.
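The idea of extrapolating a learning curve from a portion of the data can be illustrated with a deliberately simplified model. The sketch below assumes the error decays as an inverse power law with asymptotic accuracy 1, fitted by linear least squares in log-log space; the paper's actual functional strategy and prediction level are more general, and all names here are illustrative.

```python
import math

def fit_power_law(sizes, accuracies):
    """Fit error(n) = b * n**(-c) by least squares in log-log space.
    Assumes accuracy tends to 1 as n grows (a simplifying assumption,
    not the paper's functional model)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(1.0 - a) for a in accuracies]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -slope   # b, c

def predict_accuracy(n, b, c):
    """Extrapolate the fitted curve to a larger training-set size n."""
    return 1.0 - b * n ** (-c)
```

Given accuracies measured on early portions of the data, the fitted curve can then be queried at the full database size to anticipate the accuracy gain before spending the training effort.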