An Algorithmic Framework for Computing Validation Performance Bounds by Using Suboptimal Models

arXiv.org Machine Learning

Practical model-building processes are often time-consuming because many different models must be trained and validated. In this paper, we introduce a novel algorithm for computing lower and upper bounds of model validation errors without actually training the model itself. A key idea behind our algorithm is to use side information available from a suboptimal model. If a reasonably good suboptimal model is available, our algorithm can compute lower and upper bounds of many useful quantities for making inferences about the unknown target model. We demonstrate the advantage of our algorithm in the context of model selection for regularized learning problems.
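For intuition, here is a minimal, hypothetical sketch of the kind of bound such side information can provide; it is not the paper's exact algorithm. Assuming a lambda-strongly convex regularized objective, the duality gap of a suboptimal weight vector confines the unknown optimum to a ball around it, which in turn confines every validation prediction to an interval and yields bounds on the 0-1 validation error. All function and variable names below are illustrative.

    import numpy as np

    def prediction_bounds(w_subopt, duality_gap, lam, X_val):
        # Hypothetical sketch: for a lam-strongly convex objective, the unknown
        # optimal weight vector lies within a ball of radius sqrt(2*gap/lam)
        # around the suboptimal one, so each prediction sits in an interval.
        radius = np.sqrt(2.0 * duality_gap / lam)
        centre = X_val @ w_subopt                       # suboptimal model's scores
        slack = radius * np.linalg.norm(X_val, axis=1)  # worst-case deviation
        return centre - slack, centre + slack

    def validation_error_bounds(w_subopt, duality_gap, lam, X_val, y_val):
        # Bound the 0-1 validation error of the (untrained) optimal model,
        # assuming labels y_val in {-1, +1}.
        lo, hi = prediction_bounds(w_subopt, duality_gap, lam, X_val)
        margin_hi = np.where(y_val > 0, hi, -lo)  # best margin the optimum can achieve
        margin_lo = np.where(y_val > 0, lo, -hi)  # worst margin the optimum can achieve
        lower = np.mean(margin_hi < 0)  # samples the optimum must misclassify
        upper = np.mean(margin_lo < 0)  # samples the optimum might misclassify
        return lower, upper

In a model-selection loop, bounds of this kind can rule out candidate models whose error lower bound already exceeds the best upper bound found so far, without fully training those models.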


How do you know if your model is going to work? Part 4: Cross-validation techniques

#artificialintelligence

In this article we conclude our four-part series on basic model testing. When fitting and selecting models in a data science project, how do you know that your final model is good? And how sure are you that it's better than the models you rejected? In this concluding Part 4 of our four-part mini-series "How do you know if your model is going to work?" we demonstrate cross-validation techniques. Cross-validation attempts to improve statistical efficiency by repeatedly splitting the data into training and test sets and re-running model fitting and evaluation on each split.
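As a concrete illustration of the idea (scikit-learn is used here purely as an example and is not referenced in the article), 5-fold cross-validation refits and scores the model on each split:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    # Repeatedly split the data into training and held-out folds, refit the
    # model on each training split, and evaluate it on the held-out fold.
    X, y = load_breast_cancer(return_X_y=True)
    model = LogisticRegression(max_iter=5000)

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print("per-fold accuracy:", scores)
    print("mean +/- std: %.3f +/- %.3f" % (scores.mean(), scores.std()))

The spread of the per-fold scores gives a rough sense of how much a single train/test estimate could vary.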


How To Fine Tune Your Machine Learning Models To Improve Forecasting Accuracy?

#artificialintelligence

Fine-tuning a machine learning predictive model is a crucial step in improving the accuracy of forecast results. In the recent past, I have written a number of articles that explain how machine learning works and how to enrich and decompose the feature set to improve the accuracy of your machine learning models. I am often asked which techniques can be used to tune the forecasting models once the features are stable and the feature set has been decomposed. When everything else has been tried, we should look to tune our machine learning models. This diagram illustrates how parameters can depend on one another.
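To make the tuning step concrete, one common approach (shown here as an illustrative sketch, not necessarily the exact procedure the article describes) is to search interdependent hyper-parameters jointly with cross-validated grid search, using a time-series-aware split for forecasting data:

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

    # Synthetic stand-in data; in practice use your decomposed feature set.
    X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=0)

    # Because parameters such as learning_rate and n_estimators depend on one
    # another, they are tuned jointly rather than one at a time.
    param_grid = {
        "learning_rate": [0.01, 0.05, 0.1],
        "n_estimators": [100, 300, 500],
        "max_depth": [2, 3],
    }
    search = GridSearchCV(
        GradientBoostingRegressor(random_state=0),
        param_grid,
        cv=TimeSeriesSplit(n_splits=5),
        scoring="neg_mean_absolute_error",
    )
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("best CV MAE:", -search.best_score_)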


The VIVA Method: A Life-Cycle Independent Approach to KBS Validation

AAAI Conferences

This paper describes the VIVA method, a life-cycle independent method for the validation of Knowledge-Based Systems (KBS). The method is based upon the VIVA validation framework: a set of products by which a KBS development can be described. By assessing properties of these products, and properties of the links between the products, a framework for validation throughout the KBS life-cycle is defined. The VIVA Technical Annex (VIVA 92) identifies the need for a KBS validation method that covers the whole of the development life-cycle. This need arises from identified problems with software-based validation, which can be summarised as follows: it is not possible to determine the validity of a system from the software alone.
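As a purely hypothetical illustration of the framework's shape (not the published VIVA tooling), a KBS development could be modelled as products plus links between them, each carrying named property checks that can be assessed at any point in the life-cycle:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    Check = Callable[[dict], bool]   # a property check over an artefact (or pair)

    @dataclass
    class Product:
        name: str
        artefact: dict               # e.g. requirements, knowledge model, code
        checks: Dict[str, Check] = field(default_factory=dict)

    @dataclass
    class Link:
        source: str
        target: str
        checks: Dict[str, Check] = field(default_factory=dict)

    def assess(products: List[Product], links: List[Link]) -> List[Tuple[str, str, bool]]:
        # Run every property check on every product and every link.
        results = []
        by_name = {p.name: p for p in products}
        for p in products:
            for check_name, check in p.checks.items():
                results.append((p.name, check_name, check(p.artefact)))
        for lk in links:
            pair = {"source": by_name[lk.source].artefact,
                    "target": by_name[lk.target].artefact}
            for check_name, check in lk.checks.items():
                results.append((f"{lk.source}->{lk.target}", check_name, check(pair)))
        return results

Because the checks are attached to products and links rather than to development phases, the same assessment can be run at any stage, which is the sense in which the method is life-cycle independent.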


Validated Retrieval in Case-Based Reasoning

AAAI Conferences

We combine simple retrieval with domain-specific validation of retrieved cases to produce a useful practical tool for case-based reasoning. Based on 200 real-world cases, we retrieve between three and six cases over a wide range of new problems. This represents a selectivity ranging from 1.5% to 3%, compared with an average selectivity of 11% from simple retrieval alone.
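The pattern is straightforward to sketch (a hypothetical illustration, not the system described in the paper): a cheap similarity-based retrieval proposes candidate cases, then a domain-specific validation predicate filters them down to the few that truly apply.

    import numpy as np

    def validated_retrieval(case_features, cases, query, validate, k=20):
        # Simple retrieval: nearest neighbours of the query by Euclidean distance.
        dists = np.linalg.norm(case_features - query, axis=1)
        candidates = np.argsort(dists)[:k]
        # Domain-specific validation of each retrieved case against the query.
        return [cases[i] for i in candidates if validate(cases[i], query)]

With a case base of 200 cases, a validation predicate that lets through only a handful of the candidates would give the kind of 1.5%-3% selectivity reported above.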