An Improved Oscillating-Error Classifier with Branching

Greer, Kieran

arXiv.org Machine Learning 

This paper extends earlier work based on an oscillating error correction technique [7]. The method uses an error correction update that includes a very simple rule: either add or subtract the error adjustment, depending on whether the variable value is currently larger or smaller than the desired value. This is related to cellular automata [2], where the small add-or-subtract decision gives the classifier an added dimension of flexibility. The results reported in the first paper were unusually good over a wide range of datasets, and it was subsequently found that an error had been made in how the classifier decides on the correct output category. The earlier paper measured the error between the desired output value and the value produced by the corresponding classifier; if the error was small enough, the classification was considered correct. When training the classifier, the data rows for each category would be put together and averaged. The classifier would then try to learn these average values, leading to a distinct weight set for each output category. It was overlooked that even if the desired output category's weight set produced a small enough error on the input data row, one of the other category weight sets could produce an even smaller error, so a correct classification must compare the errors across all category weight sets.
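A minimal sketch of the ideas above, in Python. The per-feature weights, the learning step, and the helper names are assumptions for illustration, not taken from the paper; the sketch shows the oscillating add/subtract update on weights trained toward per-category averaged rows, and a decision rule that compares the error across all category weight sets rather than checking only the desired one.

```python
def train_weights(rows, lr=0.05, epochs=500):
    """Learn one weight per feature for a single category (assumed scheme).

    The category's rows are averaged, and each weight is nudged so that
    weight * input approaches the averaged value, using the oscillating
    rule: subtract the error step when the produced value overshoots the
    target, add it when it undershoots.
    """
    n = len(rows[0])
    avg = [sum(col) / len(rows) for col in zip(*rows)]
    w = [1.0] * n
    for _ in range(epochs):
        for row in rows:
            for i in range(n):
                produced = w[i] * row[i]
                step = lr * abs(avg[i] - produced)
                if produced > avg[i]:
                    w[i] -= step   # value too large: subtract adjustment
                elif produced < avg[i]:
                    w[i] += step   # value too small: add adjustment
    return w, avg


def category_error(row, w, avg):
    """Total absolute error of one category's weight set on a data row."""
    return sum(abs(w[i] * row[i] - avg[i]) for i in range(len(row)))


def classify(row, models):
    """Pick the category whose weight set yields the SMALLEST error,
    comparing all categories -- the correction described above."""
    return min(models, key=lambda c: category_error(row, *models[c]))
```

Usage would be to train one `(weights, average)` pair per category, e.g. `models = {'a': train_weights(rows_a), 'b': train_weights(rows_b)}`, then call `classify(row, models)` on unseen rows.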
