Convergence of Large Margin Separable Linear Classification

Zhang, Tong

Neural Information Processing Systems 

Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that, under appropriate assumptions, the expected misclassification error of the computed "optimal hyperplane" approaches zero at a rate proportional to the inverse of the training sample size. This rate is usually characterized by the margin and the maximum norm of the input data. In this paper, we argue that another quantity, namely the robustness of the input data distribution, also plays an important role in characterizing the convergence behavior of the expected misclassification error. Based on this concept of robustness, we show that for a large margin separable linear classification problem, the expected misclassification error may converge to zero exponentially fast in the training sample size.

1 Introduction

We consider the binary classification problem: to determine a label y ∈ {−1, 1} associated with an input vector x. A useful method for solving this problem is to use linear discriminant functions.
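As a minimal illustration of the separable setting (not the paper's analysis, which concerns the SVM "optimal hyperplane"), the sketch below uses a plain perceptron on hypothetical 2-D data that is separable with a positive margin; by Novikoff's classical bound, the number of perceptron mistakes is controlled by the margin and the maximum data norm, the same two quantities the abstract mentions. The data generation and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical separable 2-D data: the label is the sign of the first
# coordinate, with points shifted away from the boundary so the data
# has a positive margin.
X = rng.uniform(-1, 1, size=(200, 2))
X[:, 0] += np.where(X[:, 0] >= 0, 0.2, -0.2)  # enforce a margin of at least 0.2
y = np.where(X[:, 0] >= 0, 1, -1)

# Perceptron training: update the weight vector on every misclassified
# point.  On margin-separable data the total number of updates is bounded
# by (R / gamma)^2, where R bounds the input norms and gamma is the margin.
w = np.zeros(2)
for _ in range(100):                 # epochs; far more than the mistake bound needs
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:       # misclassified (or exactly on the boundary)
            w += yi * xi
            errors += 1
    if errors == 0:                  # a separating hyperplane has been found
        break

train_error = np.mean(np.sign(X @ w) != y)
print(train_error)                   # 0.0 once the data is separated
```

The perceptron only finds *some* separating hyperplane; the large margin methods discussed in the paper instead compute the hyperplane maximizing the margin, which is what the cited convergence rates for the expected misclassification error refer to.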
