Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters
We consider logistic regression, and more generally, linear classification, in the streaming model. In our setting, we are given a dataset consisting of T examples (x_t, y_t), where t ∈ [T], x_t ∈ R^d, and y_t ∈ {−1, 1}. The examples arrive one by one, and moreover, the nonzero coordinates of each example x_t arrive one by one.
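To make the setting concrete, here is a minimal sketch of one-pass logistic regression over such a stream, where each example is presented as its list of nonzero coordinates. This is a generic SGD baseline for the streaming model described above, not the algorithm of the paper; the function name and stream format are illustrative assumptions.

```python
import math

def stream_logistic_sgd(stream, d, lr=0.1):
    """One-pass SGD for logistic regression over a stream of sparse examples.

    `stream` yields (coords, y) pairs, where `coords` is a list of
    (index, value) nonzero coordinates of x_t and y is in {-1, +1}.
    Illustrative baseline only, not the paper's method.
    """
    w = [0.0] * d
    for coords, y in stream:
        # The margin y * <w, x_t> touches only the nonzero coordinates of x_t.
        margin = y * sum(w[i] * v for i, v in coords)
        # Gradient of log(1 + exp(-margin)) with respect to w is g * x_t,
        # where g = -y / (1 + exp(margin)).
        g = -y / (1.0 + math.exp(margin))
        for i, v in coords:
            w[i] -= lr * g * v
    return w
```

Note that both the inner product and the update cost time proportional to the number of nonzero coordinates of x_t, which is what makes the sparse, one-coordinate-at-a-time arrival model natural here.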
A Stability-based Validation Procedure for Differentially Private Machine Learning
Kamalika Chaudhuri, Staal A. Vinterbo
Differential privacy is a cryptographically motivated definition of privacy which has gained considerable attention in the algorithms, machine-learning, and data-mining communities. While there has been an explosion of work on differentially private machine learning algorithms, a major barrier to achieving end-to-end differential privacy in practical machine learning applications is the lack of an effective procedure for differentially private parameter tuning: determining the parameter value, such as a bin size in a histogram or a regularization parameter, that is suitable for a particular application. In this paper, we introduce a generic validation procedure for differentially private machine learning algorithms that applies when a certain stability condition holds on the training algorithm and the validation performance metric. The training data size and the privacy budget used for training in our procedure are independent of the number of parameter values searched over. We apply our generic procedure to two fundamental tasks in statistics and machine learning, training a regularized linear classifier and building a histogram density estimator, resulting in end-to-end differentially private solutions for these problems.