FilterBoost: Regression and Classification on Large Datasets

Neural Information Processing Systems

We study boosting in the filtering setting, where the booster draws examples from an oracle instead of using a fixed training set and so may train efficiently on very large datasets. Our algorithm, which is based on a logistic regression technique proposed by Collins, Schapire, & Singer, requires fewer assumptions to achieve bounds equivalent to or better than previous work. Moreover, we give the first proof that the algorithm of Collins et al. is a strong PAC learner, albeit within the filtering setting. Our proofs demonstrate the algorithm's strong theoretical properties for both classification and conditional probability estimation, and we validate these results through extensive experiments. Empirically, our algorithm proves more robust to noise and overfitting than batch boosters in conditional probability estimation and proves competitive in classification.
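The core idea — drawing examples from an oracle and keeping each one with a probability given by a logistic weight, so the weak learner trains on freshly filtered samples instead of a fixed reweighted set — can be illustrated with a toy sketch. This is not the paper's exact algorithm: the oracle, the decision-stump weak learner, and the fixed weak-hypothesis weight `alpha` are all simplifications chosen for brevity.

```python
import math
import random

random.seed(0)

def oracle():
    # Toy example oracle standing in for the paper's example source:
    # labels follow the sign of x0 + x1.
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    return x, (1 if x[0] + x[1] > 0 else -1)

def filter_draw(F, max_tries=10000):
    # Rejection filter: accept (x, y) with the logistic weight
    # 1 / (1 + exp(y * F(x))), so misclassified examples are kept more often.
    for _ in range(max_tries):
        x, y = oracle()
        if random.random() < 1.0 / (1.0 + math.exp(y * F(x))):
            return x, y
    return x, y  # fallback if nearly everything is rejected

def best_stump(sample):
    # Weak learner: axis-aligned threshold-at-zero decision stumps.
    best = None
    for j in (0, 1):
        for s in (1, -1):
            err = sum(1 for x, y in sample
                      if s * (1 if x[j] > 0 else -1) != y)
            if best is None or err < best[0]:
                best = (err, j, s)
    _, j, s = best
    return lambda x: s * (1 if x[j] > 0 else -1)

def boost(rounds=5, m=200, alpha=0.5):
    # Each round: filter a fresh sample from the oracle, fit a weak
    # hypothesis on it, and add it to the combined predictor F.
    ensemble = []
    F = lambda x: sum(a * h(x) for a, h in ensemble)
    for _ in range(rounds):
        sample = [filter_draw(F) for _ in range(m)]
        ensemble.append((alpha, best_stump(sample)))
    return F

F = boost()
test_acc = sum((1 if F(x) >= 0 else -1) == y
               for x, y in (oracle() for _ in range(1000))) / 1000.0
```

Because every round draws a new filtered sample, memory use is independent of the dataset size — the property that lets filtering boosters scale to very large datasets.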


Algorithms and hardness results for parallel large margin learning

Neural Information Processing Systems

We study the fundamental problem of learning an unknown large-margin halfspace in the context of parallel computation. Our main positive result is a parallel algorithm for learning a large-margin halfspace that is based on interior point methods from convex optimization and fast parallel algorithms for matrix computations. We show that this algorithm learns an unknown gamma-margin halfspace over n dimensions using poly(n,1/gamma) processors and runs in time ~O(1/gamma) + O(log n). In contrast, naive parallel algorithms that learn a gamma-margin halfspace in time that depends polylogarithmically on n have Omega(1/gamma^2) runtime dependence on gamma. Our main negative result deals with boosting, which is a standard approach to learning large-margin halfspaces. We give an information-theoretic proof that in the original PAC framework, in which a weak learning algorithm is provided as an oracle that is called by the booster, boosting cannot be parallelized: the ability to call the weak learner multiple times in parallel within a single boosting stage does not reduce the overall number of successive stages of boosting that are required.
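The Omega(1/gamma^2) dependence that the abstract contrasts with its ~O(1/gamma) parallel algorithm is the classic sequential behavior of margin-based learners, which a small sketch can make concrete. This is only an illustration of the baseline, not the paper's interior-point method: the hidden unit vector `w_star`, the dimension, and the synthetic gamma-margin data are all invented for the example.

```python
import random

random.seed(1)

def unit(v):
    norm = sum(c * c for c in v) ** 0.5
    return [c / norm for c in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Synthetic gamma-margin data: a hidden unit vector w_star labels
# unit-length points, and points within gamma of the boundary are discarded.
n, gamma = 5, 0.1
w_star = unit([random.gauss(0, 1) for _ in range(n)])
data = []
while len(data) < 200:
    x = unit([random.gauss(0, 1) for _ in range(n)])
    margin = dot(w_star, x)
    if abs(margin) >= gamma:
        data.append((x, 1 if margin > 0 else -1))

# Classic perceptron: at most (1/gamma)^2 mistakes on unit vectors with
# margin gamma, so the update count grows quadratically as gamma shrinks --
# the sequential 1/gamma^2 dependence the abstract refers to.
w = [0.0] * n
mistakes = 0
changed = True
while changed:
    changed = False
    for x, y in data:
        if y * dot(w, x) <= 0:
            w = [wi + y * xi for wi, xi in zip(w, x)]
            mistakes += 1
            changed = True
```

Each perceptron update depends on the weight vector produced by the previous one, which is exactly the serial chain that the paper's negative result shows boosting cannot escape either.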


Covid: JCVI scientists to announce decision on booster rollout

BBC News

The committee only advises the government, and the final decision on measures to combat Covid always lies with the politicians. But Prime Minister Boris Johnson has said he intends to wait for the recommendations from the scientific experts who make up the committee.


FDA Panel Decision on COVID Boosters "Not the End of the Story," Fauci Says

Mother Jones

On Friday, an independent advisory panel for the Food and Drug Administration voted against recommending a Pfizer booster shot for the general public. It did, however, vote to recommend that those 65 and older and at risk of severe disease be eligible for a third dose of the COVID vaccine. Top health official Dr. Anthony Fauci, a proponent of the wide use of boosters, responded to the decision on Sunday, saying it "is not the end of the story." "I don't think they made a mistake," Fauci told CNN's Jake Tapper on State of the Union, "but the one thing I think people need to realize: That data are coming in, literally on a daily and weekly basis…They're going to continue to look at this literally in real-time." Dr. Anthony Fauci to @jaketapper: The FDA advisory committee did not make a mistake on the booster shot recommendations and the FDA "absolutely should not ignore them."