
94cb02feb750f20bad8a85dfe7e18d11-AuthorFeedback.pdf

Neural Information Processing Systems

The contributions of CSER put it beyond merely a "tweak". Our GRBS compressor (Section 3.3) reduces We will revise the claim by adding context. "Is it using NCCL?" All the algorithms use the same communication library: Horovod with NCCL. "How is the experimental setup chosen?" In this work, we aim to show that we can significantly reduce the heavy inter-node communication. Applying error reset to ADAM is future work.






CSER: Communication-efficient SGD with Error Reset

Neural Information Processing Systems

The scalability of Distributed Stochastic Gradient Descent (SGD) is today limited by communication bottlenecks. The key idea in CSER is first a new technique called "error reset" that adapts arbitrary compressors for SGD, producing bifurcated local models with periodic reset of the resulting local residual errors. Second, we introduce partial synchronization for both the gradients and the models, leveraging the advantages of both. We prove the convergence of CSER for smooth non-convex problems. Empirical results show that when combined with highly aggressive compressors, the CSER algorithms accelerate distributed training by nearly $10\times$ for CIFAR-100, and by $4.5\times$ for ImageNet.


Community of ethical hackers needed to prevent AI's looming 'crisis of trust'

#artificialintelligence

The Artificial Intelligence industry should create a global community of hackers and "threat modellers" dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it's too late. This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge's Centre for the Study of Existential Risk (CSER), who have authored a new "call to action" published in the journal Science. They say that companies building intelligent technologies should harness techniques such as "red team" hacking, audit trails and "bias bounties" – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use on the wider public. Otherwise, the industry faces a "crisis of trust" in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil. The novelty and "black box" nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing or third-party analysis, according to lead author Dr Shahar Avin of CSER.


CSER: Communication-efficient SGD with Error Reset

Xie, Cong, Zheng, Shuai, Koyejo, Oluwasanmi, Gupta, Indranil, Li, Mu, Lin, Haibin

arXiv.org Machine Learning

The scalability of Distributed Stochastic Gradient Descent (SGD) is today limited by communication bottlenecks. We propose a novel SGD variant: Communication-efficient SGD with Error Reset, or CSER. The key idea in CSER is first a new technique called "error reset" that adapts arbitrary compressors for SGD, producing bifurcated local models with periodic reset of the resulting local residual errors. Second, we introduce partial synchronization for both the gradients and the models, leveraging the advantages of both. We prove the convergence of CSER for smooth non-convex problems. Empirical results show that when combined with highly aggressive compressors, the CSER algorithms: i) cause no loss of accuracy, and ii) accelerate the training by nearly $10\times$ for CIFAR-100, and by $4.5\times$ for ImageNet.
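The error-reset idea in the abstract can be illustrated with a minimal single-worker sketch. This is a hedged simplification, not the paper's implementation: it uses a plain top-k sparsifier in place of the paper's GRBS compressor, omits the multi-worker partial synchronization, and all function names are hypothetical.

```python
import numpy as np

def topk_compress(v, k):
    """A simple top-k sparsifier standing in for an arbitrary compressor.
    (GRBS, the compressor from the paper, is not reproduced here.)"""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest-magnitude entries
    out[idx] = v[idx]
    return out

def local_step_with_error_reset(params, grad, error, lr, k, reset=False):
    """One worker's update with error reset (simplified, single worker).

    The worker compresses grad + accumulated residual error, applies the
    compressed part as the update, and keeps the residual locally. On a
    reset step the residual is folded back into the local model and then
    cleared, so errors do not accumulate unboundedly across rounds.
    """
    corrected = grad + error
    compressed = topk_compress(corrected, k)
    new_error = corrected - compressed        # residual kept locally
    params = params - lr * compressed         # update with the compressed part
    if reset:
        params = params - lr * new_error      # fold residual into the local model
        new_error = np.zeros_like(new_error)  # ...and reset it
    return params, new_error
```

On a reset step the compressed update plus the folded-in residual together equal the full (error-corrected) gradient step, which is why the periodic reset keeps the local model from drifting.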


Functionize raises $16 million to automate software testing with AI

#artificialintelligence

Functionize, a San Jose, California-based startup developing a cloud-based platform that autonomously susses out software bugs, today announced that it has raised $16 million in series A financing contributed by Canvas Ventures. The capital infusion -- which comes after a $2.5 million seed round in February 2018 and brings the company's total raised to $18.2 million, according to Crunchbase -- will be used to "accelerate adoption" of its platform, said CEO and founder Tamas Cser. "Software testing has endured what I term a 'QA winter,'" Cser, who cofounded Functionize with Ray Grieselhuber in 2015, said. "This means developers and testers still maintain tests the same way as they did in the early ages of the internet." Functionize's software as a service (SaaS) integrates with DevOps platforms like Bamboo, Jenkins, and AWS CodePipeline, and leverages natural language processing to enable developers to type out tests in plain English, which it converts into test cases.