
Robot Talk Episode 134 – Robotics as a hobby, with Kevin McAleer

Robohub

Claire chatted to Kevin McAleer from kevsrobots about how to get started building robots at home. Kevin McAleer is a hobbyist robotics fanatic who likes to build robots, share videos about them on YouTube, and teach people how to do the same. Kev has been building robots since 2019, when he got his first 3D printer and wanted to make more interesting builds. Kev has a degree in Computer Science, and because his day job is relatively hands-off, this hobby gives his creativity an outlet. Kev is a huge fan of Python and MicroPython for embedded devices, and runs the website kevsrobots.com.




Near-Optimal Edge Evaluation in Explicit Generalized Binomial Graphs

Sanjiban Choudhury, Shervin Javdani, Siddhartha Srinivasa, Sebastian Scherer

Neural Information Processing Systems

In this paper, we do so by drawing a novel equivalence between motion planning and the Bayesian active learning paradigm of decision region determination (DRD). Unfortunately, a straightforward application of existing methods requires computation exponential in the number of edges in a graph.
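To make the problem setting concrete, here is a toy sketch of lazy edge evaluation, the scenario this line of work addresses. This is not the paper's DRD-based algorithm; it is a minimal illustrative baseline in which edge validity checks are expensive, so the planner searches optimistically and only evaluates edges that lie on the current candidate shortest path. The graph, node names, and collision check below are all hypothetical.

```python
import heapq

def dijkstra(graph, src, dst, invalid):
    """Shortest path in graph {u: {v: cost}}, treating `invalid` edges
    as removed and all other edges as optimistically valid."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u].items():
            if (u, v) in invalid:
                continue
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def lazy_shortest_path(graph, src, dst, edge_is_valid):
    """Plan optimistically, then pay the expensive evaluation cost only
    for edges on the candidate path; replan when an edge fails."""
    evaluated, invalid = set(), set()
    while True:
        path = dijkstra(graph, src, dst, invalid)
        if path is None:
            return None                      # no collision-free path exists
        unchecked = [e for e in zip(path, path[1:]) if e not in evaluated]
        if not unchecked:
            return path                      # every edge on path verified
        for e in unchecked:                  # expensive check happens here
            evaluated.add(e)
            if not edge_is_valid(e):
                invalid.add(e)
                break                        # replan after first failure

# Hypothetical example: the cheap route A->B->D contains an invalid edge.
graph = {
    "A": {"B": 1.0, "C": 2.0},
    "B": {"D": 1.0},
    "C": {"D": 2.0},
    "D": {},
}
blocked = {("B", "D")}                       # edge that fails its check
path = lazy_shortest_path(graph, "A", "D", lambda e: e not in blocked)
print(path)  # ['A', 'C', 'D'] -- detour around the invalid edge
```

The point of the abstract's DRD framing is to choose *which* edges to evaluate more cleverly than this path-by-path loop, since each evaluation is the dominant cost.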



Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems

Celestine Dünner, Thomas Parnell, Martin Jaggi

Neural Information Processing Systems

We propose a generic algorithmic building block to accelerate training of machine learning models on heterogeneous compute systems. Our scheme makes it possible to efficiently employ compute accelerators such as GPUs and FPGAs for the training of large-scale machine learning models when the training data exceeds their memory capacity. It also adapts to any system's memory hierarchy in terms of size and processing speed. Our technique is built upon novel theoretical insights regarding primal-dual coordinate methods, and uses duality gap information to dynamically decide which part of the data should be made available for fast processing. To illustrate the power of our approach, we demonstrate its performance for the training of generalized linear models on a large-scale dataset exceeding the memory size of a modern GPU, showing an order-of-magnitude speedup over existing approaches.
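The core idea (selecting a memory-limited block of data by duality gap) can be sketched for a simple case. The code below is an illustrative toy, not the paper's exact algorithm: for a squared-loss linear model the Fenchel duality gap decomposes over training examples, so ranking examples by their per-example gap suggests which block is worth copying into the accelerator's limited fast memory. All function names and the `capacity` parameter are assumptions for this sketch.

```python
import numpy as np

def per_example_gaps(X, y, alpha, w):
    """Per-example Fenchel duality gap for squared loss
    l(z) = 0.5*(z - y)^2, whose conjugate is l*(u) = 0.5*u**2 + u*y.
    Each term l(m_i) + l*(-alpha_i) + alpha_i*m_i is >= 0 by Fenchel-Young."""
    m = X @ w                                # margins w . x_i
    primal = 0.5 * (m - y) ** 2              # l(m_i)
    conj = 0.5 * alpha ** 2 - alpha * y      # l*(-alpha_i)
    return primal + conj + alpha * m

def select_for_accelerator(X, y, alpha, w, capacity):
    """Indices of the `capacity` examples with the largest duality gap,
    i.e. the block chosen for fast (GPU/FPGA) memory in the next round."""
    gaps = per_example_gaps(X, y, alpha, w)
    return np.argsort(gaps)[::-1][:capacity]

# Hypothetical usage: 1000 examples, but fast memory holds only 64.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.normal(size=1000)
w = rng.normal(size=20)
alpha = np.zeros(1000)

hot = select_for_accelerator(X, y, alpha, w, capacity=64)
print(len(hot))  # 64 indices chosen for the fast-memory block
```

As a sanity check on the gap formula, setting each dual variable to its optimum for fixed `w` (here `alpha = y - X @ w`) drives every per-example gap to zero, so already-converged examples drop out of the selected block.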