Consider a report released this week by the highly respected, nonpartisan Rand Corp. The report wasn't about Trump; indeed, he is mentioned only once in nearly 300 pages of text. But it suggests, ominously, that we are living in a period in which the line between fact and fiction is being dangerously muddied. Using examples like the large numbers of Americans who don't believe the scientific consensus on the safety of GMO foods or vaccines or the existence of human-caused climate change -- and focusing as well on the increasing distrust of formerly respected sources of factual information -- the report concludes that what it calls "truth decay" poses a direct threat to democracy. Among the consequences cited by the Rand scholars: the erosion of political and civil discourse, political paralysis at the federal and state level, and increased risk of individual disengagement from political and civic life.
We see so few popular science books on computer science, especially outside of crypto and theory. Pedro Domingos' The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, despite the hyped title and prologue, does a great job surveying the landscape of machine learning algorithms and placing them in a common context, from their philosophical underpinnings to the models they build on, all in a mostly non-technical way. Working out from the inner ring are the representations of the models, how we measure goodness, the main mechanism to optimize the model, and the philosophies that drove that model. In the bullseye sits the "Master Equation," or the Master Algorithm: one learning algorithm to rule them all. The quest for such an algorithm drives the book, and Domingos describes his own, admittedly limited, attempts toward reaching that goal.
As the Chief Technical Architect of the Shadow Robot Company, I spend a lot of time thinking about grasping things with our robots. This story is a quick dive into the world of grasp robustness prediction using machine learning. First of all, why focus on this, given that there are currently far more exciting projects applying deep learning to robotics? For example, the work done by Ken Goldberg and his team at UC Berkeley on DexNet is very impressive.
Robot grasping is often formulated as a learning problem. With the increasing speed and quality of physics simulations, generating large-scale grasping data sets that feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these data sets. In this paper, we review, classify, and compare different grasp sampling strategies. Our evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to evaluate the quality and robustness of the corresponding parallel-jaw grasps. Specifically, we consider more than 1 billion grasps for each of the 21 objects from the YCB data set. This dense data set lets us evaluate existing sampling schemes w.r.t. their bias and efficiency. Our experiments show that some popular sampling schemes contain significant bias and do not cover all possible ways an object can be grasped.
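To make the sampling question concrete, here is a minimal sketch of one unbiased baseline the abstract implicitly compares against: drawing grasp poses uniformly over a bounded region of SE(3), with a uniform random rotation (a normalized 4-D Gaussian is uniform on the unit-quaternion sphere) and a uniform translation. This is an illustrative sketch only, not the paper's sampler; the workspace bound and the quaternion-plus-translation parameterization of a parallel-jaw grasp are assumptions.

```python
import math
import random

def sample_grasp_pose(rng, half_extent=0.1):
    """Sample one grasp pose uniformly over a bounded region of SE(3).

    Returns (q, t): a unit quaternion q for the gripper orientation and
    a translation t drawn uniformly from a cube of the given half-extent
    around the object (both are illustrative assumptions).
    """
    # A 4-D Gaussian sample, normalized, is uniform on the unit sphere S^3,
    # which gives a uniform random rotation without orientation bias.
    q = [rng.gauss(0.0, 1.0) for _ in range(4)]
    norm = math.sqrt(sum(c * c for c in q))
    q = [c / norm for c in q]
    # Uniform translation inside a cube centered on the object.
    t = [rng.uniform(-half_extent, half_extent) for _ in range(3)]
    return q, t

rng = random.Random(0)
poses = [sample_grasp_pose(rng) for _ in range(1000)]
```

Popular heuristic samplers (e.g. antipodal or surface-normal-aligned schemes) restrict this space to raise the hit rate of force-closure grasps, which is exactly the bias-versus-efficiency trade-off the paper measures.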
For a robot to perform complex manipulation tasks, it must be able to grasp objects reliably. However, vision-based robotic grasp detection is hindered by the unavailability of sufficient labelled data. Furthermore, the application of semi-supervised learning techniques to grasp detection is under-explored. In this paper, we present a semi-supervised grasp detection approach that models a discrete latent space using a Vector Quantized Variational AutoEncoder (VQ-VAE). To the best of our knowledge, this is the first time a Variational AutoEncoder (VAE) has been applied to robotic grasp detection. Despite the limited amount of labelled data, the VAE helps the model generalize beyond the Cornell Grasping Dataset (CGD) by also exploiting the unlabelled data; we validate this claim by testing the model on images that do not appear in the CGD. In addition, we augment the Generative Grasping Convolutional Neural Network (GGCNN) architecture with the decoder structure used in the VQ-VAE model, with the intuition that it should aid regression in the vector-quantized latent space. The resulting model performs significantly better than existing approaches that do not use unlabelled images to improve grasp detection.
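The discrete bottleneck at the heart of a VQ-VAE is simple to state: each continuous encoder output is snapped to its nearest entry in a learned codebook, and the decoder only ever sees codebook vectors. A minimal sketch of that quantization step, under the assumption of squared-Euclidean nearest-neighbour lookup (the codebook contents and dimensions here are made up for illustration):

```python
def vector_quantize(z, codebook):
    """Map a continuous latent vector z to its nearest codebook entry,
    as in the VQ-VAE discrete bottleneck.

    Returns (index, entry) where index is the discrete code and entry is
    the quantized vector passed on to the decoder. Distance metric
    (squared Euclidean) follows the standard VQ-VAE formulation.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Nearest-neighbour search over the codebook rows.
    idx = min(range(len(codebook)), key=lambda k: sq_dist(z, codebook[k]))
    return idx, codebook[idx]

# Toy codebook with three 2-D entries (illustrative values only).
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
idx, entry = vector_quantize([0.9, 1.1], codebook)
```

In training, the argmin is non-differentiable, so gradients are passed straight through the bottleneck while the codebook is pulled toward the encoder outputs; that is what makes regression "in the vector-quantized latent space" well-posed.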