manipulation


Multiple helical magnetic soft robots carry us closer to understanding collective behaviors

#artificialintelligence

Magnetic soft robots are a promising option for contactless control in confined environments via external magnetic stimuli. Magneto-induced motions, i.e., magnetomotility, are driven by local deformation of a robot whereby particle alignments and alternating polar distributions are programmed into the body. Attempts to program magnetic anisotropy into soft robots have relied on digital light processing (DLP), stereolithography (SLA) and fused deposition modeling (FDM) combined with multi-axial manipulation of electromagnets. Now, researchers have demonstrated facile preparation and actuation methods for magnetic soft robots without electromagnetic regulation. They constructed a three-dimensional helical soft robot by twisting a two-dimensional polymer composite film.


Reusable neural skill embeddings for vision-guided whole body movement and object manipulation

#artificialintelligence

In both simulation and robotics, there is an ambition to produce flexible control systems that enable complex bodies to perform dynamic locomotion and natural object manipulation. In previous work, we developed a framework to train locomotor skills and reuse them for whole-body visuomotor tasks. Here, we extend this line of work to tasks involving whole-body movement as well as visually guided manipulation of objects. This setting poses novel challenges in terms of task specification, exploration, and generalization. We develop an integrated approach consisting of a flexible motor primitive module, demonstrations, an instructed training regime, and curricula in the form of task variations.


Adobe's Experimental New Features Promise a Future Where Nothing's Real

#artificialintelligence

Adobe Max 2019 wrapped up yesterday, and over the past week the company (and host John Mulaney) revealed a bunch of new automated capabilities it's currently developing for its various applications--both on desktop and mobile. These demos are always crowd-pleasers and tantalizing teases of how users might soon be able to further streamline their workflows. But in recent years these sneak peeks have also provided a look at how artificial intelligence promises to radically change all the digital tools we use: more often than not, Adobe's latest and greatest leverage the company's Sensei deep learning platform to pull off their seemingly magical feats. Not to be confused with the classic children's toy where plastic pegs were stabbed into a glowing board, Adobe LightRight might be the holy grail for photographers who incessantly tweak and adjust every aspect of their photos in apps like Adobe Lightroom. Using Adobe Sensei, LightRight can radically adjust the lighting in a photo after it has been taken, and not just in terms of overall exposure or brightness.


Machine Learning on Autonomous Database: A Practical Example

#artificialintelligence

The dataset used for building a network intrusion detection classifier is the classic KDD dataset, which you can download here; it was first released for the 1999 KDD Cup and contains 125,973 records in the training set. It was built for the DARPA Intrusion Detection Evaluation Program by MIT Lincoln Laboratory. The dataset comes already split into training and test sets. The training set contains 22 sub-classes of attacks, plus one "normal" class for allowed traffic. The list of attacks and their associations with the four categories reported above is held in this file.
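The mapping the file above describes can be sketched as a simple lookup: each of the 22 raw attack sub-class labels collapses into one of the four standard KDD Cup 99 categories (DoS, Probe, R2L, U2R). Only a handful of illustrative entries are shown here; the full mapping lives in the file mentioned above.

```python
# Partial sketch of the KDD Cup 99 attack-to-category mapping.
# The labels shown are real KDD sub-classes, but this dict is incomplete:
# the complete 22-entry mapping is in the file referenced in the article.
ATTACK_CATEGORY = {
    "smurf": "DoS",
    "neptune": "DoS",
    "ipsweep": "Probe",
    "portsweep": "Probe",
    "guess_passwd": "R2L",
    "buffer_overflow": "U2R",
}

def label_category(label: str) -> str:
    """Collapse a raw KDD label to its coarse category ('normal' stays as-is)."""
    if label == "normal":
        return "normal"
    return ATTACK_CATEGORY.get(label, "unknown")

print(label_category("smurf"))   # -> DoS
print(label_category("normal"))  # -> normal
```

Training the classifier on these four coarse categories instead of 22 sub-classes is a common simplification for this benchmark.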


How to use NVIDIA GPUs for Machine Learning with the new Data Science PC from Maingear

#artificialintelligence

Deep Learning enables us to perform many human-like tasks, but if you're a data scientist and you don't work in a FAANG company (or if you're not developing the next AI startup), chances are that you still use good old (ok, maybe not that old) Machine Learning to perform your daily tasks. One characteristic of Deep Learning is that it's very computationally intensive, so all the main DL libraries make use of GPUs to improve processing speed. But if you ever felt left out of the party because you don't work with Deep Learning, those days are over: with the RAPIDS suite of libraries, we can now run our data science and analytics pipelines entirely on GPUs. In this article we're going to talk about some of these RAPIDS libraries and get to know a little more about the new Data Science PC from Maingear. Generally speaking, GPUs are fast because they have high-bandwidth memory and hardware that performs floating-point arithmetic at significantly higher rates than conventional CPUs [1].
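The appeal of RAPIDS is that cuDF deliberately mirrors the pandas API, so an existing pipeline can move to the GPU with little more than an import change. A minimal sketch of that idea (the data and column names are made up; running on GPU assumes a CUDA-capable card with cudf installed, otherwise this falls back to plain pandas):

```python
# GPU-ready dataframe pipeline sketch: cuDF mirrors the pandas API,
# so the same groupby/aggregate code runs on either backend.
try:
    import cudf as xdf  # RAPIDS GPU dataframes (requires a CUDA-capable GPU)
except ImportError:
    import pandas as xdf  # CPU fallback with the same API

df = xdf.DataFrame({
    "sensor": ["a", "a", "b", "b"],      # hypothetical example data
    "reading": [1.0, 3.0, 2.0, 6.0],
})

# Identical call on pandas and cuDF; on cuDF it executes on the GPU
means = df.groupby("sensor").reading.mean()
print(means)
```

On large tables this is where the GPU's high memory bandwidth pays off; on toy data like this, CPU and GPU are indistinguishable.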


Modzy is a marketplace for enterprise AI software and models

#artificialintelligence

Companies are trying to find the most efficient ways to use machine learning models in their businesses. Modzy, a product of Booz Allen, is designed to make that process easier and safer for enterprises. Seth Clark, senior associate at Booz Allen, described Modzy in an interview with VentureBeat as a machine learning operationalization platform. More colloquially, it's an online store where you can find and acquire AI tools. "What we're trying to do is help organizations get AI capabilities out of the lab, off of someone's laptop, and into a production system," said Clark.


Big data means big opportunities for criminals - this is how to stop them

#artificialintelligence

As technology rapidly progresses, doomsday stories emerge just as quickly. Dissident voices, however, are trying to spread alternative messages. In the past century, increasingly efficient technology and the advancement of knowledge have addressed global challenges including global poverty, deaths from violent crime, childhood mortality, preventable diseases and human life expectancy - at a scale never seen before. For obvious reasons, such breakthroughs can be considered among humanity's greatest achievements; yet we also see how technology facilitates online disinformation, global cyberattacks and unprecedented terrorist media campaigns inspiring thousands of would-be terrorists. "Technology is a force that takes what was once scarce and makes it abundant," write Peter Diamandis and Steven Kotler in Abundance: The Future Is Better Than You Think.


Learning Data Manipulation for Augmentation and Weighting

arXiv.org Machine Learning

Manipulating data, such as weighting data examples or augmenting with new instances, has been increasingly used to improve model training. Previous work has studied various rule- or learning-based approaches designed for specific types of data manipulation. In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm. Our approach builds on a recent connection between supervised learning and reinforcement learning (RL), and adapts an off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training. Different parameterizations of the "data reward" function instantiate different manipulation schemes. We showcase data augmentation that learns a text transformation network, and data weighting that dynamically adapts the data sample importance. Experiments show the resulting algorithms significantly improve image and text classification performance in low-data regimes and on class-imbalance problems.
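To make the data-weighting idea concrete, here is a toy sketch in the spirit of the abstract, not the paper's actual algorithm: each training example's weight is set by how well its gradient aligns with the gradient on a small balanced validation set (a common learning-to-reweight heuristic standing in for the learned "data reward"). The model, data, and hyperparameters are all made up.

```python
# Toy data weighting for class imbalance: upweight training examples whose
# per-example gradient agrees with the gradient on a balanced validation set.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Imbalanced training set: label = sign of first feature, class 1 made rare
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(float)
keep = (y == 0) | (rng.random(len(y)) < 0.1)
X, y = X[keep], y[keep]

# Small balanced validation set drawn from the same labeling rule
Xv = rng.normal(size=(100, 2))
yv = (Xv[:, 0] > 0).astype(float)

w = np.zeros(2)  # logistic-regression parameters
for _ in range(200):
    # Per-example training gradients: (p_i - y_i) * x_i
    g_per_ex = (sigmoid(X @ w) - y)[:, None] * X
    # Mean gradient on the balanced validation set
    g_val = ((sigmoid(Xv @ w) - yv)[:, None] * Xv).mean(axis=0)
    # "Reward" = gradient alignment; clip negatives, normalize to a distribution
    align = g_per_ex @ g_val
    sw = np.maximum(align, 0.0)
    sw = sw / sw.sum() if sw.sum() > 0 else np.full(len(y), 1.0 / len(y))
    # Weighted gradient step
    w -= 1.0 * (sw[:, None] * g_per_ex).sum(axis=0)

acc = ((sigmoid(Xv @ w) > 0.5) == (yv == 1)).mean()
print(f"balanced validation accuracy: {acc:.2f}")
```

The paper's contribution is learning such weighting (and augmentation) schemes jointly with the model via a reward-learning algorithm, rather than hard-coding a heuristic like the one above.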


A robot hand taught itself to solve a Rubik's Cube after creating its own training regime

#artificialintelligence

Over a year ago, OpenAI, the San Francisco–based for-profit AI research lab, announced that it had trained a robotic hand to manipulate a cube with remarkable dexterity. That might not sound earth-shattering. But in the AI world, it was impressive for two reasons. First, the hand had taught itself how to fidget with the cube using a reinforcement-learning algorithm, a technique modeled on the way animals learn. Second, all the training had been done in simulation, but it managed to successfully translate to the real world.


10 Essential Data Science Packages for Python

#artificialintelligence

Pandas is a powerful and flexible data analysis library written in Python. In particular, I enjoy using it for its data structures, such as the DataFrame, its time-series manipulation and analysis tools, and its numerical data tables. Many business-side employees of large organizations and startups can easily pick up Pandas to perform analysis. Plus, it's fairly easy to learn, and it rivals competing libraries in terms of its data analysis features.
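The features mentioned above fit in a few lines: a DataFrame holding numerical data indexed by timestamps, resampled for time-series analysis (the column name and values here are invented for illustration).

```python
# Minimal pandas sketch: numerical DataFrame + time-series resampling.
import pandas as pd

idx = pd.to_datetime([
    "2019-11-01 09:00", "2019-11-01 17:00",
    "2019-11-02 09:00", "2019-11-02 17:00",
])
df = pd.DataFrame({"visits": [10, 20, 30, 40]}, index=idx)

# Aggregate the intraday readings into daily totals
daily = df.resample("D").sum()
print(daily)
```

`resample` is the kind of one-liner that makes pandas approachable for the business-side users the article mentions: the same operation in raw Python would take a loop and a bucketing step.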