Machine Learning

Google's neural network is a multi-tasking pro


Trying to train a neural network on an additional task usually makes it much worse at its first one. Google's multi-tasking machine learning system, called MultiModel, learned to detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and parse grammar and syntax. In a blog post the company said, "It is not only possible to achieve good performance while training jointly on multiple tasks, but on tasks with limited quantities of data, the performance actually improves. To our surprise, this happens even if the tasks come from different domains that would appear to have little in common, e.g., an image recognition task can improve performance on a language task."
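The core idea, that a data-poor task can borrow strength from a data-rich one through shared parameters, can be illustrated with a toy example. This is a minimal sketch, not Google's MultiModel: two fake regression tasks share one "representation" parameter (a common scale factor) while each keeps its own task-specific bias, and all values and names here are hypothetical.

```python
# Toy multi-task learning sketch: two tasks share one parameter (w),
# each with its own head (b_a, b_b). Entirely illustrative.
import random

random.seed(0)

TRUE_SCALE = 3.0  # shared structure both tasks depend on

def make_data(n, bias):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, TRUE_SCALE * x + bias) for x in xs]

task_a = make_data(40, -1.0)  # data-rich task
task_b = make_data(4, 0.5)    # data-poor task

# Jointly fit: shared scale w, per-task biases b_a, b_b.
w, b_a, b_b = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(500):
    for x, y in task_a:
        err = w * x + b_a - y
        w -= lr * err * x
        b_a -= lr * err
    for x, y in task_b:
        err = w * x + b_b - y
        w -= lr * err * x
        b_b -= lr * err

# The data-poor task b gets an accurate shared scale, learned mostly
# from the data-rich task a, which it could not estimate well alone.
print(round(w, 2), round(b_a, 2), round(b_b, 2))
```

With only four examples of its own, task b would struggle to recover the scale factor, but joint training hands it one for free, which mirrors the blog post's observation about tasks with limited data.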

An algorithm customizes exoskeletons to fit a person's needs


A powered exoskeleton could change the lives of people who have mobility issues, whether due to age, injury or disease, but adapting one to an individual human is a difficult and time-consuming process. Rather than calibrate the device once and use the same settings on every participant, the researchers had each participant walk on a treadmill while the algorithm tuned the exoskeleton's assistance to that person. Not only is this genetic algorithm important for creating exoskeletons that can fit a wider range of people, it also hints that we may be able to create more complex assistive devices.
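The search loop behind this kind of per-person tuning can be sketched as a basic genetic algorithm. This is a hypothetical illustration, not the researchers' method: the `metabolic_cost` function below is a stand-in for the real measured cost of walking, and the single `torque` parameter, population size, and mutation scale are all invented for the example.

```python
# Minimal genetic-algorithm sketch: evolve an assistance parameter
# toward the value that minimizes a stand-in "metabolic cost".
import random

random.seed(1)

def metabolic_cost(torque):
    # Hypothetical cost curve: lowest when torque is near 0.6.
    return (torque - 0.6) ** 2

def evolve(generations=30, pop_size=12):
    # Start with a random population of candidate settings.
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=metabolic_cost)
        parents = pop[: pop_size // 2]       # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2               # crossover: blend parents
            child += random.gauss(0, 0.05)    # mutation: small jitter
            children.append(min(max(child, 0.0), 1.0))
        pop = parents + children
    return min(pop, key=metabolic_cost)

best = evolve()
print(round(best, 2))  # settles near the cost minimum at 0.6
```

In the real setting, evaluating a candidate means having the person walk with those settings and measuring the result, which is why the calibration is time-consuming and why automating the search matters.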