Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. If you thought the "vision" video was a bit much, the capabilities video is less singing and more interesting: Sebastian Thrun's self-flying car company Cora has a passenger drone that may be somewhat less terrible than most other passenger drones: I like Cora primarily because it has wings that can generate lift even if the electrical systems or software systems fail. I'm still not sold on the idea of you not needing a pilot's license to be in one, but this design does seem significantly more survivable than most.
If you're in the market to buy fresh papayas, it can be a challenge to figure out ripeness based on peel color without also squeezing the fruit to test for softness. A Brazilian research group could make life easier for both shoppers and producers in the near future with a computer vision algorithm that estimates ripeness based on images alone. Last year, the United States, the world's largest import market for fresh papayas, imported more than US $107 million worth of the fruit. The computer vision software could enable papaya growers to maximize the value of their fruit by sending the ripest papayas to local markets and saving less ripe papayas for export, says Douglas Fernandes Barbin, a researcher in the department of food engineering at the University of Campinas in São Paulo, Brazil. But he and his colleagues also want to help individual shoppers get their money's worth in grocery aisles.
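The core idea, classifying ripeness from peel color alone, can be sketched in a few lines. This is a toy illustration, not the Brazilian group's method: the real system uses a trained computer vision model, while the color thresholds below are invented for demonstration.

```python
# Toy sketch of color-based ripeness classification. The threshold values
# here are illustrative guesses, not published numbers from the study.

def ripeness_from_pixels(pixels):
    """pixels: list of (r, g, b) tuples sampled from the papaya peel.

    Returns a coarse ripeness label based on how yellow the peel is.
    """
    if not pixels:
        raise ValueError("need at least one pixel")
    # A ripening papaya shifts from green (g >> r) toward yellow
    # (r and g both high, b low), so we count "yellow" pixels.
    yellow = sum(1 for r, g, b in pixels if r > 150 and g > 150 and b < 120)
    yellow_fraction = yellow / len(pixels)
    if yellow_fraction > 0.6:
        return "ripe"
    if yellow_fraction > 0.2:
        return "partially ripe"
    return "unripe"

green_peel = [(60, 160, 70)] * 10    # mostly green pixels
yellow_peel = [(220, 200, 80)] * 10  # mostly yellow pixels
print(ripeness_from_pixels(green_peel))   # unripe
print(ripeness_from_pixels(yellow_peel))  # ripe
```

A production system would of course learn these decision boundaries from labeled images rather than hard-code them, which is precisely what makes the researchers' approach more reliable than eyeballing the fruit.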
Kids like to touch things. Kids like to whack things. This is usually fine when the thing is a toy, but it can be a problem when the thing is a robot. We've written about children beating robots up before, and it seems like it's an inevitability when kids (or even some adults) meet a robot for the first time: They want to see what it can do and how it reacts to things, and that can result in some behaviors and interactions that would be pretty upsetting if they were targeted at something alive. That is to say, sometimes kids are abusive towards robots, especially when there aren't any consequences to the things that they do.
Seven years ago, iRobot introduced us to Ava, a sort of tech demonstrator designed to show how robots were capable of doing things like--well, the company wasn't entirely sure, but telepresence was one of the ideas. The robot's autonomous navigation was certainly impressive, and Ava could avoid moving obstacles at the speed of a brisk walk, which wasn't something we'd seen a lot of back in 2011. In 2012, iRobot announced RP-VITA, a medical telepresence robot based on the Ava platform. And in 2013, iRobot and Cisco collaborated on the Ava 500, a commercial telepresence system with integrated autonomous navigation. We haven't heard too much about it since then, but that's because iRobot had a secret plan for Ava: Today, iRobot is announcing that it has spun Ava off into its own company, called Ava Robotics, which is (re?)launching the Ava platform as a "new video collaboration solution that offers users 'practical teleportation' with the ability to transform remote work and site visits."
The deep neural networks that power today's artificial intelligence systems work in mysterious ways. They're black boxes: A question goes in ("Is this a photo of a cat?" "What's the best next move in this game of Go?" "Should this self-driving car accelerate at this yellow light?"), and an answer comes out the other side. We may not know exactly how a black box AI system works, but we know that it does work. But a new study that mapped a neural network to the components within a simple yeast cell allowed researchers to watch the AI system at work. And it gave them insights into cell biology in the process.
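The contrast between a black box and a "visible" network can be sketched concretely. In the style of the yeast study, the network's hidden units are constrained to correspond to named cellular subsystems, so their activations can be inspected directly. Everything below, including the subsystem names, input wiring, and weights, is invented for illustration and is not the study's actual model.

```python
# Illustrative sketch of a "visible" neural network: each hidden unit is
# tied to a named biological subsystem, so we can watch which subsystem
# drives the prediction. All names and weights here are made up.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each named subsystem reads from specific gene inputs (by index),
# mirroring a known hierarchy instead of an arbitrary hidden layer.
SUBSYSTEMS = {
    "DNA repair": {"inputs": [0, 1], "weights": [1.2, 0.8]},
    "ribosome":   {"inputs": [2, 3], "weights": [0.9, 1.1]},
}

def forward(gene_states):
    """gene_states: list of 0/1 gene perturbation indicators.

    Returns per-subsystem activations plus a growth prediction, so the
    intermediate computation is interpretable rather than opaque.
    """
    activations = {}
    for name, spec in SUBSYSTEMS.items():
        z = sum(w * gene_states[i]
                for i, w in zip(spec["inputs"], spec["weights"]))
        activations[name] = sigmoid(z)
    growth = sigmoid(sum(activations.values()) - 1.0)
    return activations, growth

acts, growth = forward([1, 0, 0, 1])
for name, a in acts.items():
    print(f"{name}: {a:.2f}")
print(f"predicted growth: {growth:.2f}")
```

In a black-box network the hidden units would have no such labels; the structural constraint is what lets researchers read biology off the activations.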
There are any number of robotics development platforms out there, and we've written about most of them--TurtleBots, iRobot Creates, and more recently robots like Misty. Generally, these platforms are intended to be used for experimenting with sensors and software, or for more socially-oriented applications that don't involve much in the way of lifting or moving stuff. A Silicon Valley startup called Ubiquity Robotics believes that there's an opportunity here, and they're crowdfunding a robot called Magni that's specifically designed to handle large payloads for long durations. It comes with sensing and computing out of the box, and Ubiquity hopes it'll enable hobbyists to create a new generation of practical robotic solutions. Here's what you get with Magni: In addition, Ubiquity is offering Loki, a small and more or less affordable learning platform that you can use to develop applications for Magni.
In the years to come, what will be the biggest improvement in AI-powered digital assistants? It's likely to be the ability to accommodate a fundamental aspect of being human: The fact that we all have different personas, we show different facets of ourselves depending on where we are and who we are with, and our personas change over time. And different personas want different things from their AI assistants. Assistants that can understand your personal circumstances are less likely to remind you to pick up your rash prescription as you drive by the pharmacy if there are other people in the car, bug you about work email at home, or keep suggesting fun nightclubs if you've just had a baby. That was the message from Sunday's panel on "Designing the Next Wave of Natural Language and AI" at the SXSW festival in Austin, Texas.
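The panel's examples amount to gating each reminder on the user's current context. A minimal sketch of that idea, with rules and field names invented here purely for illustration (this is not any real assistant's API):

```python
# Hypothetical sketch of context-aware reminder gating, as described by
# the panelists: the same reminder is or isn't delivered depending on
# who is present and where the user is. Rules and fields are invented.

def should_deliver(reminder, context):
    """Return True if the reminder suits the user's current context."""
    if reminder["private"] and context["passengers"] > 0:
        return False  # don't announce a prescription with others in the car
    if reminder["topic"] == "work" and context["location"] == "home":
        return False  # keep work email out of home life
    return True

rx = {"text": "Pick up rash prescription", "private": True, "topic": "health"}
print(should_deliver(rx, {"passengers": 2, "location": "car"}))  # False
print(should_deliver(rx, {"passengers": 0, "location": "car"}))  # True
```

A real assistant would infer context from sensors and calendars rather than receive it as a dictionary, but the persona-sensitive gating logic is the same in spirit.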
We were at the 2018 Human Robot Interaction conference all this week, and on Wednesday, there was a special video session. The audience, who was provided with popcorn, voted by applause, and here are the top three videos.
It's easy for us to forget that the vast majority of the world doesn't really care about (or even know about) robots. With that in mind, it's understandable why most roboticists consider robots operating "in the wild" to be "anywhere that isn't the controlled environment of my lab." But there are "real world" environments, and then there's the actual wild, and we almost never hear about research happening there. This is too bad, because we don't have nearly enough appreciation for how robots can potentially be used to mitigate problems throughout the developing world. There's also very little research into how different cultures react to robots with a social component--most human-robot interaction (HRI) studies rely on local participants who are easy (and cheap) to recruit, and are consequently full of students, which is a terrible representation of most of the rest of the world.
Kids are not well known for their conflict resolution skills. That's part of being a kid, I guess, but they've got to learn these skills at some point, or they turn into teens without conflict resolution skills. And then you end up with adults who only know how to solve problems by throwing tantrums of one sort or another: We've all met people like that. It would be great if there were a way to teach children how to handle disagreements equitably, and there is: It's called teachers (or adults in general). But having adults around all the time gets expensive.