I noticed that bulk unsorted Lego sells for roughly €10 per kilogram (about US $11/kg), boxed sets go for €40/kg, and collections of rare parts and Lego Technic pieces (the sort used to build complex mechanical creations) go for hundreds of euros per kilo. I started building my neural net system in earnest.

[Figure caption: A camera captures images of individual pieces [second from top] and a neural net identifies them [second from bottom]. Air puffs from valves [bottom] knock sorted pieces into bins.]
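The camera-to-valve flow described above can be sketched as a small routing loop. This is a hedged, illustrative stand-in, not the author's actual implementation: the part names, valve indices, and the stub classifier are all hypothetical, with the real neural net swapped out for a placeholder.

```python
# Hypothetical sketch of the camera -> classifier -> air-valve pipeline.
# BIN_FOR_CLASS, the class names, and classify() are illustrative
# assumptions, not the system described in the article.

BIN_FOR_CLASS = {"brick_2x4": 0, "technic_beam": 1, "unknown": 2}

def classify(image):
    """Stand-in for the neural net: map a piece image to a part class.
    Here we fake it by reading a label planted in the test image."""
    return image.get("label", "unknown")

def route_piece(image):
    """Return which air valve should fire to knock this piece into its bin."""
    part = classify(image)
    return BIN_FOR_CLASS.get(part, BIN_FOR_CLASS["unknown"])

if __name__ == "__main__":
    conveyor = [{"label": "brick_2x4"}, {"label": "technic_beam"}, {}]
    print([route_piece(img) for img in conveyor])  # valve index per piece
```

In a real build, `classify` would run inference on a camera frame and the returned index would trigger a hardware valve driver.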
We propose a new methodology that enables the first demonstration of high-resolution 3D through-wall imaging of completely unknown areas, using only WiFi signals and unmanned aerial vehicles. From this point cloud, the garment is segmented and a custom Wrinkleness Local Descriptor (WiLD) is computed to determine the location of the present wrinkles. If you haven't seen this 1911 film featuring a humanoid robot driving a car, you've been missing out on learning about all of the potential self-driving car catastrophes that could happen to you: Did you know that if you let a robot drive your car, you could end up in space?
But machine learning systems, looking at that data, can tell something else about your home besides its energy use--they can tell if you are home or not. In a recent paper, Jin and his colleagues demonstrated that machine learning systems can be trained to detect occupancy without any initial information from a homeowner. Using this assumption, the algorithms were able to tease out more detailed characteristics of power consumption when a home is occupied, and can then tell whether someone is home even when that person's patterns are outside the norm. "Right now, meters are sending accurate information about energy consumption."
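The idea of inferring occupancy from meter data can be sketched with a toy classifier. The variance-threshold rule below is an illustrative assumption standing in for the paper's actual models (which are not detailed here): occupied homes tend to show spiky consumption as appliances cycle on and off, while empty homes are flat.

```python
# Hedged toy of occupancy detection from smart-meter readings (watts).
# The stdev-threshold rule is an illustrative stand-in for the paper's
# machine learning models, whose features are not given in the article.
from statistics import pstdev

def features(window):
    """Summarize one window of power readings."""
    return {"mean": sum(window) / len(window), "stdev": pstdev(window)}

def fit_threshold(labeled_windows):
    """Pick a stdev threshold separating occupied from empty windows."""
    occ = [features(w)["stdev"] for w, home in labeled_windows if home]
    away = [features(w)["stdev"] for w, home in labeled_windows if not home]
    return (min(occ) + max(away)) / 2

def is_home(window, threshold):
    # Occupied homes show spiky consumption; empty homes stay flat.
    return features(window)["stdev"] > threshold

train = [([200, 900, 300, 1200], True), ([150, 160, 155, 150], False)]
t = fit_threshold(train)
print(is_home([250, 1100, 400, 180], t))  # True: spiky -> occupied
```

A production system would replace the single threshold with a trained model over many such windowed features.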
That's Intel's message as it prepares to bring technology to the 2018 Winter Olympics and future Olympic Games. Intel was far more specific about the technologies it plans to deploy for the Olympics--drones, virtual reality, 5G communications, and artificial intelligence. Drones, according to Intel CEO Brian Krzanich, will have two roles at the Olympics--entertainment, in the form of light shows, and carrying cameras for broadcast and other purposes. And going beyond the company's current eight-year sponsorship deal, Krzanich said, Intel expects to have new technologies that are Olympic contenders, including more AI, deeper virtual experiences, and new roles for drones and autonomous vehicles.
If we succeed in abstracting away the detailed molecular reactions while retaining a detailed, cellular-level resolution, human brain simulation comes much closer. If we could go further still and bypass billions of years of iterations in biological design, leaving aside all the detailed biological reactions and mimicking just the input/output transfer functions of the human brain in some kind of deep-learning network, we might achieve brainlike capabilities even earlier. Only now are humans realizing that the human brain, as an organ belonging to an individual, already has superhuman capabilities: Every human brain embodies layers upon layers of knowledge and experience developed by all the other brains, present and past, that have contributed to building our societies, cultures, and physical environment. Furthermore, in the same way that the human brain embodies the physical and cultural world, it also embodies the technologies we create, including any artificial brains we may create.
Rats are nimble navigators, able to find their way around, under, and over obstacles, and through the tightest spaces. Roboticists have long dreamed of giving their creations similar navigation skills. At the Queensland University of Technology, in Brisbane, Australia, Michael Milford and his collaborators have spent the past 14 years honing a robot navigation system modeled on the brains of rats. This biologically inspired approach, they hope, could help robots navigate dynamic environments without requiring advanced, costly sensors and computationally intensive algorithms.
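One core idea in rat-inspired navigation systems like Milford's RatSLAM is the "pose cell": the robot's heading is kept as a bump of activity on a ring of neuron-like cells, shifted along the ring by odometry. The sketch below is a heavily simplified toy of that idea; the cell count, update rule, and readout are illustrative assumptions, not Milford's implementation.

```python
# Toy sketch of a pose-cell ring: heading is stored as an activity bump
# on N cells and shifted by odometry (path integration). Parameters are
# illustrative, not the RatSLAM system described in the article.
N = 36  # one cell per 10 degrees of heading

def init_bump(center):
    """Start with all activity concentrated on one cell."""
    return [1.0 if i == center else 0.0 for i in range(N)]

def rotate(activity, cells):
    """Path integration: shift the activity bump by `cells` positions."""
    return [activity[(i - cells) % N] for i in range(N)]

def heading(activity):
    """Read out the heading (degrees) as the most active cell."""
    return max(range(N), key=lambda i: activity[i]) * (360 // N)

a = init_bump(0)   # robot starts facing 0 degrees
a = rotate(a, 9)   # odometry reports a 90-degree turn
print(heading(a))  # 90
```

Real pose-cell networks use attractor dynamics in 3D (x, y, heading) and correct odometry drift with visual landmarks; this ring captures only the path-integration step.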
Rosa recently took steps to scale up research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. In some rounds, participants will be tasked with designing algorithms and programming AI agents. The Challenge kicked off on 15 February with a six-month "warm-up" round dedicated to building gradually learning AI agents. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for developers.
Last month, we wrote about ETH Zurich's Omnicopter, a flying cube with rotors providing thrust in many different directions, allowing the drone to translate and rotate arbitrarily. A team of undergrads at ETH Zurich has taken the idea behind the Omnicopter and designed an even more versatile flying robot. Voliro is part of a focus project at ETH Zurich's Autonomous Systems Lab that's intended to give students in the last year of their undergraduate degrees "the opportunity to design a complete system from scratch," which seems like a fantastic way of making the transition into graduate school with some practical robotics experience. It probably won't shock you to learn that VertiGo was also a focus project from ETH Zurich's Autonomous Systems Lab, in partnership with Disney Research.
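The reason rotors pointing in many directions let a drone translate and rotate arbitrarily is a thrust-allocation problem: given a desired net force, solve for the individual rotor thrusts whose directional contributions sum to it. The two-rotor, 2-D version below (solved with Cramer's rule) is a hedged toy of that idea; the real vehicles solve a larger 3-D force-and-torque allocation over all rotors.

```python
# Hedged toy of omnidirectional thrust allocation: find thrusts t1, t2
# such that t1*d1 + t2*d2 equals the desired 2-D force. The real
# Omnicopter/Voliro allocation is a 3-D force-and-torque problem.

def allocate(d1, d2, force):
    """Solve t1*d1 + t2*d2 == force for fixed rotor directions d1, d2
    (2-D vectors) via Cramer's rule; directions must not be parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    t1 = (force[0] * d2[1] - force[1] * d2[0]) / det
    t2 = (d1[0] * force[1] - d1[1] * force[0]) / det
    return t1, t2

d1, d2 = (1.0, 0.0), (0.0, 1.0)   # rotors thrusting along x and y
print(allocate(d1, d2, (3.0, 4.0)))  # (3.0, 4.0)
```

With more rotors than degrees of freedom, the same problem is typically solved with a pseudo-inverse or a constrained least-squares step, which also lets the controller respect thrust limits.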
I suppose you could decide that this project from MIT's Tangible Media Group isn't really a robot, but I think it's arguably robotic enough (and definitely cool enough) that we can let it slide for this week: "We present AnimaStage: a hands-on animated craft platform based on an actuated stage." At the end of every semester, UC Berkeley has a design showcase in Jacobs Hall.

[Photo caption: My modified Racing Roomba takes on the obstacle course at UC Berkeley's annual student vehicle challenge.]

If so, they didn't put it on this table: Two modules of the EJBot propeller-type climbing robot, which uses a hybrid actuation system.
Then it moves on to other suspicious spots inside the stomach--jab, jab, jab! So they mounted a robot arm onto a mobility scooter and equipped both the scooter and the arm with depth cameras similar to the Microsoft Kinect sensor used with the Xbox. When the user aims a laser beam at the object she wants, the robot arm moves to that object, the camera scans it, and the team's grasp detection algorithm determines how the arm should maneuver to pick it up. To help, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a guiding system based on vibration feedback.
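One concrete step in the laser-pointing interface described above is locating the laser dot in a camera frame. The sketch below does this by taking the centroid of strongly red pixels; the threshold, the color test, and the image format are all assumptions for illustration, since the article does not describe the team's actual detection pipeline.

```python
# Illustrative laser-dot detector: centroid of strongly red pixels.
# The threshold and red-dominance test are assumptions, not the
# research team's actual method.

def find_laser_dot(frame, threshold=200):
    """frame: 2-D grid of (r, g, b) pixels. Return the (row, col)
    centroid of pixels whose red channel dominates, or None."""
    hits = [(r, c) for r, row in enumerate(frame)
                   for c, (red, grn, blu) in enumerate(row)
                   if red >= threshold and red > 2 * max(grn, blu)]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

dark = (10, 10, 10)
dot = (255, 20, 20)
frame = [[dark, dark, dark],
         [dark, dot,  dot ],
         [dark, dark, dark]]
print(find_laser_dot(frame))  # (1.0, 1.5)
```

In the full system, the detected pixel would be projected through the depth camera's calibration into a 3-D target point for the arm's grasp planner.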