Will AI Take Your Job? Maybe Not Just Yet, One Study Says
Will artificial intelligence take our jobs? If you listen to Silicon Valley executives talking about the capabilities of today's cutting-edge AI systems, you might think the answer is "yes, and soon." But a new paper published by MIT researchers suggests automation in the workforce might happen more slowly than you think. The researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) studied not only whether AI was able to perform a task, but also whether it made economic sense for firms to replace the humans performing those tasks in the wider context of the labor market. They found that while computer vision AI is today capable of automating tasks that account for 1.6% of worker wages in the U.S. economy (excluding agriculture), only 23% of those wages (0.4% of the economy as a whole) would, at today's costs, be cheaper for firms to automate than to pay human workers.
- Banking & Finance > Economy (0.71)
- Information Technology (0.56)
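The two percentages in the study multiply together to give the economy-wide figure. A quick arithmetic check (variable names are illustrative, not from the paper):

```python
# Check the study's headline figures as reported in the article.
vision_exposed_share = 0.016    # 1.6% of non-farm worker wages are in tasks
                                # that computer vision could technically automate
cost_effective_fraction = 0.23  # 23% of those wages are cheaper to automate
                                # than to pay humans, at today's costs

economy_wide_share = vision_exposed_share * cost_effective_fraction
print(f"{economy_wide_share:.2%}")  # 0.37%, which the article rounds to 0.4%
```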
New technique helps robots pack objects into a tight space
MIT researchers are using generative AI models to help robots more efficiently solve complex object manipulation problems, such as packing a box with different objects. Anyone who has ever tried to pack a family-sized amount of luggage into a sedan-sized trunk knows this is a hard problem. For the robot, solving the packing problem involves satisfying many constraints, such as stacking the luggage so suitcases don't topple out of the trunk, ensuring heavy objects aren't placed on top of lighter ones, and avoiding collisions between the robotic arm and the car's bumper. Some traditional methods tackle this problem sequentially, guessing a partial solution that meets one constraint at a time and then checking to see if any other constraints were violated. With a long sequence of actions to take, and a pile of luggage to pack, this process can be impractically time-consuming.
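The guess-and-check approach described above can be sketched in miniature. This is a toy 1-D stacking example with a single weight constraint; the function names and the brute-force strategy are illustrative, not the MIT method, which handles 3-D geometry and robot-arm kinematics:

```python
import random

def violates_constraints(stack):
    # Constraint: a heavier item must never rest on a lighter one.
    # stack is ordered bottom-to-top, so stack[i] sits below stack[i + 1].
    return any(stack[i] < stack[i + 1] for i in range(len(stack) - 1))

def pack_sequentially(weights, max_tries=10_000):
    # Guess an ordering, then check whether any constraint was violated;
    # repeat until a guess passes or we give up.
    for _ in range(max_tries):
        guess = random.sample(weights, len(weights))
        if not violates_constraints(guess):
            return guess  # valid bottom-to-top ordering
    return None  # with many items and constraints, this becomes impractical

print(pack_sequentially([3, 9, 1, 5]))  # [9, 5, 3, 1]
```

With four distinct weights there are only 24 orderings and one valid answer, so random guessing works; the article's point is that this blows up as the number of items and constraints grows.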
AI helps robots manipulate objects with their whole bodies
MIT researchers developed an AI technique that enables a robot to develop complex plans for manipulating an object using its entire hand, not just the fingertips. This model can generate effective plans in about a minute using a standard laptop. Here, a robot attempts to rotate a bucket 180 degrees. Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.
A step toward safe and reliable autopilots for flying
MIT researchers developed a machine-learning technique that can autonomously drive a car or fly a plane through a very difficult "stabilize-avoid" scenario, in which the vehicle must stabilize its trajectory to arrive at and stay within some goal region, while avoiding obstacles. In the film "Top Gun: Maverick," Maverick, played by Tom Cruise, is charged with training young pilots to complete a seemingly impossible mission -- to fly their jets deep into a rocky canyon, staying so low to the ground they cannot be detected by radar, then rapidly climb out of the canyon at an extreme angle, avoiding the rock walls. Spoiler alert: With Maverick's help, these human pilots accomplish their mission. A machine, on the other hand, would struggle to complete the same pulse-pounding task. To an autonomous aircraft, for instance, the most straightforward path toward the target is in conflict with what the machine needs to do to avoid colliding with the canyon walls or to stay undetected.
- Media > Film (0.55)
- Leisure & Entertainment (0.55)
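The "stabilize-avoid" specification itself can be stated precisely: a trajectory must never enter the obstacle region, and once it reaches the goal region it must stay there. A minimal sketch, checking that condition on a 1-D trajectory (the intervals and names are hypothetical, chosen only to illustrate the specification, not the MIT controller):

```python
def satisfies_stabilize_avoid(traj, goal=(4.0, 6.0), obstacle=(2.0, 3.0)):
    # "Avoid": the state must never enter the obstacle interval.
    if any(obstacle[0] <= x <= obstacle[1] for x in traj):
        return False
    # "Stabilize": the state must reach the goal interval...
    in_goal = [goal[0] <= x <= goal[1] for x in traj]
    if True not in in_goal:
        return False
    # ...and remain inside it from the first entry onward.
    first_entry = in_goal.index(True)
    return all(in_goal[first_entry:])

print(satisfies_stabilize_avoid([0.0, 1.0, 3.5, 5.0, 5.5, 5.2]))  # True
print(satisfies_stabilize_avoid([0.0, 2.5, 5.0, 5.5]))            # False: hits obstacle
```

The hard part, which the MIT technique addresses, is not checking this condition but synthesizing a controller whose trajectories satisfy it, since the shortest path to the goal often cuts through the region to be avoided.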
Researchers create a tool for accurately simulating complex systems
Researchers often use simulations when designing new algorithms, since testing ideas in the real world can be both costly and risky. But since it's impossible to capture every detail of a complex system in a simulation, they typically collect a small amount of real data that they replay while simulating the components they want to study. Known as trace-driven simulation (the small pieces of real data are called traces), this method sometimes results in biased outcomes. This means researchers might unknowingly choose an algorithm that is not the best one they evaluated, and which will perform worse on real data than the simulation predicted it would. MIT researchers have developed a new method that eliminates this source of bias in trace-driven simulation.
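A minimal sketch of what trace-driven simulation looks like in practice (the trace values, policy, and function names are all hypothetical, for illustration only):

```python
# Replay a recorded trace of round-trip delays while simulating only the
# component under study -- here, a toy timeout policy. The bias the article
# describes arises because the trace was recorded under one policy: a real
# system's delays would react to a different policy, but a replayed trace
# cannot, so the comparison between policies can be skewed.

trace_ms = [12, 250, 18, 300, 15]  # recorded round-trip delays (hypothetical)

def simulate(timeout_ms, trace):
    # Count how many requests would have timed out under this policy.
    return sum(1 for delay in trace if delay > timeout_ms)

for timeout in (100, 280):
    print(f"timeout={timeout}ms -> {simulate(timeout, trace_ms)} timeouts")
# timeout=100ms -> 2 timeouts
# timeout=280ms -> 1 timeout
```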
Robotic hand can identify objects with just one grasp
MIT researchers developed a soft-rigid robotic finger that incorporates powerful sensors along its entire length, enabling them to produce a robotic hand that could accurately identify objects after only one grasp. Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time. Many robotic hands pack all their powerful sensors into the fingertips, so an object must be in full contact with those fingertips to be identified, which can take multiple grasps. Other designs use lower-resolution sensors spread along the entire finger, but these don't capture as much detail, so multiple regrasps are often required. Instead, the MIT team built a robotic finger with a rigid skeleton encased in a soft outer layer that has multiple high-resolution sensors incorporated under its transparent "skin."
MIT researchers are one step closer to perfecting self-repairing robot bees
"Hated in the Nation," an episode of Netflix's dystopian sci-fi series "Black Mirror," predicted it: Thousands of robotic bees buzz from flower to flower, pollinating plants to make up for declining insect populations. And while the episode's robots eventually turn against their human inventors, killing over 387,000 people by ramming their artificial stingers into victims' heads, the MIT scientists working on perfecting today's aerial robots likely believe we don't need to worry about that. Despite the show's foreboding take on robotic bees, researchers at the Massachusetts Institute of Technology are one step closer to perfecting the artificial aerial critters. In a paper published March 15, a group of researchers at MIT showed that using resilient muscle-like actuators and self-repairing technology can vastly improve the robustness of robotic bees. "Insects flying are incredibly difficult to understand," said Kevin Chen, an assistant professor at MIT, head of the institute's Soft and Micro Robotics Laboratory, and the senior author of the paper.
Resilient bug-sized robots keep flying even after wing damage
MIT researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly. Aerial robots, on the other hand, are not so resilient. Poke holes in the robot's wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.
- Materials (0.36)
- Transportation > Air (0.30)