The way whales communicate is closer to human language than we realized
A team of researchers led by Pratyusha Sharma at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and identified a structure in the animals' communication that resembles features of the complex vocalizations humans use. The findings give future research a tool for deciphering not just the structure but the actual meaning of whale sounds. The team analyzed recordings of 8,719 codas from around 60 whales, collected by the Dominica Sperm Whale Project between 2005 and 2018, using a mix of pattern-recognition and classification algorithms. They found that the way the whales communicate is neither random nor simplistic, but structured according to the context of their conversations, which allowed the researchers to identify distinct vocalizations that had not previously been picked up on.
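As a hedged illustration of the kind of feature extraction such an analysis could start from (the click times and feature names below are invented for illustration, not Project CETI's actual pipeline): a coda is a short burst of clicks, and it can be summarized by how many clicks it contains, how long it lasts, and the relative spacing of its clicks.

```python
def coda_features(click_times):
    """Compute simple duration/rhythm features for one coda.

    A coda is a short sequence of clicks; its character is carried by
    the inter-click intervals (ICIs) rather than the clicks themselves.
    """
    icis = [b - a for a, b in zip(click_times, click_times[1:])]
    duration = click_times[-1] - click_times[0]
    rhythm = tuple(round(ici / duration, 2) for ici in icis)  # normalized ICIs
    return {"clicks": len(click_times), "tempo": duration, "rhythm": rhythm}

# Two invented codas with the same rhythm produced at different tempos
slow = coda_features([0.0, 0.2, 0.4, 0.8])
fast = coda_features([0.0, 0.1, 0.2, 0.4])
assert slow["rhythm"] == fast["rhythm"]  # same relative click timing
```

Grouping thousands of codas by features like these is one way structured, context-dependent patterns could surface from recordings.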
Using machine learning to discover stiff and tough microstructures
A new computational pipeline developed over three years efficiently identifies stiff and tough microstructures suitable for 3D printing in a wide range of engineering applications. The approach greatly reduces the development time for high-performance microstructure composites and requires minimal materials science expertise. Every time you smoothly drive from point A to point B, you're not just enjoying the convenience of your car, but also the sophisticated engineering that makes it safe and reliable. Beyond its comfort and protective features lies a lesser-known yet crucial aspect: the expertly optimized mechanical performance of microstructured materials. These materials, integral yet often unacknowledged, are what fortify your vehicle, ensuring durability and strength on every journey. Luckily, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists have thought about this for you.
Drones navigate unseen environments with liquid neural networks
Makram Chahine, a PhD student in electrical engineering and computer science and an MIT CSAIL affiliate, leads a drone used to test liquid neural networks. In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures but avian-inspired marvels of deliberate innovation: drones that soar through the sky, guided by liquid neural networks to navigate ever-changing, unseen environments with precision and ease. Inspired by the adaptable nature of organic brains, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight-navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments.
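The "liquid" in liquid neural networks refers to neurons whose effective time constants change with their inputs, which is what lets these networks keep adapting after training. A minimal single-neuron sketch in the spirit of liquid time-constant networks follows; the fused-Euler discretization is a standard way to step such a neuron, but the parameters here are arbitrary and this is not the flight controller itself.

```python
import math

def ltc_step(x, stimulus, dt=0.1, tau=1.0, A=1.0, w=2.0, b=0.0):
    """One fused-Euler step of a single liquid time-constant (LTC) neuron.

    The neuron's effective time constant depends on its input -- the
    "liquid" property. Dynamics: x' = -(1/tau + f) * x + f * A,
    where f is an input-dependent gate.
    """
    f = 1.0 / (1.0 + math.exp(-(w * stimulus + b)))  # input-dependent gate
    # Implicit (fused) Euler update keeps the step numerically stable
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

x = 0.0
for s in [1.0] * 50:   # sustained stimulus drives the state toward a fixed point
    x = ltc_step(x, s)
```

A network of such neurons, fed camera features, is the kind of compact model the CSAIL team used in place of much larger conventional networks.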
A simpler path to better computer vision
Before a machine-learning model can complete a task, such as identifying cancer in medical images, the model must be trained. Training image classification models typically involves showing the model millions of example images gathered into a massive dataset. But gathering and labeling millions of real images is costly, and such datasets can carry privacy and bias problems. To avoid these pitfalls, researchers can use image generation programs to create synthetic data for model training. These techniques are limited, however, because expert knowledge is often needed to hand-design an image generation program that can create effective training data. Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere took a different approach.
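To make the idea concrete, here is a toy "image generation program": a gradient-plus-noise recipe invented purely for illustration (not one of the programs the researchers studied) that can emit an unlimited stream of training images with no data collection at all.

```python
import random

random.seed(1)

def synth_image(width=8, height=8):
    """Generate one synthetic grayscale image (rows of values in [0, 1]).

    Each image is a random horizontal gradient plus Gaussian noise --
    a stand-in for the procedural programs used to pretrain models.
    """
    slope = random.uniform(-1.0, 1.0)
    return [
        [min(1.0, max(0.0, 0.5 + slope * (x / width - 0.5)
                               + random.gauss(0, 0.05)))
         for x in range(width)]
        for _ in range(height)
    ]

# An arbitrarily large synthetic dataset, generated on demand
dataset = [synth_image() for _ in range(100)]
```

Designing a generator whose outputs actually transfer to real tasks is the hard part, which is exactly the expert-knowledge bottleneck the article describes.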
Reprogrammable materials selectively self-assemble
While automated manufacturing is ubiquitous today, it was once a nascent field birthed by inventors such as Oliver Evans, who is credited with creating the first fully automated industrial process in a flour mill he built and gradually automated in the late 1700s. The processes for creating automated structures or machines are still very top-down, requiring humans, factories, or robots to do the assembling and making. By contrast, nature's assembly is ubiquitously bottom-up: animals and plants are self-assembled at the cellular level, relying on proteins that self-fold into target geometries encoding all the different functions that keep us ticking. For a more bio-inspired, bottom-up approach to assembly, then, human-architected materials need to do more of the work on their own. Making them scalable, selective, and reprogrammable in a way that mimics nature's versatility brings some teething problems, though.
Soft robots that grip with the right amount of force
Tool use has long been a hallmark of human intelligence, as well as a practical problem to solve for a vast array of robotic applications. But machines are still wonky at exerting just the right amount of force to control tools that aren't rigidly attached to their hands. To manipulate said tools more robustly, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with the Toyota Research Institute (TRI), have designed a system that can grasp tools and apply the appropriate amount of force for a given task, like squeegeeing up liquid or writing out a word with a pen. The system, dubbed Series Elastic End Effectors, or SEED, uses soft bubble grippers and embedded cameras to map how the grippers deform over a six-dimensional space (think of an airbag inflating and deflating) and apply force to a tool. With six degrees of freedom, an object can be moved left and right, up and down, and back and forth, and also rolled, pitched, and yawed.
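The series-elastic idea can be sketched in a few lines: because the soft bubbles deform like springs, measuring deformation amounts to measuring force. The linear spring model and stiffness values below are invented assumptions for illustration, not SEED's actual calibration.

```python
# Series-elastic idea in miniature: the soft bubble behaves like a spring,
# so observed deformation maps to applied force, and vice versa.
# The six components are (x, y, z, roll, pitch, yaw); stiffness values
# are illustrative only.
STIFFNESS = [300.0, 300.0, 500.0, 4.0, 4.0, 2.0]  # N/m, then N*m/rad

def wrench_from_deformation(deformation):
    """Estimate the 6-D wrench (force + torque) from bubble deformation."""
    return [k * d for k, d in zip(STIFFNESS, deformation)]

def deformation_for_target(target_wrench):
    """Invert the spring model: how far to deform for a desired wrench."""
    return [f / k for f, k in zip(target_wrench, STIFFNESS)]

# Squeezing the bubble 1 cm along z implies roughly 5 N of grip force
grip = wrench_from_deformation([0.0, 0.0, 0.01, 0.0, 0.0, 0.0])
```

Regulating force by commanding deformation, rather than commanding force directly, is the classic series-elastic design choice: the compliant element makes contact forces observable and forgiving.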
Global Big Data Conference
Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way human investigators do. But researchers at MIT, Cornell University, and McGill University have taken a step in this direction. They have demonstrated an artificial intelligence system that can learn the rules and patterns of human languages on its own. When given words and examples of how those words change to express different grammatical functions (like tense, case, or gender) in one language, this machine-learning model comes up with rules that explain why the forms of those words change. For instance, it might learn that the letter "a" must be added to the end of a word to make the masculine form feminine in Serbo-Croatian.
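A toy version of that rule search, assuming a single suffix-change rule and invented word pairs (not the MIT system's actual machinery, which searches over much richer grammars), might look like:

```python
def induce_suffix_rule(pairs):
    """Guess one suffix-change rule explaining (base, inflected) word pairs.

    For each pair, strip the longest common prefix; if every pair leaves
    the same (old_suffix, new_suffix) residue, that residue is the rule.
    """
    rules = set()
    for base, inflected in pairs:
        i = 0
        while i < min(len(base), len(inflected)) and base[i] == inflected[i]:
            i += 1
        rules.add((base[i:], inflected[i:]))
    # A single consistent residue means one rule explains all the data
    return rules.pop() if len(rules) == 1 else None

# Invented masculine/feminine pairs: the feminine form adds "-a"
rule = induce_suffix_rule([("nov", "nova"), ("mlad", "mlada")])
```

The interesting part of the real system is that it prefers the smallest rule set that explains the data, mirroring how a field linguist generalizes from a handful of examples.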
Breakthrough AI Technique Enables Real-Time Rendering of Scenes in 3D From 2D Images
To represent a 3D scene from a 2D image, a light field network encodes the 360-degree light field of the 3D scene into a neural network that directly maps each camera ray to the color observed by that ray. The new machine-learning system can generate a 3D scene from an image about 15,000 times faster than other methods. Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not. Yet a machine that needs to interact with objects in the world -- like a robot designed to harvest crops or assist with surgery -- must be able to infer properties about a 3D scene from observations of the 2D images it's trained on.
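The ray-to-color mapping can be sketched as a single forward pass. The untrained toy MLP below only illustrates the interface (a 6-D ray in, an RGB color out, with no per-ray sampling loop, which is where the speedup over volumetric methods comes from); the ray parameterization as origin-plus-direction and the layer sizes are simplifying assumptions, not the paper's network.

```python
import math, random

random.seed(0)

DIM_IN, DIM_HID = 6, 16  # ray as (origin, direction); sizes illustrative
W1 = [[random.gauss(0, 0.5) for _ in range(DIM_IN)] for _ in range(DIM_HID)]
W2 = [[random.gauss(0, 0.5) for _ in range(DIM_HID)] for _ in range(3)]

def color_of_ray(origin, direction):
    """One network evaluation per pixel: 6-D ray in, RGB out."""
    x = list(origin) + list(direction)
    h = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W1]
    rgb = [sum(w * v for w, v in zip(row, h)) for row in W2]
    return [1.0 / (1.0 + math.exp(-c)) for c in rgb]  # squash into [0, 1]

rgb = color_of_ray((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```

Rendering an image is then just one such evaluation per camera ray, in contrast to methods that sample hundreds of points along every ray.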
With e-commerce orders pouring in, a warehouse robot picks mugs off a shelf and places them into boxes for shipping. Everything is humming along, until the warehouse processes a change and the robot must now grasp taller, narrower mugs that are stored upside down. Reprogramming that robot involves hand-labeling thousands of images that show it how to grasp these new mugs, then training the system all over again. But a new technique developed by MIT researchers would require only a handful of human demonstrations to reprogram the robot. This machine-learning method enables a robot to pick up and place never-before-seen objects that are in random poses it has never encountered.
A flexible way to grab items with feeling
The GelSight Fin Ray gripper holds a glass Mason jar with its tactile sensing. The notion of a large metallic robot that speaks in monotone and moves in lumbering, deliberate steps is somewhat hard to shake. But practitioners in the field of soft robotics have an entirely different image in mind -- autonomous devices composed of compliant parts that are gentle to the touch, more closely resembling human fingers than R2-D2 or Robby the Robot. That model is now being pursued by Professor Edward Adelson and his Perceptual Science Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). In a recent project, Adelson and Sandra Liu -- a mechanical engineering PhD student at CSAIL -- have developed a robotic gripper using novel "GelSight Fin Ray" fingers that, like the human hand, are supple enough to manipulate objects.