
Collaborating Authors: Bongard


No-brainer: Morphological Computation driven Adaptive Behavior in Soft Robots

Mertan, Alican, Cheney, Nick

arXiv.org Artificial Intelligence

It is prevalent in contemporary AI and robotics to separately postulate a brain, modeled by neural networks, and employ it to learn intelligent and adaptive behavior. While this method has worked very well for many types of tasks, it is not the only type of intelligence found in nature. In this work, we study the ways in which intelligent behavior can be created without a separate and explicit brain for robot control, but rather solely as a result of the computation occurring within the physical body of a robot. Specifically, we show that adaptive and complex behavior can be created in voxel-based virtual soft robots by using simple reactive materials that actively change the shape of the robot, and thus its behavior, under different environmental cues. We demonstrate a proof of concept for the idea of closed-loop morphological computation, and show that in our implementation it enables behavior mimicking logic gates, allowing us to demonstrate how such behaviors may be combined to build up more complex collective behaviors.

Keywords: soft robotics, adaptive behavior

1 Introduction and Background

Recent advances in artificial intelligence and machine learning have benefited greatly from the rise of modern deep learning systems, ultimately aimed at artificial general intelligence [22]. The coming-of-age of these artificial neural network systems includes a long history of bio-inspiration, dating back to McCulloch and Pitts [26]. Yet the processes behind biological intelligence reach far beyond systems and processes confined to the brains of living organisms. Our bias toward attributing intelligent behavior to the mind is far from new.
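The composition idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical Boolean model, not the paper's voxel-robot implementation: each gate function stands in for a reactive-material behavior, and composing gates mirrors how logic-gate-like robot behaviors could be combined into a more complex collective behavior such as a half adder.

```python
# Hypothetical sketch: model each morphological "behavior" as a Boolean
# gate, then compose gates into a more complex collective behavior.

def gate_and(a: bool, b: bool) -> bool:
    # Behavior active only when both environmental cues are present.
    return a and b

def gate_or(a: bool, b: bool) -> bool:
    # Behavior active when either cue is present.
    return a or b

def gate_not(a: bool) -> bool:
    # Behavior that inverts a cue.
    return not a

def gate_xor(a: bool, b: bool) -> bool:
    # XOR built purely by composing simpler gates:
    # (a OR b) AND NOT (a AND b)
    return gate_and(gate_or(a, b), gate_not(gate_and(a, b)))

def half_adder(a: bool, b: bool) -> tuple:
    # A compound behavior: sum bit (XOR) and carry bit (AND).
    return gate_xor(a, b), gate_and(a, b)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", half_adder(a, b))
```

The point of the sketch is only structural: once individual behaviors act like gates, standard digital-logic composition gives a recipe for building arbitrarily complex behaviors from them.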


Investigating Premature Convergence in Co-optimization of Morphology and Control in Evolved Virtual Soft Robots

Mertan, Alican, Cheney, Nick

arXiv.org Artificial Intelligence

Evolving virtual creatures is a field with a rich history, and recently it has been receiving more attention, especially in the soft robotics domain. The compliance of soft materials endows soft robots with complex behavior, but it also makes their design process unintuitive and in need of automation. Despite the great interest, evolved virtual soft robots still lack complexity, and co-optimization of morphology and control remains a challenging problem. Prior work identifies and investigates a major issue with the co-optimization process -- fragile co-adaptation of brain and body, resulting in premature convergence of morphology. In this work, we expand the investigation of this phenomenon by comparing learnable controllers with proprioceptive observations against fixed controllers without any observations; in the latter case, only the morphology is optimized. Our experiments in two morphology spaces and two environments that vary in complexity show concrete examples of high-performing regions in the morphology space that cannot be discovered during co-optimization of morphology and control, yet are easily found when optimizing morphologies alone. This work thus clearly demonstrates and characterizes the challenges of optimizing morphology during co-optimization. Based on these results, we propose a new body-centric framework for thinking about the co-optimization problem, which helps us understand the issue from a search perspective. We hope the insights we share with this work attract more attention to the problem and help enable efficient brain-body co-optimization.


Glamour muscles: why having a body is not what it means to be embodied

Beaulieu, Shawn L., Kriegman, Sam

arXiv.org Artificial Intelligence

Embodiment has recently enjoyed renewed consideration as a means to amplify the faculties of smart machines. Proponents of embodiment seem to imply that optimizing for movement in physical space promotes something more than the acquisition of niche capabilities for solving problems in physical space. However, there is nothing in principle which should so distinguish the problem of action selection in physical space from the problem of action selection in more abstract spaces, like that of language. Rather, what makes embodiment persuasive as a means toward higher intelligence is that it promises to capture, but does not actually realize, contingent facts about certain bodies (living intelligence) and the patterns of activity associated with them. These include an active resistance to annihilation and revisable constraints on the processes that make the world intelligible. To be theoretically or practically useful beyond the creation of niche tools, we argue that "embodiment" cannot be the trivial fact of a body, nor its movement through space, but the perpetual negotiation of the function, design, and integrity of that body--that is, to participate in what it means to constitute a given body. It follows that computer programs which are strictly incapable of traversing physical space might, under the right conditions, be more embodied than a walking, talking robot. The accomplishments of artificial intelligence are legion.


There's Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-scale Machines

Bongard, Joshua, Levin, Michael

arXiv.org Artificial Intelligence

The applicability of computational models to the biological world is an active topic of debate. We argue that a useful path forward results from abandoning hard boundaries between categories and adopting an observer-dependent, pragmatic view. Such a view dissolves the contingent dichotomies driven by human cognitive biases (e.g., tendency to oversimplify) and prior technological limitations in favor of a more continuous, gradualist view necessitated by the study of evolution, developmental biology, and intelligent machines. Efforts to re-shape living systems for biomedical or bioengineering purposes require prediction and control of their function at multiple scales. This is challenging for many reasons, one of which is that living systems perform multiple functions in the same place at the same time. We refer to this as "polycomputing" - the ability of the same substrate to simultaneously compute different things. This ability is an important way in which living things are a kind of computer, but not the familiar, linear, deterministic kind; rather, living things are computers in the broad sense of computational materials as reported in the rapidly-growing physical computing literature. We argue that an observer-centered framework for the computations performed by evolved and designed systems will improve the understanding of meso-scale events, as it has already done at quantum and relativistic scales. Here, we review examples of biological and technological polycomputing, and develop the idea that overloading of different functions on the same hardware is an important design principle that helps understand and build both evolved and designed systems. Learning to hack existing polycomputing substrates, as well as evolve and design new ones, will have massive impacts on regenerative medicine, robotics, and computer engineering.


How xenobots reshape our understanding of genetics

#artificialintelligence

Where in the embryo does the person reside? Morphogenesis – the formation of the body from an embryo – once seemed so mystifying that scholars presumed the body must somehow already exist in tiny form at conception. In the 17th century, the Dutch microscopist Nicolaas Hartsoeker illustrated this 'preformationist' theory by drawing a foetal homunculus tucked into the head of a sperm. This idea finds modern expression in the notion that the body plan is encoded in our DNA. But the more we come to understand how cells produce shape and form, the more inadequate the idea of a genomic blueprint looks, too. What cells follow is not a blueprint; if they can be considered programmed at all, it's not with a plan of what to make, but with a set of rules to guide construction. One implication is that humans and other complex organisms are not the unique result of cells' behaviour, but only one of many possible outcomes.


Living machines: the first bio robots with Artificial Intelligence were born - OI Canadian

#artificialintelligence

As if lifted from a science-fiction graphic novel, the first robots called xenobots have been created in the United States, made from frog cells. Xenobots are millimeter-scale bio-robots that can replicate themselves. Researchers from the universities of Vermont, Tufts and Harvard noted that the first of their kind were assembled from frog cells in 2020. These organisms were designed on a computer and assembled by hand; they can swim in a petri dish, find individual cells, and collect hundreds of them, the University of Vermont reported late last November. These robots that can have "children" are shaped like Pac-Man; they keep the collected cells inside their "mouths" and are able to assemble "babies" that look and move in the same way as they do.


Team builds first living robots that can reproduce

Robohub

AI-designed (C-shaped) organisms push loose stem cells (white) into piles as they move through their environment. To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses. Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction--and applied their discovery to create the first-ever, self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells--reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth"--that, a few days later, become new Xenobots that look and move just like themselves.


Scientists made tiny xenobots out of frog cells. Now they say those robots can reproduce.

#artificialintelligence

Life finds a way, and the same goes even for robots, according to a group of scientists who say the first living robotic life forms can reproduce. In January 2020, a team of scientists from the University of Vermont, Tufts University and Harvard University took stem cells from African clawed frog embryos and formed them into tiny living creatures called xenobots. The xenobots, which are less than 0.04 inches wide, were able to move on their own, communicate with one another and heal themselves after injury, making them the first-ever living robots. But over a year later, the computer-designed creatures have begun to do "something that's never been observed before." What the team of scientists discovered was that the xenobots would move around their environment and find single cells.


Team builds first living robots that can reproduce: AI-designed Xenobots reveal entirely new form of biological self-replication--promising for regenerative medicine

#artificialintelligence

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction -- and applied their discovery to create the first-ever, self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells -- reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth" -- that, a few days later, become new Xenobots that look and move just like themselves. And then these new Xenobots can go out, find cells, and build copies of themselves. "With the right design -- they will spontaneously self-replicate," says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research. The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.


Team builds first living robots--that can reproduce

#artificialintelligence

Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses. Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction--and applied their discovery to create the first-ever, self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells--reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth"--that, a few days later, become new Xenobots that look and move just like themselves. And then these new Xenobots can go out, find cells, and build copies of themselves. "With the right design--they will spontaneously self-replicate," says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.