Team builds first living robots that can reproduce

Robohub

[Image: AI-designed (C-shaped) organisms push loose stem cells (white) into piles as they move through their environment.]

To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses. Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction and applied their discovery to create the first-ever self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells and reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth"; a few days later, these become new Xenobots that look and move just like their parents.


Team builds first living robots that can reproduce: AI-designed Xenobots reveal entirely new form of biological self-replication--promising for regenerative medicine

#artificialintelligence

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction and applied their discovery to create the first-ever self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells and reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth"; a few days later, these become new Xenobots that look and move just like their parents. And these new Xenobots can then go out, find cells, and build copies of themselves. "With the right design, they will spontaneously self-replicate," says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research. The results of the new research were published November 29, 2021, in the Proceedings of the National Academy of Sciences.


Team builds first living robots that can reproduce

#artificialintelligence

Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses. Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction and applied their discovery to create the first-ever self-replicating living robots. The same team that built the first living robots ("Xenobots," assembled from frog cells and reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble "baby" Xenobots inside their Pac-Man-shaped "mouth"; a few days later, these become new Xenobots that look and move just like their parents. And these new Xenobots can then go out, find cells, and build copies of themselves. "With the right design, they will spontaneously self-replicate," says Joshua Bongard, Ph.D., a computer scientist and robotics expert at the University of Vermont who co-led the new research.
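
The "computer-designed" part of this work refers to an evolutionary algorithm that searched over candidate body shapes in simulation before any cells were assembled. The sketch below is a minimal, hypothetical illustration of that kind of design loop, not the team's actual pipeline: it evolves a coarse 2D body plan under a stand-in fitness function, whereas the published work scored designs in a physics simulator by how well they swept loose cells into viable offspring piles.

import random

GRID = 8  # coarse 8x8 body plan: 0 = no cell, 1 = cell

def random_body():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def mutate(body, rate=0.05):
    # Flip a few cells at random to produce a variant design.
    return [[1 - c if random.random() < rate else c for c in row] for row in body]

def fitness(body):
    # Placeholder for the real physics simulation: here we merely reward
    # C-like concavity (mass on the rim, hollow interior) so the loop runs
    # end to end. The actual objective was the number of viable offspring.
    rim = sum(body[i][j] for i in range(GRID) for j in range(GRID)
              if i in (0, GRID - 1) or j == 0)
    hole = sum(body[i][j] for i in range(2, GRID - 2) for j in range(2, GRID - 2))
    return rim - hole

def evolve(generations=200, pop_size=30):
    pop = [random_body() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

print("\n".join("".join("#" if c else "." for c in row) for row in evolve()))

In the published work this outer loop ran over far richer simulations on a supercomputer; the point here is only the shape of the search.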


Classifying Breast Histopathology Images with a Ductal Instance-Oriented Pipeline

arXiv.org Artificial Intelligence

In this study, we propose the Ductal Instance-Oriented Pipeline (DIOP), which contains a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features for diagnostic classification. Building on recent advancements in instance segmentation and the Mask R-CNN model, our duct-level segmenter identifies each individual duct inside a microscopic image and then extracts tissue-level information from the identified ductal instances. Leveraging three levels of information obtained from these ductal instances as well as the whole histopathology image, the proposed DIOP outperforms previous approaches (both feature-based and CNN-based) on all diagnostic tasks; on the four-way classification task, the DIOP achieves performance comparable to general pathologists on this unique dataset. The proposed DIOP takes only a few seconds to run at inference time, so it could be used interactively on most modern computers. More clinical explorations are needed to study the robustness and generalizability of this system in the future.
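
To make the pipeline's three-stage shape concrete, here is a minimal sketch of a DIOP-like flow, not the authors' code. The assumptions: torchvision's off-the-shelf Mask R-CNN stands in for the duct-level segmenter (in practice it would be fine-tuned on duct annotations), simple mask statistics stand in for the duct-level and tissue-level features, and a small linear head performs the four-way diagnosis.

import torch
import torchvision

duct_segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
duct_segmenter.eval()

def diop_features(image):  # image: float tensor [3, H, W] scaled to [0, 1]
    with torch.no_grad():
        out = duct_segmenter([image])[0]
    keep = out["scores"] > 0.5                    # confident duct instances
    masks = out["masks"][keep, 0]                 # [N, H, W] soft masks
    # Duct-level features: per-instance area and mean intensity under the mask.
    areas = masks.sum(dim=(1, 2))
    means = (torch.stack([(image.mean(0) * m).sum() / m.sum().clamp(min=1)
                          for m in masks])
             if len(masks) else torch.zeros(0))
    # Image-level features: global intensity statistics.
    img_feats = torch.stack([image.mean(), image.std()])
    # Pool the instance features so every image yields a fixed-length vector.
    inst_feats = torch.tensor([
        float(len(masks)),
        areas.mean().item() if len(masks) else 0.0,
        means.mean().item() if len(masks) else 0.0,
    ])
    return torch.cat([img_feats, inst_feats])     # 5-dim feature vector

classifier = torch.nn.Linear(5, 4)                # four-way diagnostic head
logits = classifier(diop_features(torch.rand(3, 512, 512)))

The real DIOP fuses far richer features from the duct instances, the segmented tissues, and the whole image; the point here is only how instance-segmentation output feeds a downstream classifier.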


Machine learning can help us understand conversations about death

#artificialintelligence

[Image: Robert Gramling, the Holly and Bob Miller Chair in Palliative Medicine at the University of Vermont Larner College of Medicine.]

Some of the most important, and most difficult, conversations in healthcare are the ones that happen amid serious and life-threatening illnesses. Discussions of treatment options and prognoses in these settings are a delicate balance for doctors and nurses, who are dealing with people who are at their most vulnerable and may not fully understand what the future holds. Now researchers at the University of Vermont's Vermont Conversation Lab have used machine learning and natural language processing to better understand what those conversations look like, which could eventually help healthcare providers improve their end-of-life communication. "We want to understand this complex thing called a conversation," says Robert Gramling, director of the lab at UVM's Larner College of Medicine, who led the study, published December 9 in the journal Patient Education and Counseling.


Machine learning can create meaningful conversations on death - ET CIO

#artificialintelligence

Researchers at the University of Vermont have used machine learning and natural language processing (NLP) to better understand conversations about death, which could eventually help doctors improve their end-of-life communication. Some of the most important, and most difficult, conversations in healthcare are the ones that happen amid serious and life-threatening illnesses. Discussions of treatment options and prognoses in these settings are a delicate balance for doctors and nurses, who are dealing with people who are at their most vulnerable and may not fully understand what the future holds. "We want to understand this complex thing called a conversation. Our major goal is to scale up the measurement of conversations so we can re-engineer the healthcare system to communicate better," said Robert Gramling, director of the Vermont Conversation Lab, whose study was published in the journal Patient Education and Counseling.
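
Neither article details the features the lab measured, but one concrete example of "scaling up the measurement of conversations" is tracking how emotional language flows across a transcript. The sketch below is a hypothetical illustration of that idea only: the tiny word lists stand in for a validated sentiment lexicon, and the (speaker, utterance) transcript format is an assumption.

# Stand-in word lists; a real system would use a validated lexicon.
POSITIVE = {"hope", "comfort", "better", "glad", "thank"}
NEGATIVE = {"pain", "worse", "afraid", "scared", "dying"}

def sentiment_trajectory(turns, window=3):
    """Rolling net-sentiment score across consecutive utterances."""
    scores = []
    for _speaker, utterance in turns:
        words = [w.strip(".,?!") for w in utterance.lower().split()]
        scores.append(sum(w in POSITIVE for w in words)
                      - sum(w in NEGATIVE for w in words))
    # Smooth with a short moving average so the arc of the conversation,
    # rather than any single word, is what stands out.
    return [sum(scores[max(0, i - window + 1): i + 1]) / window
            for i in range(len(scores))]

turns = [("patient", "I am scared of the pain getting worse"),
         ("clinician", "We will keep you comfortable, and there is reason for hope"),
         ("patient", "Thank you, that makes me feel better")]
print(sentiment_trajectory(turns))   # a rising arc: negative toward positive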


UVM Study: AI Can Detect Depression in a Child's Speech

#artificialintelligence

A machine learning algorithm can detect signs of anxiety and depression in the speech patterns of young children, potentially providing a fast and easy way of diagnosing conditions that are difficult to spot and often overlooked in young people, according to new research published in the Journal of Biomedical and Health Informatics. Around one in five children suffer from anxiety and depression, collectively known as "internalizing disorders." But because children under the age of eight can't reliably articulate their emotional suffering, adults need to be able to infer their mental state and recognise potential mental health problems. Waiting lists for appointments with psychologists, insurance issues, and failure by parents to recognise the symptoms all contribute to children missing out on vital treatment. "We need quick, objective tests to catch kids when they are suffering," says Ellen McGinnis, a clinical psychologist at the University of Vermont Medical Center's Vermont Center for Children, Youth and Families and lead author of the study.


AI can detect anxiety and depression in a child's speech

#artificialintelligence

The study, conducted by researchers at the University of Vermont in the USA, suggests a machine learning algorithm might provide a fast and easy way of diagnosing anxiety and depression, conditions that are difficult to spot and often overlooked in young people. "We need quick, objective tests to catch kids when they are suffering," said study lead author Ellen McGinnis, a clinical psychologist at the university Medical Center's Vermont Center for Children, Youth and Families. "The majority of kids under eight are undiagnosed," she added. Early diagnosis of these conditions is critical because children respond well to treatment while their brains are still developing, according to the researchers, but left untreated they are at greater risk of substance abuse and suicide later in life. Standard diagnosis involves a 60- to 90-minute semi-structured interview with a trained clinician and the child's primary caregiver.
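
Neither write-up specifies the algorithm's internals, but the general recipe for this kind of screening is to turn short recordings into acoustic features and train a classifier on labeled examples. The sketch below illustrates only that recipe: the synthetic sine-wave "recordings", the MFCC summary features, and the logistic-regression model are all assumptions made for illustration, not the study's actual method.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000  # sample rate of the (here synthetic) recordings

def clip_features(y):
    # Summarize a clip as the mean of its MFCCs, a common compact
    # representation of vocal timbre and prosody.
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(0)

def fake_clip(pitch):
    # Stand-in for a real child-speech recording.
    t = np.linspace(0, 1, SR)
    return np.sin(2 * np.pi * pitch * t) + 0.1 * rng.standard_normal(SR)

X = np.stack([clip_features(fake_clip(p)) for p in [220] * 10 + [330] * 10])
labels = np.array([0] * 10 + [1] * 10)   # 0 = control, 1 = internalizing
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.score(X, labels))            # training accuracy on the toy data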


UAFS: Uncertainty-Aware Feature Selection for Problems with Missing Data

arXiv.org Machine Learning

Missing data are a concern in many real-world data sets, and imputation methods are often needed to estimate the values of missing data, but data sets with excessive missingness and high dimensionality challenge most approaches to imputation. Here we show that appropriate feature selection can be an effective preprocessing step for imputation, allowing for more accurate imputation and subsequent model predictions. The key feature of this preprocessing is that it incorporates uncertainty: by accounting for uncertainty due to missingness when selecting features, we can reduce the degree of missingness while also limiting the number of uninformative features used to build predictive models. We introduce a method to perform uncertainty-aware feature selection (UAFS), provide a theoretical motivation, and test UAFS on both real and synthetic problems, demonstrating that across a variety of data sets and levels of missingness we can improve the accuracy of imputations. Improved imputation due to UAFS also results in improved prediction accuracy when performing supervised learning on these imputed data sets. Our UAFS method is general and can be fruitfully coupled with a variety of imputation methods.
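
The abstract states the idea at a high level; the sketch below is one plausible instantiation, not the authors' UAFS. Each feature is scored by its complete-case association with the target, discounted by an uncertainty term that grows as fewer of its values are observed, and only the selected features are then imputed. The 1/sqrt(n) shrinkage rule is an assumption made for illustration.

import numpy as np

def uncertainty_aware_select(X, y, k):
    scores = []
    for j in range(X.shape[1]):
        obs = ~np.isnan(X[:, j])
        n = obs.sum()
        if n < 3:
            scores.append(0.0)
            continue
        r = abs(np.corrcoef(X[obs, j], y[obs])[0, 1])
        r = 0.0 if np.isnan(r) else r
        # Discount by a 1/sqrt(n) uncertainty term: a strong correlation
        # estimated from very few observed values should not win.
        scores.append(max(0.0, r - 1.0 / np.sqrt(n)))
    return np.sort(np.argsort(scores)[::-1][:k])

def mean_impute(X):
    col_means = np.nanmean(X, axis=0)
    return np.where(np.isnan(X), col_means, X)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=200)
X[rng.random(X.shape) < 0.4] = np.nan    # heavy missingness
keep = uncertainty_aware_select(X, y, k=5)
X_imputed = mean_impute(X[:, keep])      # impute only the selected features
print(keep)                              # the informative features 3 and 7 should appear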


Interoceptive robustness through environment-mediated morphological development

arXiv.org Artificial Intelligence

Typically, AI researchers and roboticists try to realize intelligent behavior in machines by tuning parameters of a predefined structure (body plan and/or neural network architecture) using evolutionary or learning algorithms. A related, longstanding weakness of these systems is their brittleness to slight aberrations, as highlighted by the growing deep learning literature on adversarial examples. Here we show that robustness can be achieved by evolving the geometry of soft robots, their control systems, and how their material properties develop in response to one particular interoceptive stimulus (engineering stress) during their lifetimes. By doing so we realized robots that were equally fit but more robust to extreme material defects (such as might occur during fabrication or by damage thereafter) than robots that did not develop during their lifetimes, or that developed in response to a different interoceptive stimulus (pressure). This suggests that the interplay between changes in the containing systems of agents (body plan and/or neural architecture) at different temporal scales (evolutionary and developmental), along different modalities (geometry, material properties, synaptic weights), and in response to different signals (interoceptive and external perception) dictates those agents' abilities to evolve or learn capable and robust strategies.
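
The following toy model illustrates the abstract's core contrast under heavy assumptions: a one-dimensional "robot" whose voxels can stiffen in response to the stress each of them experiences, evaluated after random material defects are injected. The fatigue-style physics is a stand-in for the paper's soft-robot simulator, chosen only to show why development during a robot's lifetime can buy robustness to defects.

import random

N = 20  # voxels in the 1-D body

def survives(stiffness, develop, defects=2, steps=50):
    s = list(stiffness)
    for i in random.sample(range(N), defects):
        s[i] *= 0.1                        # extreme material defect
    damage = [0.0] * N
    for _ in range(steps):
        stress = [1.0 / x for x in s]      # softer voxels feel more stress
        damage = [d + st for d, st in zip(damage, stress)]
        if develop:
            # Interoceptive development: stiffen wherever stress is felt.
            s = [x + 0.05 * st for x, st in zip(s, stress)]
        if max(damage) > 100.0:            # cumulative fatigue failure
            return 0.0
    return 1.0

def robustness(stiffness, develop, trials=200):
    return sum(survives(stiffness, develop) for _ in range(trials)) / trials

body = [1.0] * N
print("no development:  ", robustness(body, develop=False))
print("with development:", robustness(body, develop=True))

In this toy, the developing body reinforces its defective voxels before fatigue accumulates, while the fixed body fails every trial; the paper demonstrates an analogous effect in evolved 3D soft robots that develop in response to engineering stress.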