air bubble


Bubble wrap-like material could help insulate glass windows

Popular Science

Only five millimeters of this experimental material, called MOCHI, can shield your hand from a flame. A well-placed window can brighten a room with natural light and offer scenic views of the outside world. Buildings consume around 40 percent of society's energy production, and much of that energy is wasted through poor insulation in the winter and excess heat retention during the summer. Even the most eco-friendly windows inevitably add to this energy drain.


They're sweets, but not as you know them - why freeze-dried candy is trending

BBC News

What are freeze-dried sweets and why are they popular? When Savannah Louise West first tasted freeze-dried gummies, she was intrigued. "I think the crunch is so satisfying, and I find it interesting to experience a candy I'm familiar with that has an entirely new texture," says the Toronto resident. Ms West is describing one of the main features of this spin-off candy that independent and major confectionery manufacturers have been releasing onto shelves, both online and offline, for the past three years. It has been largely a US phenomenon, hence we'll use the US term candy, but for our UK readers, we're talking about sweets here.


Improved Sub-Visible Particle Classification in Flow Imaging Microscopy via Generative AI-Based Image Synthesis

Ozbulak, Utku, Cohrs, Michaela, Svilenov, Hristo L., Vankerschaver, Joris, De Neve, Wesley

arXiv.org Artificial Intelligence

Sub-visible particle analysis using flow imaging microscopy combined with deep learning has proven effective in identifying particle types, enabling the distinction of harmless components such as silicone oil from protein particles. However, the scarcity of available data and severe imbalance between particle types within datasets remain substantial hurdles when applying multi-class classifiers to such problems, often forcing researchers to rely on less effective methods. The aforementioned issue is particularly challenging for particle types that appear unintentionally and in lower numbers, such as silicone oil and air bubbles, as opposed to protein particles, where obtaining large numbers of images through controlled settings is comparatively straightforward. In this work, we develop a state-of-the-art diffusion model to address data imbalance by generating high-fidelity images that can augment training datasets, enabling the effective training of multi-class deep neural networks. We validate this approach by demonstrating that the generated samples closely resemble real particle images in terms of visual quality and structure. To assess the effectiveness of using diffusion-generated images in training datasets, we conduct large-scale experiments on a validation dataset comprising 500,000 protein particle images and demonstrate that this approach improves classification performance with negligible downside. Finally, to promote open research and reproducibility, we publicly release both our diffusion models and the trained multi-class deep neural network classifiers, along with a straightforward interface for easy integration into future studies, at https://github.com/utkuozbulak/svp-generative-ai.
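The core idea of the augmentation step — generating synthetic minority-class images until every class is adequately represented — can be sketched as follows. The class names and counts below are illustrative, not taken from the paper's dataset.

```python
from collections import Counter

def synthetic_budget(class_counts):
    """How many generated images each class needs so that every class
    matches the size of the largest (majority) class."""
    target = max(class_counts.values())
    return {cls: target - n for cls, n in class_counts.items()}

# Illustrative counts: protein particles are abundant, while silicone
# oil and air bubbles appear unintentionally and in far lower numbers.
counts = Counter(protein=500_000, silicone_oil=20_000, air_bubble=5_000)
budget = synthetic_budget(counts)
```

In practice the diffusion model would be sampled `budget[cls]` times per minority class, and the synthetic images mixed into the training split only (never the validation set), so the reported gains reflect real data.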


HistoART: Histopathology Artifact Detection and Reporting Tool

Kahaki, Seyed, Webber, Alexander R., Zamzmi, Ghada, Subbaswamy, Adarsh, Deshpande, Rucha, Badano, Aldo

arXiv.org Artificial Intelligence

In modern cancer diagnostics, Whole Slide Imaging (WSI) is widely used to digitize tissue specimens for detailed, high-resolution examination; however, other diagnostic approaches, such as liquid biopsy and molecular testing, are also utilized based on the cancer type and clinical context. While WSI has revolutionized digital histopathology by enabling automated, precise analysis, it remains vulnerable to artifacts introduced during slide preparation and scanning. These artifacts can compromise downstream image analysis. To address this challenge, we propose and compare three robust artifact detection approaches for WSIs: (1) a foundation model-based approach (FMA) using a fine-tuned Unified Neural Image (UNI) architecture, (2) a deep learning approach (DLA) built on a ResNet50 backbone, and (3) a knowledge-based approach (KBA) leveraging handcrafted features from texture, color, and frequency-based metrics. The methods target six common artifact types: tissue folds, out-of-focus regions, air bubbles, tissue damage, marker traces, and blood contamination. Evaluations were conducted on 50,000+ image patches from diverse scanners (Hamamatsu, Philips, Leica Aperio AT2) across multiple sites. The FMA achieved the highest patch-wise AUROC of 0.995 (95% CI [0.994, 0.995]), outperforming the ResNet50-based method (AUROC: 0.977, 95% CI [0.977, 0.978]) and the KBA (AUROC: 0.940, 95% CI [0.933, 0.946]). To translate detection into actionable insights, we developed a quality report scorecard that quantifies high-quality patches and visualizes artifact distributions.
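Patch-wise AUROC figures with 95% confidence intervals, like those reported above, are typically obtained by bootstrap resampling over the evaluation patches. A minimal, library-free sketch — the resampling scheme and toy scores here are illustrative, not the paper's exact evaluation protocol:

```python
import random

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    patch receives a higher score than a randomly chosen negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUROC over resampled patches."""
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        s = [scores[i] for i in idx]
        y = [labels[i] for i in idx]
        if 0 < sum(y) < n:  # resample must contain both classes
            stats.append(auroc(s, y))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

With 50,000+ patches, the interval narrows sharply, which is why the reported CIs (e.g. [0.994, 0.995]) are so tight.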


Vision Transformers for Small Histological Datasets Learned through Knowledge Distillation

Kanwal, Neel, Eftestol, Trygve, Khoraminia, Farbod, Zuiverloon, Tahlita CM, Engan, Kjersti

arXiv.org Artificial Intelligence

Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, the artifacts on the digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) may detect and exclude artifacts before running the diagnostic algorithm. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of ViTs for the air bubble detection task. A ViT trained under the student-teacher framework boosts its performance by distilling existing knowledge from a high-capacity teacher model. Our best-performing ViT yields an F1-score of 0.961 and an MCC of 0.911, a 7% gain in MCC over stand-alone training. The proposed method presents a new perspective of leveraging knowledge distillation over transfer learning to encourage the use of customized transformers for efficient preprocessing pipelines in CPATH systems.
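The distillation objective behind such a student-teacher recipe is, in its classic form, a weighted sum of a soft-target term against the teacher's temperature-scaled predictions and a hard-label cross-entropy. A minimal sketch — the temperature, weighting, and logits below are illustrative defaults, not the paper's exact configuration:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer targets."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=3.0, alpha=0.5):
    """alpha-weighted KL(teacher || student) on softened distributions,
    plus (1 - alpha)-weighted cross-entropy on the hard label.
    The T*T factor keeps soft-target gradients on a comparable scale."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = sum(t * math.log(t / s) for t, s in zip(p_teacher, p_student))
    ce = -math.log(softmax(student_logits)[true_label])
    return alpha * (T * T) * kl + (1 - alpha) * ce
```

When the student matches the teacher exactly, the KL term vanishes and only the hard-label term remains, which is the behavior the sketch below relies on.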


Look away now, vegans! Scientists find plants produce ALARM SOUNDS after being cut

Daily Mail - Science & tech

The idea of a plant making noises may evoke a vision of the mandrakes from Harry Potter. But a new study suggests that plants really do produce distress calls when they do not get enough water. They also appear to produce alarm sounds after being cut, with these noises found to come from tomato and tobacco plants, as well as corn and the grapevines used to make Cabernet Sauvignon. Ultrasonic vibrations have been recorded from plants previously, using sensors directly touching them. Now the new study provides the first evidence that plants emit airborne sounds, which researchers estimate could be heard by animals with sharp hearing like mice and moths from up to 16 feet (five metres) away.


MRI-powered Magnetic Miniature Capsule Robot with HIFU-controlled On-demand Drug Delivery

Tiryaki, Mehmet Efe, Dogangun, Fatih, Dayan, Cem Balda, Wrede, Paul, Sitti, Metin

arXiv.org Artificial Intelligence

Magnetic resonance imaging (MRI)-guided robotic systems offer great potential for new minimally invasive medical tools, including MRI-powered miniature robots. By re-purposing the imaging hardware of an MRI scanner, the magnetic miniature robot could be navigated into remote parts of the patient's body without needing tethered endoscopic tools. However, state-of-the-art MRI-powered magnetic miniature robots have limited functionality besides navigation. Here, we propose an MRI-powered magnetic miniature capsule robot that benefits from acoustic streaming forces generated by MRI-guided high-intensity focused ultrasound (HIFU) for controlled drug release. Our design comprises a polymer capsule shell with a submillimeter-diameter drug-release hole that captures an air bubble functioning as a stopper. We use the HIFU pulse to initiate drug release by removing the air bubble once the capsule robot reaches the target location. By controlling acoustic pressure, we also regulate the drug release rate for multiple location targeting during navigation. We demonstrated that the proposed magnetic capsule robot could travel at speeds of up to 1.13 cm/s in ex vivo porcine small intestine and release its drug to multiple target sites in a single operation, using a combination of MRI-powered actuation and HIFU-controlled release. The proposed MRI-guided microrobotic drug release system will greatly impact minimally invasive medical procedures by allowing on-demand targeted drug delivery.


Giving bug-like bots a boost

Robohub

MIT researchers have pioneered a new fabrication technique that enables them to produce low-voltage, power-dense, high endurance soft actuators for an aerial microrobot. When it comes to robots, bigger isn't always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building. MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, which could eventually perform these tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly-sized rigid actuators.


Microscopic 'swimming robots' inspired by sperm cells developed to bring drugs to parts of the body

Daily Mail - Science & tech

Researchers have designed miniature robots that are inspired by cells and steered by ultrasound that could one day navigate the human body and help deliver drugs to certain parts of it. These 'rocket ships,' as described by scientists at Cornell University, have a design that is inspired by both bacteria and sperm cells. The robots, which could navigate through the human body, are controlled remotely and could take advantage of some features of sperm and bacteria cells, including the fact that bacteria can swim 10 times their body length and sperm can go against the flow. 'We can make airplanes that are better than birds nowadays,' said study co-author Mingming Wu, professor of biological and environmental engineering at Cornell, in a statement. 'But at the smallest scale, there are many situations where nature is doing much better than us.'


A multi-task U-net for segmentation with lazy labels

Ke, Rihuan, Bugeau, Aurélie, Papadakis, Nicolas, Schuetz, Peter, Schönlieb, Carola-Bibiane

arXiv.org Machine Learning

The need for labour-intensive pixel-wise annotation is a major limitation of many fully supervised learning methods for image segmentation. In this paper, we propose a deep convolutional neural network for multi-class segmentation that circumvents this problem by being trainable on coarse data labels combined with only a very small number of images with pixel-wise annotations. We call this new labelling strategy 'lazy' labels. Image segmentation is then stratified into three connected tasks: rough detection of class instances, separation of wrongly connected objects without a clear boundary, and pixel-wise segmentation to find the accurate boundaries of each object. These problems are integrated into a multi-task learning framework and the model is trained end-to-end in a semi-supervised fashion. The method is applied on a dataset of food microscopy images. We show that the model gives accurate segmentation results even if exact boundary labels are missing for a majority of the annotated data. This allows more flexibility and efficiency for training deep neural networks, which are data-hungry, in a practical setting where manual annotation is expensive, by collecting more lazy (rough) annotations than precisely segmented images.
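The key trick that makes lazy labels workable is masking: pixels that carry no annotation simply contribute nothing to the loss. A minimal sketch of such a masked per-pixel cross-entropy — the ignore-index convention here is a common assumption, not the paper's exact implementation:

```python
import math

IGNORE = -1  # marker for pixels without a precise annotation

def masked_pixel_ce(probs, labels):
    """Cross-entropy averaged only over labeled pixels, so coarse
    ('lazy') annotations leave unlabeled pixels out of the gradient.
    probs: per-pixel class probability lists; labels: class index or IGNORE."""
    terms = [-math.log(p[y]) for p, y in zip(probs, labels) if y != IGNORE]
    return sum(terms) / len(terms) if terms else 0.0
```

In a multi-task setup like the one described, each of the three tasks (detection, separation, boundary segmentation) would apply its own masked loss over whichever pixels its annotation level covers, and the totals are summed for end-to-end training.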