
Ink


World's smallest 'bioprinter' is the size of a pill

Popular Science

The ingestible device could help patients heal from the inside. This magnet-guided 'ingestible bioprinter' is the size of a large pill. When someone hears the word "bioprinter," it likely conjures up images of bulky hardware buzzing loudly on a desk in a brightly lit laboratory. But researchers from the École polytechnique fédérale de Lausanne (EPFL) School of Engineering are now turning that image on its head with the creation of what they are calling the world's first pill-sized "ingestible bioprinter."


We're finally reading the secrets of Herculaneum's lost library

New Scientist

A whole library's worth of papyri owned by Julius Caesar's father-in-law were turned to charcoal by the eruption of Vesuvius. Deep within a particle accelerator, theoretical physicist Giorgio Angelotti is hard at work. He sets a black cylinder on a mount, bolts it down, then runs through some safety checks before retreating from the chamber, known as "the hatch". "You have to be sure there's no one in the hatch before you close the door," he says. That's because he is about to blast the sample with a super-powerful beam of X-rays.


Will an AI machine change tattoo art forever?

FOX News

Fox News chief political anchor Bret Baier investigates concerns that artificial intelligence is becoming too advanced on 'Special Report.' Every tattoo starts with a single black dot. That tiny mark is the base for every design, no matter how complex. And now, thanks to a new AI tattoo machine, that dot is more perfect than ever. Welcome to the future of tattooing.


Ink over email: Why handwritten notes still win in business

FOX News

Why is it that we still get a tiny thrill from checking the mailbox each day? Rationally, we know what's in there: bills we don't want, catalogs we never ordered, and that bulky Valpak stuffed with coupons we'll never use. But somehow, despite the noise, there's a quiet hope we might find something meaningful. And every once in a while, we do. In a society obsessed with social media, texts, AI, speed and automation, the handwritten thank-you note has become an endangered species.


Fabrication and Characterization of Additively Manufactured Stretchable Strain Sensors Towards the Shape Sensing of Continuum Robots

Moyer, Daniel C., Wang, Wenpeng, Karschner, Logan S., Fichera, Loris, Rao, Pratap M.

arXiv.org Artificial Intelligence

This letter describes the manufacturing and experimental characterization of novel stretchable strain sensors for continuum robots. The overarching goal of this research is to provide a new solution for the shape sensing of these devices. The sensors are fabricated via direct ink writing, an extrusion-based additive manufacturing technique. Electrically conductive material (i.e., the ink) is printed into traces whose electrical resistance varies in response to mechanical deformation. The principle of operation of stretchable strain sensors is analogous to that of conventional strain gauges, but with a significantly larger operational window thanks to their ability to withstand larger strain. Among the different conductive materials considered for this study, we opted to fabricate the sensors with a high-viscosity eutectic Gallium-Indium ink, which in initial testing exhibited high linearity (R² ≈ 0.99), a gauge factor ≈ 1, and negligible drift. Benefits of the proposed sensors include (i) ease of fabrication, as they can be conveniently printed in a matter of minutes; (ii) ease of installation, as they can simply be glued to the outside body of a robot; and (iii) ease of miniaturization, which enables integration into millimeter-sized continuum robots.
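The strain-gauge relation the abstract alludes to can be sketched in a few lines. This is an illustration only, not the authors' code; the resistance values are made up, and the gauge factor of roughly 1 is taken from the reported characterization:

```python
# Strain-gauge relation: delta_R / R0 = GF * strain, so strain can be
# recovered from a measured resistance change. GF ~ 1 per the paper's
# initial testing; the resistances below are hypothetical.

def strain_from_resistance(r0: float, r: float, gauge_factor: float = 1.0) -> float:
    """Estimate strain from a baseline resistance r0 and a measured resistance r."""
    return (r - r0) / (r0 * gauge_factor)

# A trace whose resistance rises from 10.0 to 11.5 ohms corresponds
# to a strain of 0.15 (15%) when GF = 1.
print(strain_from_resistance(10.0, 11.5))  # prints 0.15
```

With a gauge factor near 1 and high linearity, this inversion stays simple across the sensor's large operational window, which is what makes the traces usable for shape sensing.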


Peer inside the Herculaneum scroll for the first time in 2,000 years: Scientists use AI to virtually unfurl a 'badly burnt' manuscript that was charred during the eruption of Mount Vesuvius

Daily Mail - Science & tech

It's been left unread for nearly 2,000 years, last glimpsed when the Roman Empire ruled over Europe. Now, scientists have used AI to virtually unfurl one of the Herculaneum scrolls – the ancient documents buried by the eruption of Mount Vesuvius in AD 79. 'It's an incredible moment in history as librarians, computer scientists and scholars of the classical period are collaborating to see the unseen,' said Richard Ovenden, senior executive of the Bodleian Libraries. 'The astonishing strides forward made with imaging and AI are enabling us to look inside scrolls that have not been read for almost 2,000 years.' The Herculaneum scrolls are thought to contain profound philosophical and literary texts from ancient Greek and Roman scholars. The problem is that any attempts to unroll the burnt cylinders will turn them to dust because they are so fragile – meaning the words would be lost forever.


Low-Cost 3D printed, Biocompatible Ionic Polymer Membranes for Soft Actuators

Trümpler, Nils, Kanno, Ryo, David, Niu, Huch, Anja, Nguyen, Pham Huy, Jurinovs, Maksims, Nyström, Gustav, Gaidukovs, Sergejs, Kovac, Mirko

arXiv.org Artificial Intelligence

Ionic polymer actuators, in essence, consist of ion exchange polymers sandwiched between layers of electrodes. They have recently gained recognition as promising candidates for soft actuators due to their lightweight nature, noise-free operation, and low driving voltages. However, the materials traditionally utilized to develop them are often not human- or environmentally friendly. Thus, to address this issue, researchers have been focusing on developing biocompatible versions of this actuator. Despite this, such actuators still face challenges in achieving high performance in payload capacity, bending capability, and response time. In this paper, we present a biocompatible ionic polymer actuator whose membrane is fully 3D printed utilizing a direct ink writing method. The structure of the printed membranes consists of biodegradable ionic fluid encapsulated within layers of activated carbon polymers. From microscopic observations of its structure, we confirmed that the ionic polymer is well encapsulated. The actuators can achieve a bending performance of up to 124° (a curvature of 0.82 cm⁻¹), which, to our knowledge, is the highest curvature attained by any bending ionic polymer actuator to date. They can operate comfortably at driving frequencies of up to 2 Hz and can achieve blocked forces of up to 0.76 mN. Our results showcase a promising, high-performing biocompatible ionic polymer actuator, whose membrane can be easily manufactured in a single step using a standard FDM 3D printer. This approach paves the way for creating customized designs for functional soft robotic applications, including human-interactive devices, in the near future.
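The reported bending angle and curvature can be sanity-checked against each other. Assuming constant-curvature bending (a common simplification for such actuators, not something the abstract states), curvature equals the bend angle in radians divided by the arc length, so the two reported figures imply an actuator arc length of roughly 2.6 cm:

```python
import math

# Constant-curvature check (an assumption, not the paper's model):
# curvature [1/cm] = bend angle [rad] / arc length [cm].
angle_rad = math.radians(124)          # reported bend of 124 degrees
curvature = 0.82                       # reported curvature, cm^-1
arc_length_cm = angle_rad / curvature  # implied actuator arc length
print(round(arc_length_cm, 2))         # prints 2.64
```

This is only a consistency sketch; the actual actuator geometry may deviate from a perfect circular arc.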


MathWriting: A Dataset For Handwritten Mathematical Expression Recognition

Gervais, Philippe, Fadeeva, Asya, Maksai, Andrii

arXiv.org Artificial Intelligence

Online text recognition models have improved substantially in the past few years, owing both to better model architectures and to larger datasets. Mathematical expression (ME) recognition is a more complex task that has not received as much attention. However, the problem differs from text recognition in a number of interesting ways, which can prevent improvements on one from transferring to the other. Though MEs share most of their symbols with text, they follow a more rigid structure that is also two-dimensional. Whereas text can be treated to some extent as a one-dimensional problem amenable to sequence modeling, MEs cannot, because the relative position of symbols in space is meaningful.


Representing Online Handwriting for Recognition in Large Vision-Language Models

Fadeeva, Anastasiia, Schlattner, Philippe, Maksai, Andrii, Collier, Mark, Kokiopoulou, Efi, Berent, Jesse, Musat, Claudiu

arXiv.org Artificial Intelligence

The adoption of tablets with touchscreens and styluses is increasing, and a key feature is converting handwriting to text, enabling search, indexing, and AI assistance. Meanwhile, vision-language models (VLMs) are now the go-to solution for image understanding, thanks to both their state-of-the-art performance across a variety of tasks and the simplicity of a unified approach to training, fine-tuning, and inference. While VLMs obtain high performance on image-based tasks, they perform poorly on handwriting recognition when applied naively, i.e., by rendering handwriting as an image and performing optical character recognition (OCR). In this paper, we study online handwriting recognition with VLMs, going beyond naive OCR. We propose a novel tokenized representation of digital ink (online handwriting) that represents the time-ordered sequence of strokes both as text and as an image. We show that this representation yields results comparable to or better than those of state-of-the-art online handwriting recognizers. Wide applicability is shown through results with two different VLM families on multiple public datasets. Our approach can be applied to off-the-shelf VLMs, does not require any changes in their architecture, and can be used in both fine-tuning and parameter-efficient tuning. We perform a detailed ablation study to identify the key elements of the proposed representation.
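To make the idea of a text-based ink representation concrete, here is a minimal sketch of one common scheme: quantizing each stroke's (x, y) points onto a coarse grid and emitting them as tokens, with a separator between strokes. The function name, grid size, and token format are all illustrative assumptions; the paper's actual tokenization is not reproduced here:

```python
# Illustrative sketch of turning digital ink into text tokens a VLM can
# consume. Each stroke is a list of (x, y) points; coordinates are
# quantized to a small grid and strokes are separated by a pen-up token.
# All names and parameters here are hypothetical, not from the paper.

def ink_to_tokens(strokes, grid=64, width=640.0, height=640.0):
    """Serialize a list of strokes into a space-separated token string."""
    tokens = []
    for i, stroke in enumerate(strokes):
        if i > 0:
            tokens.append("<pen_up>")  # marks stroke boundaries
        for x, y in stroke:
            qx = min(int(x / width * grid), grid - 1)
            qy = min(int(y / height * grid), grid - 1)
            tokens.append(f"{qx},{qy}")
    return " ".join(tokens)

# Two short strokes become a compact, time-ordered text sequence.
print(ink_to_tokens([[(0, 0), (320, 320)], [(640, 0)]]))
# prints: 0,0 32,32 <pen_up> 63,0
```

Pairing such a sequence with a rendered image of the same ink gives the model both temporal order and spatial layout, which is the intuition behind the dual text-and-image representation described above.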


DSS: Synthesizing long Digital Ink using Data augmentation, Style encoding and Split generation

Timofeev, Aleksandr, Fadeeva, Anastasiia, Afonin, Andrei, Musat, Claudiu, Maksai, Andrii

arXiv.org Artificial Intelligence

As generative text models can give increasingly long answers, we tackle the problem of synthesizing long text in digital ink. We show that the models commonly used for this task fail to generalize to long-form data, and how this problem can be solved by augmenting the training data, changing the model architecture, and changing the inference procedure. These methods use a contrastive learning technique and are tailored specifically to the handwriting domain. They can be applied to any encoder-decoder model that works with digital ink. We demonstrate that our method reduces the character error rate on long-form English data by half compared to a baseline RNN, and by 16% compared to the previous approach that aims at addressing the same problem. We show that all three parts of the method improve the recognizability of the generated ink. In addition, we evaluate the synthesized data in a human study and find that people perceive most of the generated data as real.