Sardine-inspired washing machine filter removes 99% of microplastics

Popular Science

The home appliance can easily generate as much as 500 grams of microplastics each year. Breakthroughs, discoveries, and DIY tips sent every weekday. Fish gills may inspire an unexpected solution to one of our biggest sources of microplastics. According to researchers at Germany's University of Bonn, taking a cue from the animals' filtration systems might help remove the vast majority of harmful plastic particulates from washing machine wastewater. Microplastics are a huge problem.


A googly-eyed fish could upend evolutionary history

Popular Science

Using advanced imaging techniques, an international research team has reconstructed an ancient extinct fish's heart, brain, and fins from an intricately detailed, fingernail-sized fossil fragment. But cartoon lookalikes aside, the creature may help rewrite one of the earliest chapters in animal evolution. Its details are described in a study published on August 6 in Nature. Earth's first fish arrived about half a billion years ago, but not anywhere near the ocean's surface.


LLMs can learn self-restraint through iterative self-reflection

Piché, Alexandre, Milios, Aristides, Bahdanau, Dzmitry, Pal, Chris

arXiv.org Artificial Intelligence

In order to be deployed safely, Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and the uncertainty associated with specific topics. This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach since it depends on the internal knowledge of an LLM. By default, LLMs are trained to maximize the next-token likelihood, which does not teach the model to modulate its answer based on its level of uncertainty. In order to learn self-restraint, we devise a utility function that encourages the model to produce responses only when it is confident in them. This utility function can be used to score generations of different lengths as well as abstention. To optimize this function, we introduce ReSearch, a process of "self-reflection" consisting of iterative self-prompting and self-evaluation. We use the ReSearch algorithm to generate synthetic data on which we finetune our models. Compared to their original versions, our resulting models generate fewer hallucinations overall at no additional inference cost, for both known and unknown topics, as the model learns to selectively restrain itself. In addition, our method elegantly incorporates the ability to abstain by augmenting the samples generated by the model during the search procedure with an answer expressing abstention.
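The search loop described above (score candidate answers plus an explicit abstention, keep the highest-utility one) can be illustrated with a minimal sketch. Everything here is hypothetical scaffolding, not the paper's implementation: `self_evaluate` stands in for the LLM self-prompting step, and the utility weights (`abstain_reward`, `length_penalty`) are made-up illustrative values.

```python
def self_evaluate(prompt: str, answer: str) -> float:
    """Toy stand-in for the self-evaluation step (0..1 confidence).
    In the paper this score comes from prompting the LLM itself;
    here a tiny hard-coded knowledge base simulates it."""
    known = {"capital of France": "Paris"}  # hypothetical internal knowledge
    return 0.9 if known.get(prompt) == answer else 0.2

def utility(confidence: float, length: int, abstained: bool,
            abstain_reward: float = 0.5, length_penalty: float = 0.01) -> float:
    """Utility rewarding confident answers, mildly penalising length,
    and assigning abstention a fixed middling reward (values illustrative)."""
    if abstained:
        return abstain_reward
    return confidence - length_penalty * length

def research(prompt: str, candidates: list[str], rounds: int = 2) -> str:
    """Iterative self-reflection: start from abstention as the baseline,
    score each candidate, keep the best; in the real algorithm each round
    would re-prompt the model for new candidates."""
    best, best_u = "I don't know.", utility(0.0, 0, abstained=True)
    for _ in range(rounds):
        for cand in candidates:
            u = utility(self_evaluate(prompt, cand), len(cand.split()),
                        abstained=False)
            if u > best_u:
                best, best_u = cand, u
    return best
```

Note how abstention falls out naturally: a low-confidence answer scores below the abstention reward, so the model "restrains itself" on topics it does not know.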


What your favourite horror classics would look like as modern monsters this Halloween, according to AI - from Pumpkinhead to Nosferatu

Daily Mail - Science & tech

If there's one thing that truly scares horror fans, it's a modern reboot of a beloved franchise. However, while those ageing terrors might have their charms, they don't quite match up to the fear factor of modern monsters. Now, AI has been used to reimagine what some of our favourite on-screen spooks might look like with modern film-making techniques. According to the AI, the Alien Queen from Aliens would be sleeker and shinier than her predecessor. Meanwhile, Stripe from Gremlins would be absolutely terrifying, with huge eyes - and enormous fangs to match.


What the U.N.'s AI Advisory Group Will Do

TIME - Tech

U.N. Secretary-General António Guterres unveiled Thursday a new advisory body dedicated to developing consensus around the risks posed by artificial intelligence and how international cooperation can help meet those challenges. While the body will have little power itself, its recommendations could decide the form and function of a U.N. agency for the governance of AI, an organization that many believe will be required as the world confronts the complex set of issues the technology will pose. "Without entering into a host of doomsday scenarios, it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion, and threaten democracy itself," said Guterres, speaking at the announcement press conference. "For all these reasons, I have called for a global multidisciplinary, multistakeholder conversation on the governance of AI, so that its benefits for all of humanity are maximized, and the risks contained are diminished." The new group is one of several international AI initiatives already underway, including the upcoming U.K. AI Safety Summit and G7 AI code of conduct.


Generating Images with Multimodal Language Models

Koh, Jing Yu, Fried, Daniel, Salakhutdinov, Ruslan

arXiv.org Artificial Intelligence

We propose a method to fuse frozen text-only large language models (LLMs) with pre-trained image encoder and decoder models, by mapping between their embedding spaces. Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue. Ours is the first approach capable of conditioning on arbitrarily interleaved image and text inputs to generate coherent image (and text) outputs. To achieve strong performance on image generation, we propose an efficient mapping network to ground the LLM to an off-the-shelf text-to-image generation model. This mapping network translates hidden representations of text into the embedding space of the visual models, enabling us to leverage the strong text representations of the LLM for visual outputs. Our approach outperforms baseline generation models on tasks with longer and more complex language. In addition to novel image generation, our model is also capable of image retrieval from a prespecified dataset, and decides whether to retrieve or generate at inference time. This is done with a learnt decision module which conditions on the hidden representations of the LLM. Our model exhibits a wider range of capabilities compared to prior multimodal language models. It can process image-and-text inputs, and produce retrieved images, generated images, and generated text -- outperforming non-LLM based generation models across several text-to-image tasks that measure context dependence.
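The two learned components the abstract describes, a mapping network that translates LLM hidden states into the visual model's embedding space and a decision module that chooses between retrieval and generation, can be sketched as follows. This is a toy illustration with made-up shapes and weights, not the paper's architecture: real mapping networks operate on high-dimensional hidden states and are trained end-to-end.

```python
def linear_map(hidden: list[float],
               weights: list[list[float]]) -> list[float]:
    """Map an LLM hidden state (llm_dim,) into the visual embedding space
    via a learned linear layer; weights has shape (visual_dim, llm_dim).
    Bias terms are omitted for brevity."""
    return [sum(w * h for w, h in zip(row, hidden)) for row in weights]

def decide_retrieve_or_generate(hidden: list[float],
                                decision_w: list[float],
                                threshold: float = 0.0) -> str:
    """Toy decision module: a linear score on the LLM hidden state decides
    at inference time whether to retrieve an existing image or generate a
    new one (weights and threshold are hypothetical)."""
    score = sum(w * h for w, h in zip(decision_w, hidden))
    return "retrieve" if score > threshold else "generate"
```

The design point is that the frozen LLM and the frozen image model never change; only the small translation layer between their embedding spaces (and the decision head on top) is learned.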