Leopards may have feasted on our earliest ancestors

Popular Science

It took a while for humans to climb the food chain. Most paleobiologists believe humanity truly began around 2 million years ago with an early species of Homo. Part of this evolutionary demarcation stems from the theory that early hominins were among the first primates to consistently shift from the role of "prey" to that of "predator." But according to an analysis of tiny injuries on two fossilized jaw fragments, some researchers now believe our ancestors required a bit more time to ascend the food chain.


Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning

Tran, Quyen, Phan, Hoang, Le, Minh, Truong, Tuan, Phung, Dinh, Ngo, Linh, Nguyen, Thien, Ho, Nhat, Le, Trung

arXiv.org Artificial Intelligence

Drawing inspiration from human learning behaviors, this work proposes a novel approach to mitigate catastrophic forgetting in Prompt-based Continual Learning models by exploiting the relationships between continuously emerging class data. We find that applying human habits of organizing and connecting information can serve as an efficient strategy when training deep learning models. Specifically, by building a hierarchical tree structure based on the expanding set of labels, we gain fresh insights into the data, identifying groups of similar classes that could easily cause confusion. Additionally, we delve deeper into the hidden connections between classes by exploring the original pretrained model's behavior through an optimal transport-based approach. From these insights, we propose a novel regularization loss function that encourages models to focus more on challenging knowledge areas, thereby enhancing overall performance. Experimentally, our method demonstrates significant superiority over the most robust state-of-the-art models on various benchmarks.
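The "optimal transport-based approach" the abstract mentions can be illustrated with a minimal sketch: entropic optimal transport computed by Sinkhorn iterations between two class distributions. This is a generic illustration, not the paper's implementation; the cost matrix, `eps`, and iteration count below are placeholder choices.

```javascript
// Sketch: entropic optimal transport via Sinkhorn iterations.
// cost[i][j] is the cost of moving mass from class i to class j;
// a and b are the two (normalized) class distributions.
function sinkhorn(cost, a, b, eps = 0.1, iters = 200) {
  // Gibbs kernel K = exp(-cost / eps)
  const K = cost.map(row => row.map(c => Math.exp(-c / eps)));
  let u = new Array(a.length).fill(1);
  let v = new Array(b.length).fill(1);
  for (let t = 0; t < iters; t++) {
    // Alternating scaling: u = a / (K v), then v = b / (K^T u)
    u = a.map((ai, i) =>
      ai / K[i].reduce((s, kij, j) => s + kij * v[j], 0));
    v = b.map((bj, j) =>
      bj / K.reduce((s, row, i) => s + row[j] * u[i], 0));
  }
  // Transport plan P_ij = u_i * K_ij * v_j; its marginals
  // approximate a and b, and P reveals which classes "map" to which.
  return K.map((row, i) => row.map((kij, j) => u[i] * kij * v[j]));
}
```

The resulting plan concentrates mass on low-cost pairs, which is the kind of signal one could use to spot closely related (and therefore easily confused) classes.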


Redefining <Creative> in Dictionary: Towards an Enhanced Semantic Understanding of Creative Generation

Feng, Fu, Xie, Yucheng, Yang, Xu, Wang, Jing, Geng, Xin

arXiv.org Artificial Intelligence

Given the challenge that diffusion models face in directly generating creativity, existing methods typically rely on synthesizing reference prompts or images to achieve creative effects. For instance, to combine "Lettuce" and "Mantis" creatively, ConceptLab [43] merges tokens representing these concepts into a new composite token, while BASS [22] uses predefined sampling rules to search for creative outcomes from a large pool of candidate images for each generation, which leads to high computational costs and limited practicality for online applications. In contrast, "a blue banana" can be generated directly without additional training, due to its clear and concrete semantics. Inspired by this, we ask: can we awaken the creativity of diffusion models by enhancing their semantic understanding of "creative"? To achieve this, we propose CreTok, which redefines "creative" as a new specialized token, <CreTok>, so that novel concepts can be creatively generated using <CreTok>. This redefinition enhances the model's semantic understanding for combinatorial creativity, as shown in Figure 1c. Specifically, CreTok builds on the definition of "creativity" from the TP2O task [22] for combinatorial object generation: we redefine the abstract term "creative" within our proposed CangJie dataset for the TP2O task, paired with an adaptive prompt (e.g., "A photo of a <CreTok> mixture"). Furthermore, this meta-creativity enables direct concept combinations without requiring additional training, which significantly reduces both time and computational complexity compared to state-of-the-art (SOTA) creative generation methods such as ConceptLab [43] (4s vs. 120s per image, a 30x speedup) and BASS [22] (4s vs. 2400s per image, a 600x speedup). Further evaluations using GPT-4o [1] and user studies indicate the superior performance of CreTok in terms of integration, originality, and aesthetics, underscoring its effectiveness in fostering combinatorial creativity. Our contributions are as follows: (1) We propose CreTok, a method designed to enhance models' meta-ability by enabling an enhanced understanding of abstract and ambiguous adjectives (e.g., "creative" or "beautiful") through their redefinition as new tokens. We compare CreTok against text-to-image (T2I) models and creative generation methods in terms of computational complexity, human preference ratings, text-image alignment, and other key metrics, targeting human-like creativity, a critical yet underexplored aspect of AI research [28, 29].


The Fake Fake-News Problem and the Truth About Misinformation

The New Yorker

Millions of people have watched Mike Hughes die. It happened on February 22, 2020, not far from Highway 247 near the Mojave Desert city of Barstow, California. A homemade rocket ship with Hughes strapped in it took off from a launching pad mounted on a truck. A trail of steam billowed behind the rocket as it swerved and then shot upward, a detached parachute unfurling ominously in its wake. In a video recorded by the journalist Justin Chapman, Hughes disappears into the sky, a dark pinpoint in a vast, uncaring blueness.


How the leopard got its spots: Age-old question of how animals develop their patterns may have finally been solved - with the aid of British computer pioneer Alan Turing

Daily Mail - Science & tech

From spotty leopards to stripy zebras, nature has no shortage of distinct patterns on animals and plants. Now, the age-old question of how these patterns developed may have finally been solved. Scientists have shown that the same physical process that helps remove dirt from laundry could play a role in how tropical fish get their colourful spots and stripes. For their study, the team at the University of Colorado Boulder drew on the groundbreaking work of British computer pioneer Alan Turing, dating back more than 70 years. They believe their findings could help develop new materials and even new drugs.


Revealed: The biggest animal the average human could beat in a fight, according to AI - so, do you agree?

Daily Mail - Science & tech

It's a question that regularly comes up after a few drinks in the pub: what's the biggest animal you think you could beat in a fight? While many people have conservative answers, others reckon they could take on huge creatures. To settle the debate once and for all, MailOnline turned to everyone's favourite AI bot, ChatGPT. The bot claims that a 'well-prepared' person would stand a chance against a large dog, a wild boar, or even a leopard. However, it adds that 'attempting to fight any animal is highly risky and not advisable.'


Speech-to-Text using JavaScript

#artificialintelligence

Learn how to automatically transcribe speech to text using the Picovoice Leopard Speech-to-Text Web SDK. The SDK runs on all modern browsers. If you are looking for a speech-to-text engine in Node.js, you might want to check the Speech-to-Text using Node.js guide instead. The SpeechRecognition interface of the Web Speech API is freely available, but it is not yet supported across all browsers and has (undocumented) usage limitations.
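The free Web Speech API path mentioned above can be sketched in a few lines. This is a minimal browser example, not the Picovoice Leopard SDK; `createTranscriber` and its parameters are illustrative names, and the constructor is injected so the wiring can be exercised outside a browser.

```javascript
// Minimal sketch of browser speech recognition via the Web Speech API.
// RecognitionCtor is the SpeechRecognition constructor (injected so this
// also works with a stub); onText receives each finalized transcript.
function createTranscriber(RecognitionCtor, onText) {
  const recognition = new RecognitionCtor();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = false; // only emit finalized phrases
  recognition.lang = "en-US";
  recognition.onresult = (event) => {
    // Each result holds one or more alternatives; take the top one.
    const last = event.results[event.results.length - 1];
    onText(last[0].transcript);
  };
  return recognition;
}

// In a browser: prefer the standard name, fall back to the prefixed one.
// const Ctor = window.SpeechRecognition || window.webkitSpeechRecognition;
// const transcriber = createTranscriber(Ctor, (text) => console.log(text));
// transcriber.start();
```

Note that, as the article says, browser support is uneven: the prefixed `webkitSpeechRecognition` fallback is still required in Chromium-based browsers, and some browsers ship neither name.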


Self-Taught AI May Have a Lot in Common With the Human Brain

WIRED

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled "tabby cat" or "tiger cat," for example, to "train" an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient. Such "supervised" training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. (Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.)


Autonomous Cross Domain Adaptation under Extreme Label Scarcity

Weng, Weiwei, Pratama, Mahardhika, Za'in, Choiru, De Carvalho, Marcus, Appan, Rakaraddi, Ashfahani, Andri, Yee, Edward Yapp Kien

arXiv.org Artificial Intelligence

Cross domain multistream classification is a challenging problem calling for fast domain adaptation to handle different but related streams in never-ending and rapidly changing environments. Although existing multistream classifiers assume no labelled samples in the target stream, they still incur expensive labelling costs, since they require fully labelled samples of the source stream. This paper aims to attack the problem of extreme label shortage in cross domain multistream classification, where only very few labelled samples of the source stream are provided before the process runs. Our solution, namely Learning Streaming Process from Partial Ground Truth (LEOPARD), is built upon a flexible deep clustering network whose hidden nodes, layers and clusters are added and removed dynamically with respect to varying data distributions. The deep clustering strategy is underpinned by a simultaneous feature learning and clustering technique, leading to clustering-friendly latent spaces. The domain adaptation strategy relies on adversarial domain adaptation, in which a feature extractor is trained to fool a domain classifier that distinguishes source from target streams. Our numerical study demonstrates the efficacy of LEOPARD, which delivers improved performance compared to prominent algorithms in 15 of 24 cases. Source codes of LEOPARD are shared in \url{https://github.com/wengweng001/LEOPARD.git} to enable further study.