Representation Learning via Consistent Assignment of Views over Random Partitions
CARP learns prototypes in an end-to-end online fashion using gradient descent, without additional non-differentiable modules to solve the cluster assignment problem. CARP optimizes a new pretext task based on random partitions of prototypes that regularizes the model and enforces consistency between views' assignments. Additionally, our method improves training stability and prevents collapsed solutions in joint-embedding training. Through an extensive evaluation, we demonstrate that CARP's representations are suitable for learning downstream tasks. We evaluate the capabilities of CARP's representations on 17 datasets across many standard protocols, including linear evaluation, few-shot classification, $k$-NN, $k$-means, image retrieval, and copy detection. We compare CARP's performance to that of 11 existing self-supervised methods. We extensively ablate our method and demonstrate that our proposed random partition pretext task improves the quality of the learned representations by devising multiple random classification tasks. In transfer learning tasks, CARP achieves the best performance on average against many SSL methods trained for longer.
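The random-partition idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the symmetric cross-entropy form, the block count, and all shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def random_partition_consistency(z1, z2, prototypes, n_blocks=4, rng=rng):
    """Consistency loss averaged over random blocks of prototypes.

    z1, z2     -- embeddings of two views of the same image, shape (d,)
    prototypes -- (K, d) learnable prototype matrix (K divisible by n_blocks)
    """
    K = prototypes.shape[0]
    # Randomly partition the K prototypes into n_blocks disjoint groups,
    # turning one K-way assignment into several smaller classification tasks.
    blocks = rng.permutation(K).reshape(n_blocks, -1)
    loss = 0.0
    for block in blocks:
        logits1 = z1 @ prototypes[block].T  # similarities within this block
        logits2 = z2 @ prototypes[block].T
        p1, p2 = softmax(logits1), softmax(logits2)
        # Symmetric cross-entropy pulls the two views' block-wise
        # assignment distributions toward each other.
        loss += -0.5 * (p1 @ np.log(p2 + 1e-9) + p2 @ np.log(p1 + 1e-9))
    return loss / n_blocks
```

Because each block poses an independent small classification problem, the model cannot trivially collapse all views onto one prototype: every random block forces a fresh assignment decision, which is the regularization effect the abstract describes.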
Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning
Chain-of-thought prompting (CoT) and tool augmentation have been validated in recent work as effective practices for improving large language models (LLMs) at step-by-step reasoning on complex math-related tasks. However, most existing math reasoning datasets cannot fully evaluate and analyze the ability of LLMs to manipulate tools and perform reasoning, as they often require only a few tool invocations or lack annotations for evaluating intermediate reasoning steps, thus supporting only outcome evaluation. To address the issue, we construct CARP
CARPAS: Towards Content-Aware Refinement of Provided Aspects for Summarization in Large Language Models
Tian, Yong-En, Tang, Yu-Chien, Yen, An-Zi, Peng, Wen-Chih
Aspect-based summarization has attracted significant attention for its ability to generate more fine-grained and user-aligned summaries. While most existing approaches assume a set of predefined aspects as input, real-world scenarios often present challenges where these given aspects may be incomplete, irrelevant, or entirely missing from the document. Users frequently expect systems to adaptively refine or filter the provided aspects based on the actual content. In this paper, we introduce this novel task setting, termed Content-Aware Refinement of Provided Aspects for Summarization (CARPAS), with the aim of dynamically adjusting the provided aspects based on the document context before summarizing. We construct three new datasets to facilitate our pilot experiments, and by using LLMs with four representative prompting strategies on this task, we find that LLMs tend to predict an overly comprehensive set of aspects, which often results in excessively long and misaligned summaries. Building on this observation, we propose a preliminary subtask to predict the number of relevant aspects, and demonstrate that the predicted number can serve as effective guidance for the LLMs, reducing the inference difficulty and enabling them to focus on the most pertinent aspects. Our extensive experiments show that the proposed approach significantly improves performance across all datasets. Moreover, our deeper analyses examine LLMs' compliance when the requested number of aspects differs from their own estimations, establishing a crucial insight for the deployment of LLMs in similar real-world applications.
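The two-stage pipeline the abstract proposes (first predict how many aspects are relevant, then summarize with that count as guidance) can be sketched roughly as below. All names, prompt wordings, and the clamping step are illustrative assumptions; `llm` stands in for any prompt-to-text callable.

```python
def refine_and_summarize(document, provided_aspects, llm):
    """Two-stage sketch of aspect refinement before summarization.

    document         -- source text to summarize
    provided_aspects -- user-supplied aspect list (possibly noisy/incomplete)
    llm              -- any callable mapping a prompt string to a response string
    """
    # Stage 1: ask for the number of aspects actually covered by the document.
    count_prompt = (
        "Document:\n" + document +
        "\nAspects: " + ", ".join(provided_aspects) +
        "\nHow many of these aspects are actually covered by the document? "
        "Answer with a single integer."
    )
    n = int(llm(count_prompt).strip())
    n = max(0, min(n, len(provided_aspects)))  # clamp to a valid range

    # Stage 2: use the predicted count to constrain the summary.
    summary_prompt = (
        "Document:\n" + document +
        f"\nSelect the {n} most relevant aspects from: " +
        ", ".join(provided_aspects) +
        "\nand write an aspect-based summary covering only those aspects."
    )
    return llm(summary_prompt)
```

Constraining the model to a predicted count, rather than letting it enumerate aspects freely, is the mechanism the abstract credits with avoiding overly comprehensive aspect sets and the long, misaligned summaries they produce.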
This painting uses leather from an invasive Burmese python
Fine artist Laura Shape uses quite an unexpected medium in her visual artwork. It lends striking patterns to her abstract canvases, while helping restore rivers, reefs, and wetlands. Shape uses the leather of invasive species--specifically lionfish, carp, and Burmese pythons. "I use those materials to make vibrant, textured, abstract acrylic pieces," she tells Popular Science via video call.
Dragging dead fish around reveals super power of mucus
By dragging a bunch of dead fish around, scientists may have uncovered a hidden power of one of biology's most important substances--mucus. And what they found might even help us understand the very dawn of vertebrate life on land. First, it's important to know that fish are covered in a thin layer of mucus. This slimy coating (it is also called a "slime coat") is known to keep fish healthy by warding off pathogens. Scientists have also found some evidence that mucus can reduce drag, helping fish swim through the water more easily.