Now woke scientists want to change the definition of WOOL in the dictionary to include plant-based alternatives

Daily Mail - Science & tech

Dictionary entries for the word 'wool' must be urgently updated to include plant-based alternatives, woke scientists say. For centuries, the term has been used to describe the soft, curly hair forming the fleecy coat of sheep and other animals. It even features prominently in nursery rhymes such as 'Baa Baa Black Sheep', sung by children across the UK. But the Oxford English Dictionary's definition must be modernised to include plant-powered varieties that 'leave sheep in peace', campaigners say. People for the Ethical Treatment of Animals (PETA) argues that wool derived from linen, hemp and bamboo has existed for centuries.


SCOPE: Language Models as One-Time Teacher for Hierarchical Planning in Text Environments

Lu, Haoye, Seshadri, Pavan, Suleman, Kaheer

arXiv.org Artificial Intelligence

Long-term planning in complex, text-based environments presents significant challenges due to open-ended action spaces, ambiguous observations, and sparse feedback. Recent research suggests that large language models (LLMs) encode rich semantic knowledge about the world, which can be valuable for guiding agents in high-level reasoning and planning across both embodied and purely textual settings. However, existing approaches often depend heavily on querying LLMs during training and inference, making them computationally expensive and difficult to deploy efficiently. In addition, these methods typically employ a pretrained, unaltered LLM whose parameters remain fixed throughout training, providing no opportunity for adaptation to the target task. To address these limitations, we introduce SCOPE (Subgoal-COnditioned Pretraining for Efficient planning), a one-shot hierarchical planner that leverages LLM-generated subgoals only at initialization to pretrain a lightweight student model. Unlike prior approaches that distill LLM knowledge by repeatedly prompting the model to adaptively generate subgoals during training, our method derives subgoals directly from example trajectories. This design removes the need for repeated LLM queries, significantly improving efficiency, though at the cost of reduced explainability and potentially suboptimal subgoals. Despite their suboptimality, our results on the TextCraft environment show that LLM-generated subgoals can still serve as a strong starting point for hierarchical goal decomposition in text-based planning tasks. Compared to the LLM-based hierarchical agent ADaPT (Prasad et al., 2024), which achieves a 0.52 success rate, our method reaches 0.56 and reduces inference time from 164.4 seconds to just 3.0 seconds.
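The one-time distillation step the abstract describes — deriving subgoals from example trajectories at initialization instead of querying the LLM during training — can be sketched as follows. This is a minimal illustration, not the paper's code: the function names, the `llm_label` callable, and the trajectory/record fields are all hypothetical.

```python
def extract_subgoals(trajectory, llm_label):
    """Query the teacher LLM once per example trajectory, at initialization.

    llm_label is a hypothetical one-shot call mapping a trajectory to a
    list of subgoal strings, one per step.
    """
    return llm_label(trajectory)

def build_pretraining_set(trajectories, llm_label):
    """Build the student's pretraining data with no further LLM queries."""
    dataset = []
    for traj in trajectories:
        subgoals = extract_subgoals(traj, llm_label)
        # Pair each (observation, action) step with its governing subgoal,
        # so the lightweight student can be trained offline on these records.
        for step, subgoal in zip(traj, subgoals):
            dataset.append({"obs": step["obs"],
                            "subgoal": subgoal,
                            "action": step["action"]})
    return dataset
```

Because the teacher is consulted only here, inference later runs entirely on the student, which is what drives the reported drop from 164.4 to 3.0 seconds.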


Utilising Explainable Techniques for Quality Prediction in a Complex Textiles Manufacturing Use Case

Forsberg, Briony, Williams, Dr Henry, MacDonald, Prof Bruce, Chen, Tracy, Hamzeh, Dr Reza, Hulse, Dr Kirstine

arXiv.org Artificial Intelligence

This paper develops an approach to classify instances of product failure in a complex textiles manufacturing dataset using explainable techniques. The dataset used in this study was obtained from a New Zealand manufacturer of woollen carpets and rugs. In investigating the trade-off between accuracy and explainability, three different tree-based classification algorithms were evaluated: a Decision Tree and two ensemble methods, Random Forest and XGBoost. Additionally, three feature selection methods were also evaluated: the SelectKBest method, using chi-squared as the scoring function, the Pearson Correlation Coefficient, and the Boruta algorithm. Not surprisingly, the ensemble methods typically produced better results than the Decision Tree model. The Random Forest model yielded the best results overall when combined with the Boruta feature selection technique. Finally, a tree ensemble explaining technique was used to extract rule lists to capture necessary and sufficient conditions for classification by a trained model that could be easily interpreted by a human. Notably, several features that were in the extracted rule lists were statistical features and calculated features that were added to the original dataset. This demonstrates the influence that bringing in additional information during the data preprocessing stages can have on the ultimate model performance.
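The pipeline the abstract describes can be sketched with scikit-learn. Synthetic data stands in for the proprietary carpet-manufacturing dataset, and chi-squared `SelectKBest` is shown for the feature-selection stage (Boruta requires a third-party package); the choice of `k=10` is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the product-failure data; chi2 scoring
# requires non-negative features, so shift each column to start at 0.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)
X = X - X.min(axis=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(chi2, k=10)),  # keep the 10 best-scoring features
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipe.fit(X_tr, y_tr)
accuracy = pipe.score(X_te, y_te)
```

Swapping `RandomForestClassifier` for a `DecisionTreeClassifier` or an XGBoost model, and the selector for Pearson- or Boruta-based selection, reproduces the grid of comparisons the paper evaluates.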


Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference

Xue, Anton, Khare, Avishree, Alur, Rajeev, Goel, Surbhi, Wong, Eric

arXiv.org Artificial Intelligence

We study how to subvert language models from following the rules. We model rule-following as inference in propositional Horn logic, a mathematical system in which rules have the form "if $P$ and $Q$, then $R$" for some propositions $P$, $Q$, and $R$. We prove that although transformers can faithfully abide by such rules, maliciously crafted prompts can nevertheless mislead even theoretically constructed models. Empirically, we find that attacks on our theoretical models mirror popular attacks on large language models. Our work suggests that studying smaller theoretical models can help understand the behavior of large language models in rule-based settings like logical reasoning and jailbreak attacks.
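The rule-following model the abstract formalizes — inference in propositional Horn logic — can be made concrete with a few lines of forward chaining. This is a generic illustration of the logical setting, not the authors' transformer construction.

```python
def forward_chain(facts, rules):
    """Propositional Horn-clause inference.

    Each rule is a pair (body, head), read as
    'if all propositions in body hold, then head holds'.
    Returns the set of all derivable propositions.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(p in derived for p in body):
                derived.add(head)
                changed = True
    return derived

# "if P and Q, then R", plus a chained rule "if R, then S"
rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(forward_chain({"P", "Q"}, rules))  # derives R, then S
```

A "subversion" in the paper's sense is a prompt that makes a model deviate from this closed-form inference, e.g. deriving a head whose body was never satisfied.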