neglect
GenEOL: Harnessing the Generative Power of LLMs for Training-Free Sentence Embeddings
Thirukovalluru, Raghuveer, Dhingra, Bhuwan
Training-free embedding methods directly leverage pretrained large language models (LLMs) to embed text, bypassing the costly and complex procedure of contrastive learning. Previous training-free embedding methods have mainly focused on optimizing embedding prompts and have overlooked the benefits of utilizing the generative abilities of LLMs. We propose a novel method, GenEOL, which uses LLMs to generate diverse transformations of a sentence that preserve its meaning, and aggregates the resulting embeddings of these transformations to enhance the overall sentence embedding. GenEOL significantly outperforms the existing training-free embedding methods by an average of 2.85 points across several LLMs on the sentence semantic text similarity (STS) benchmark. Our analysis shows that GenEOL stabilizes representation quality across LLM layers and is robust to perturbations of embedding prompts. GenEOL also achieves notable gains on multiple clustering, reranking and pair-classification tasks from the MTEB benchmark.
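The aggregation step described in the abstract is simple enough to sketch. Below is a minimal, hypothetical illustration of the idea (not the authors' implementation): generate_transformations and embed_with_llm are assumed placeholders for an LLM paraphrasing call and a prompt-based embedding call, and the final embedding is taken as the mean of the resulting vectors.

```python
# Minimal sketch of the GenEOL-style aggregation idea; not the authors' code.
# generate_transformations and embed_with_llm are hypothetical placeholders.
import numpy as np

def generate_transformations(sentence: str, k: int = 4) -> list[str]:
    """Placeholder: ask a generator LLM for k meaning-preserving rewrites of `sentence`."""
    raise NotImplementedError

def embed_with_llm(text: str) -> np.ndarray:
    """Placeholder: embed `text` with a frozen LLM via an embedding prompt."""
    raise NotImplementedError

def sentence_embedding(sentence: str, k: int = 4) -> np.ndarray:
    # Embed the original sentence together with its k generated transformations,
    # then average the vectors to form the final training-free embedding.
    texts = [sentence] + generate_transformations(sentence, k)
    vectors = np.stack([embed_with_llm(t) for t in texts])
    return vectors.mean(axis=0)
```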
GPT's Judgements Under Uncertainty
Saeedi, Payam, Goodarzi, Mahsa
We investigate the presence of cognitive biases in three large language models (LLMs): GPT-4o, Gemma 2, and Llama 3.1. The study uses 1,500 experiments across nine established cognitive biases to evaluate the responses and consistency of the models. GPT-4o demonstrated the strongest overall performance. Gemma 2 showed strengths in addressing the sunk cost fallacy and prospect theory; however, its performance varied across different biases. Llama 3.1 consistently underperformed, relying on heuristics and exhibiting frequent inconsistencies and contradictions. The findings highlight the challenges of achieving robust and generalizable reasoning in LLMs, and underscore the need for further development to mitigate biases in artificial general intelligence (AGI). The study emphasizes the importance of integrating statistical reasoning and ethical considerations in future AI development.

Cognitive biases and heuristics are well-established phenomena of the human mind, shaping how individuals process information, make judgments, and make decisions. These biases emerge from heuristics -- mental shortcuts that simplify complex tasks by substituting them with cognitively easier alternatives [1]. While heuristics enable quick and efficient reasoning, they also introduce systematic errors that impact judgment and decision-making [2]-[4]. Understanding whether such biases, embedded in the data and interactions that shape Large Language Models (LLMs), are reflected in their outputs is not only critical for evaluating their alignment with human cognition but also vital for the development of Artificial General Intelligence (AGI). AGI, envisioned as systems capable of performing any intellectual task a human can, must navigate the intricacies of human-like reasoning while avoiding harmful or irresponsible biases.
Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
Chefer, Hila, Alaluf, Yuval, Vinker, Yael, Wolf, Lior, Cohen-Or, Daniel
Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.
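As a rough sketch of the guidance step described above (the names and exact loss below are assumptions of mine, not the released implementation): given per-token cross-attention maps from the denoising network at a given step, one can score the most neglected subject token and nudge the latent against the gradient of that score, so the next forward pass attends to that subject more strongly.

```python
# Rough sketch of attention-based semantic guidance during inference; the function
# names and loss form are my assumptions, not the released Attend-and-Excite code.
import torch

def neglect_loss(attn_maps: torch.Tensor, subject_token_ids: list[int]) -> torch.Tensor:
    """attn_maps: (num_tokens, H, W) cross-attention over image patches.
    Penalize the subject token that currently receives the weakest peak attention."""
    peaks = torch.stack([attn_maps[i].max() for i in subject_token_ids])
    return (1.0 - peaks).max()

def excite_latent(latent: torch.Tensor, loss: torch.Tensor, step_size: float = 20.0) -> torch.Tensor:
    # Move the latent against the gradient of the loss so the next denoising step
    # strengthens ("excites") attention to the neglected subject token.
    grad, = torch.autograd.grad(loss, [latent])
    return latent - step_size * grad
```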
Neural Basis of Object-Centered Representations
We present a neural model that can perform eye movements to a particular side of an object regardless of the position and orientation of the object in space, a generalization of a task which has been recently used by Olson and Gettner [4] to investigate the neural structure of object-centered representations. Our model uses an intermediate representation in which units have oculocentric receptive fields -- just like collicular neurons -- whose gain is modulated by the side of the object to which the movement is directed, as well as the orientation of the object. We show that these gain modulations are consistent with Olson and Gettner's single cell recordings in the supplementary eye field. This demonstrates that it is possible to perform an object-centered task without a representation involving an object-centered map, viz., without neurons whose receptive fields are defined in object-centered coordinates. We also show that the same approach can account for object-centered neglect, a situation in which patients with a right parietal lesion neglect the left side of objects regardless of the orientation of the objects.
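A toy numerical illustration of the gain-modulation scheme (the tuning curves and gain values below are illustrative assumptions, not the model's fitted parameters): each unit keeps an eye-centered Gaussian receptive field, and its output is simply scaled by a gain that depends on the commanded side of the object and the object's orientation.

```python
# Toy illustration of gain-modulated oculocentric tuning; the tuning function and
# gain values are illustrative assumptions, not parameters from the paper.
import numpy as np

def oculocentric_tuning(preferred_xy: np.ndarray, target_xy: np.ndarray, sigma_deg: float = 10.0) -> float:
    """Gaussian tuning to the eye-centered location of the upcoming eye movement (degrees)."""
    return float(np.exp(-np.sum((target_xy - preferred_xy) ** 2) / (2.0 * sigma_deg ** 2)))

def object_gain(commanded_side: str, object_orientation_deg: float) -> float:
    """Multiplicative gain set by the commanded side of the object and its orientation;
    the receptive field itself stays eye-centered, never object-centered."""
    side_factor = 1.0 if commanded_side == "left" else 0.6          # illustrative values
    orientation_factor = 0.5 * (1.0 + np.cos(np.deg2rad(object_orientation_deg)))
    return side_factor * orientation_factor

# A unit's response is its oculocentric tuning scaled by the object-dependent gain.
response = object_gain("left", 30.0) * oculocentric_tuning(np.array([5.0, 0.0]),
                                                           np.array([4.0, 1.0]))
```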
A.I. Artificial Intelligence shows us a future where we neglect to dream
The Verge is a place where you can consider the future. In Yesterday's Future, we revisit a movie about the future and consider the things it tells us about today, tomorrow, and yesterday. The future: A.I. begins with a brief summary of the sorry state of the world: climate change has melted the polar ice caps, wiping out coastal cities and severely reducing the human population. With regulations in place for reproduction on a resource-starved planet, corporations developed Mecha -- androids that appear human but lack emotions. They're seen as objects -- useful for labor or sex work, just human enough to not be strange but machine enough to not mistake them for people.
10 Tips For Effective Robotic Process Automation – Botware
Quick wins are possible with RPA, but propelling RPA to run at scale is a different animal. Dave Kuder, a principal with Deloitte Consulting LLP, says that many RPA hiccups stem from poor expectations management. Bold claims about RPA from vendors and implementation consultants haven't helped. That's why it's crucial for CIOs to go in with a cautiously optimistic mindset. "If you go in with open eyes you'll be a lot happier with the result," Kuder says.
Hundreds of Google employees urge company to resist support for Ice
Tech giant Google is facing a demand from hundreds of employees for an assurance that it will not bid on a government cloud computing contract that could be used to enforce US immigration policies on the southern border. A group of employees called Googlers for Human Rights posted a public petition overnight Thursday urging the company to resist tendering for a US Customs and Border Protection or Immigration and Customs Enforcement contract. It is not clear if Google or its parent Alphabet has already applied – the application deadline was 1 August – but the tech giant has previously drawn employee protests after signing cloud-computing or data storage deals with the government. The company confirmed in March 2018 that it was involved with Project Maven, a $250m Department of Defense artificial intelligence initiative designed to provide 3D mapping that could be used for improved drone-strike battlefield accuracy. Over 3,000 Google employees signed a petition in protest against the company's involvement.
Two Major Difficulties in AI and One Applied Solution
These are the heydays of AI. New and exciting applications are found on an almost daily basis, proving that the AI promise was not in vain. However, this success comes at a price, and I would like to highlight two such costs and one solution on the path to a remedy. Over the past two years, AI has been suffering from a growing problem of trust. The lack of transparency, fairness, and safety[i] of ML and NN algorithms (AKA the black box), together with their weaknesses in accounting for specific situations, impedes the desired adoption rate of AI algorithms and creates frustration in different domains of our modern life.