Laziness


'It shows such a laziness': why I refuse to date someone who uses ChatGPT

The Guardian

'OK, so ChatGPT helps you write your grocery list. Does your individual convenience outweigh the societal harm it can cause?' It's the ultimate ick: trying to form a deep, lasting connection with a person who outsources original thought. It was a setting fit for a Nancy Meyers film.


Difficult Task Yes but Simple Task No: Unveiling the Laziness in Multimodal LLMs

Zhao, Sihang, Yuan, Youliang, Tang, Xiaoying, He, Pinjia

arXiv.org Artificial Intelligence

Multimodal Large Language Models (MLLMs) demonstrate a strong understanding of the real world and can even handle complex tasks. However, they still fail on some straightforward visual question-answering (VQA) problems. This paper dives deeper into this issue, revealing that models tend to err when answering easy questions (e.g. Yes/No questions) about an image even though they can correctly describe it. We refer to this behavioral discrepancy between difficult and simple questions as model laziness. To systematically investigate it, we manually construct LazyBench, a benchmark that includes Yes/No, multiple-choice, and short-answer questions as well as image-description tasks, all related to the same subjects in the images. Based on LazyBench, we observe that laziness is widespread in current advanced MLLMs (e.g. GPT-4o, Gemini-1.5-pro, Claude 3, and LLaVA-v1.5-13B) and is more pronounced in stronger models. We also analyze the failure cases of LLaVA-v1.5-13B on the VQA v2 benchmark and find that about half of them are caused by model laziness, which further highlights the importance of ensuring that a model fully utilizes its capability. To this end, we conduct a preliminary exploration of how to mitigate laziness and find that chain of thought (CoT) can effectively address this issue.
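The task pairing behind such a benchmark can be illustrated with a tiny sketch. The prompt strings and function names below are hypothetical, not taken from LazyBench; the chain-of-thought suffix mirrors the kind of mitigation the abstract reports:

```python
# Hypothetical illustration of probing one image subject at several
# difficulty levels, plus a chain-of-thought (CoT) cue as mitigation.
# None of these strings come from the LazyBench benchmark itself.
def build_probes(subject):
    return {
        "yes_no": f"Is there a {subject} in the image? Answer Yes or No.",
        "multiple_choice": (
            f"Which object appears in the image? (a) {subject} (b) none of these"
        ),
        "description": "Describe the image in detail.",
    }

COT_SUFFIX = " Let's think step by step before answering."

def with_cot(question):
    # Append a CoT cue to any probe to counteract "lazy" short answers.
    return question + COT_SUFFIX

probes = build_probes("red bicycle")
print(with_cot(probes["yes_no"]))
```

Comparing accuracy on the easy probes against the free-form description of the same subject is what exposes the laziness gap the paper measures.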


Accelerated Evaluation of Ollivier-Ricci Curvature Lower Bounds: Bridging Theory and Computation

Kang, Wonwoo, Park, Heehyun

arXiv.org Machine Learning

Curvature serves as a potent and descriptive invariant, with its efficacy validated both theoretically and practically within graph theory. We employ a definition of generalized Ricci curvature proposed by Ollivier, which Lin and Yau later adapted to graph theory, known as Ollivier-Ricci curvature (ORC). ORC measures curvature using the Wasserstein distance, thereby integrating geometric concepts with probability theory and optimal transport. Jost and Liu previously discussed the lower bound of ORC by showing the upper bound of the Wasserstein distance. We extend the applicability of these bounds to discrete spaces with metrics on integers, specifically hypergraphs. Compared to prior work on ORC in hypergraphs by Coupette, Dalleiger, and Rieck, which faced computational challenges, our method introduces a simplified approach with linear computational complexity, making it particularly suitable for analyzing large-scale networks. Through extensive simulations and application to synthetic and real-world datasets, we demonstrate the significant improvements our method offers in evaluating ORC.
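The curvature definition above admits a compact illustration. The stdlib-only toy below is not the paper's accelerated method; it assumes an unweighted, connected, regular graph, so that the Wasserstein distance between the uniform neighbour measures reduces to a minimum-cost assignment over neighbours:

```python
# Toy Ollivier-Ricci curvature on an unweighted regular graph:
#   kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y),
# where mu_v is the uniform measure on v's neighbours. For two uniform
# measures with equally many equal-mass atoms, W1 is a minimum-cost
# assignment, which we brute-force over permutations (fine for toy graphs).
from itertools import permutations
from collections import deque

def bfs_dist(adj, src):
    # Hop distances from src in an unweighted graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def orc_edge(adj, x, y):
    nx, ny = sorted(adj[x]), sorted(adj[y])
    assert len(nx) == len(ny), "this sketch assumes a regular graph"
    dist_from = {u: bfs_dist(adj, u) for u in nx}
    # W1 between two uniform measures = min assignment cost / support size.
    w1 = min(
        sum(dist_from[u][v] for u, v in zip(nx, perm))
        for perm in permutations(ny)
    ) / len(nx)
    return 1.0 - w1 / bfs_dist(adj, x)[y]

k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}          # triangle
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
print(orc_edge(k3, 0, 1))  # 0.5
print(orc_edge(c4, 0, 1))  # 0.0
```

On the triangle every edge has curvature 1/2 and on the 4-cycle every edge has curvature 0, matching the classical values; the paper's contribution is avoiding this transport computation entirely via linear-time lower bounds.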


What is going on with ChatGPT? Arwa Mahdawi

The Guardian > Technology

Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going.


The Virtues of Laziness in Model-based RL: A Unified Objective and Algorithms

Vemula, Anirudh, Song, Yuda, Singh, Aarti, Bagnell, J. Andrew, Choudhury, Sanjiban

arXiv.org Artificial Intelligence

We propose a novel approach to addressing two fundamental challenges in Model-based Reinforcement Learning (MBRL): the computational expense of repeatedly finding a good policy in the learned model, and the objective mismatch between model fitting and policy computation. Our "lazy" method leverages a novel unified objective, Performance Difference via Advantage in Model, to capture the performance difference between the learned policy and expert policy under the true dynamics. This objective demonstrates that optimizing the expected policy advantage in the learned model under an exploration distribution is sufficient for policy computation, resulting in a significant boost in computational efficiency compared to traditional planning methods. Additionally, the unified objective uses a value moment matching term for model fitting, which is aligned with the model's usage during policy computation. We present two no-regret algorithms to optimize the proposed objective, and demonstrate their statistical and computational gains compared to existing MBRL methods through simulated benchmarks.


A Simple Midjourney Prompt Generator Using AI Chat

#artificialintelligence

In this article I'm going to share with you a flexible text prompt to feed into an AI Chat application that will generate random Midjourney prompts for any subject you choose. I can be pretty lazy sometimes. There, I said it, but don't tell my wife (although after 35 years, she probably already knows).
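The same trick can be approximated without a chat model at all, by sampling prompt fragments at random. The word lists below are made up for illustration; `--ar` is Midjourney's aspect-ratio flag:

```python
# Toy random Midjourney-style prompt generator. The style/lighting/detail
# fragments are illustrative, not from the article's actual prompt.
import random

STYLES = ["watercolor", "cyberpunk", "art deco", "photorealistic"]
LIGHTING = ["golden hour", "neon glow", "soft studio light"]
DETAILS = ["ultra-detailed", "minimalist", "cinematic composition"]

def midjourney_prompt(subject, rng=random):
    return (
        f"/imagine prompt: {subject}, {rng.choice(STYLES)} style, "
        f"{rng.choice(LIGHTING)}, {rng.choice(DETAILS)} --ar 16:9"
    )

print(midjourney_prompt("a lighthouse at dusk"))
```

Asking a chat model instead, as the article does, mainly buys you fragment lists you didn't have to write yourself.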


Laziness, Barren Plateau, and Noise in Machine Learning

Liu, Junyu, Lin, Zexi, Jiang, Liang

arXiv.org Machine Learning

We define \emph{laziness} to describe a large suppression of variational parameter updates in neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We discuss the difference between laziness and the \emph{barren plateau} phenomenon introduced by quantum physicists in \cite{mcclean2018barren} for the flatness of the loss-function landscape during gradient descent, and we develop a novel theoretical understanding of both phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits without measurement noise, the loss-function landscape is complicated in the overparametrized regime with a large number of trainable variational angles. Around a random starting point of the optimization, there are large numbers of local minima that are good enough to minimize the mean-square loss function; there we still have quantum laziness, but we do not have barren plateaus. However, this complicated landscape is not visible within a limited number of iterations, nor at the low precision typical of quantum control and quantum sensing. Moreover, we study the effect of noise during optimization under intuitive noise models and show that variational quantum algorithms are noise-resilient in the overparametrization regime. Our work sharpens the quantum barren plateau statement into a statement about precision, justifies it in certain noise models, injects new hope into near-term variational quantum algorithms, and provides theoretical connections to classical machine learning. Our paper offers conceptual perspectives on quantum barren plateaus, together with discussions of the gradient-descent dynamics in \cite{together}.


Array Functions and the Rule of Least Power – Pursuit of Laziness

#artificialintelligence

Computer Science in the 1960s to 80s spent a lot of effort making languages which were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. Expressing constraints, relationships and processing instructions in less powerful languages increases the flexibility with which information can be reused: the less powerful the language, the more you can do with the data stored in that language. I chose HTML not to be a programming language because I wanted different programs to do different things with it: present it differently, extract tables of contents, index it, and so on. Though the Rule of Least Power targeted programming languages themselves, rather than language features, I think the same ideas still apply.


Laziness in humans could be used to tell us apart from bots

Daily Mail - Science & tech

Humans' unique laziness when it comes to interacting on social media could be the key to telling us apart from artificially intelligent 'bots', a new study shows. US researchers have identified behavioural trends of humans on Twitter that are absent in social media bots – namely a decrease in tweet length over time. The team studied how the behaviour of humans and bots changed over the course of a session on Twitter relating to political events. While humans get lazier as sessions progress and can't be bothered typing out long tweets, bots maintain consistent levels of engagement over time. Such a behavioural difference could inform new machine learning algorithms for bot detection software.
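The feature described here, tweet length drifting downward over a session, is easy to compute. This sketch (not the study's actual pipeline; the example sessions are invented) fits an ordinary least-squares slope to tweet lengths in session order:

```python
# Least-squares slope of tweet length vs. position in the session.
# Humans tend to produce a negative slope (tweets shorten); bots stay flat.
def length_slope(tweet_lengths):
    n = len(tweet_lengths)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(tweet_lengths) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, tweet_lengths))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

human_session = [140, 120, 95, 80, 60]   # lengths shrink over the session
bot_session = [100, 100, 100, 100, 100]  # constant engagement
print(length_slope(human_session))  # negative
print(length_slope(bot_session))    # 0.0
```

A bot detector would use such a slope as one feature among many rather than a threshold on its own.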