The Strange Ways Writers Are Proving That Their Writing Isn't ChatGPT

Slate

The other week, I was reading an email I'd written when a strange notion occurred to me. Would it perhaps be better, an unsettling new voice suddenly whispered, to leave it in? This is a thought that would've appalled me a year ago. As a professional writer, I have long prided myself on impeccable grammar, judiciously wielded punctuation, and (at times indulgent) verbosity.


Word Overuse and Alignment in Large Language Models: The Influence of Learning from Human Feedback

Juzek, Tom S., Ward, Zina B.

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are known to overuse certain terms like "delve" and "intricate." The exact reasons for these lexical choices, however, have been unclear. Using Meta's Llama model, this study investigates the contribution of Learning from Human Feedback (LHF), under which we subsume Reinforcement Learning from Human Feedback and Direct Preference Optimization. We present a straightforward procedure for detecting the lexical preferences of LLMs that are potentially LHF-induced. Next, we more conclusively link LHF to lexical overuse by experimentally emulating the LHF procedure and demonstrating that participants systematically prefer text variants that include certain words. This lexical overuse can be seen as a sort of misalignment, though our study highlights the potential divergence between the lexical expectations of different populations -- namely LHF workers versus LLM users. Our work contributes to the growing body of research on explainable artificial intelligence and emphasizes the importance of both data and procedural transparency in alignment research.
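The detection idea the abstract describes, surfacing words a model uses far more often than humans do, can be sketched as a simple frequency-ratio comparison between an LLM-generated corpus and a human reference corpus. This is an illustrative reconstruction, not the authors' actual procedure; the function name, thresholds, and smoothing choice are assumptions.

```python
from collections import Counter

def overused_words(llm_tokens, human_tokens, min_ratio=5.0, min_count=10):
    """Flag words an LLM uses far more often than a human baseline.

    The human frequency is add-one smoothed so that words absent
    from the reference corpus do not divide by zero.
    """
    llm_counts, human_counts = Counter(llm_tokens), Counter(human_tokens)
    llm_total, human_total = len(llm_tokens), len(human_tokens)
    flagged = {}
    for word, count in llm_counts.items():
        if count < min_count:
            continue  # skip rare words: ratios on tiny counts are noise
        llm_freq = count / llm_total
        human_freq = (human_counts[word] + 1) / (human_total + 1)
        ratio = llm_freq / human_freq
        if ratio >= min_ratio:
            flagged[word] = ratio
    return flagged
```

On toy data where "delve" appears often in model output but never in the human text, only "delve" would be flagged; common function words with comparable frequencies in both corpora fall below the ratio threshold.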


Why Does ChatGPT "Delve" So Much? Exploring the Sources of Lexical Overrepresentation in Large Language Models

Juzek, Tom S., Ward, Zina B.

arXiv.org Artificial Intelligence

Scientific English is currently undergoing rapid change, with words like "delve," "intricate," and "underscore" appearing far more frequently than just a few years ago. It is widely assumed that scientists' use of large language models (LLMs) is responsible for such trends. We develop a formal, transferable method to characterize these linguistic changes. Application of our method yields 21 focal words whose increased occurrence in scientific abstracts is likely the result of LLM usage. We then pose "the puzzle of lexical overrepresentation": WHY are such words overused by LLMs? We fail to find evidence that lexical overrepresentation is caused by model architecture, algorithm choices, or training data. To assess whether reinforcement learning from human feedback (RLHF) contributes to the overuse of focal words, we undertake comparative model testing and conduct an exploratory online study. While the model testing is consistent with RLHF playing a role, our experimental results suggest that participants may be reacting differently to "delve" than to other focal words. With LLMs quickly becoming a driver of global language change, investigating these potential sources of lexical overrepresentation is important. We note that while insights into the workings of LLMs are within reach, a lack of transparency surrounding model development remains an obstacle to such research.
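The diachronic method the abstract gestures at, isolating words whose frequency in abstracts jumped after LLMs became widespread, can be sketched as a before/after corpus comparison. This is a hypothetical sketch, not the paper's method; the cutoff year, fold threshold, and data layout are illustrative assumptions.

```python
from collections import Counter

def focal_word_candidates(abstracts_by_year, cutoff_year=2023, min_fold=3.0):
    """Find words whose per-token frequency jumps after a cutoff year.

    abstracts_by_year maps a year to a list of tokenized abstracts.
    Pre-cutoff frequencies are add-one smoothed so new words don't
    divide by zero.
    """
    def pooled(years):
        counts, total = Counter(), 0
        for year in years:
            for tokens in abstracts_by_year.get(year, []):
                counts.update(tokens)
                total += len(tokens)
        return counts, total

    before, before_total = pooled([y for y in abstracts_by_year if y < cutoff_year])
    after, after_total = pooled([y for y in abstracts_by_year if y >= cutoff_year])
    candidates = {}
    for word, count in after.items():
        after_freq = count / after_total
        before_freq = (before[word] + 1) / (before_total + 1)
        fold = after_freq / before_freq
        if fold >= min_fold:
            candidates[word] = fold
    return candidates
```

A word like "delve" that was rare in pre-cutoff abstracts but frequent afterward clears the fold threshold, while stable function words do not.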


ChatGPT is changing the way we write. Here's how – and why it's a problem

AIHub

Have you noticed certain words and phrases popping up everywhere lately? Phrases such as "delve into" and "navigate the landscape" seem to feature in everything from social media posts to news articles and academic publications. They may sound fancy, but their overuse can make a text feel monotonous and repetitive. This trend may be linked to the increasing use of generative artificial intelligence (AI) tools such as ChatGPT and other large language models (LLMs). These tools are designed to make writing easier by offering suggestions based on patterns in the text they were trained on.


World of Warcraft: The War Within review – a reason to dive back into the depths of Azeroth

The Guardian

World of Warcraft has an enduring identity problem. What was once one of the biggest games in the world is now approaching its 20th birthday, and with every year that goes by, developer Blizzard has the unenviable challenge of trying to prove that WoW still has a place in today's gaming world. This goes some way to explaining the many times that Blizzard has tried to reinvent WoW. Six years after its initial release, the developer attempted a radical do-over of the game's world in 2010's Cataclysm expansion, in which an ancient dragon ravaged and reshaped the realm of Azeroth (an experience you can relive through the recently relaunched Cataclysm Classic). Since then, Blizzard has experimented with numerous gimmicks to try to keep WoW current, including a now much-maligned mechanic that saw players building their power level for two years, only to lose that power at the end of every expansion cycle.


TechScape: How cheap, outsourced labour in Africa is shaping AI English

The Guardian

We're witnessing the birth of AI-ese, and it's not what anyone could have guessed. If you've spent enough time using AI assistants, you'll have noticed a certain quality to the responses generated. Without a concerted effort to break the systems out of their default register, the text they spit out is, while grammatically and semantically sound, ineffably generated. Some of the tells are obvious. The fawning obsequiousness of a wild language model hammered into line through reinforcement learning with human feedback marks chatbots out. Which is the right outcome: eagerness to please and general optimism are good traits to have in anyone (or anything) working as an assistant.


Bundling and Tumbling in Bacterial-inspired Bi-flagellated Soft Robots for Attitude Adjustment

Hao, Zhuonan, Zalavadia, Siddharth, Jawed, Mohammad Khalid

arXiv.org Artificial Intelligence

We create a mechanism inspired by bacterial swimmers, featuring two flexible flagella with individual control over rotation speed and direction in viscous fluid environments. Using readily available materials, we design and fabricate silicone-based helical flagella. To simulate the robot's motion, we develop a physics-based computational tool, drawing inspiration from computer graphics. The framework incorporates the Discrete Elastic Rod method, modeling the flagella as Kirchhoff elastic rods, and couples it with the Regularized Stokeslet Segments method for hydrodynamics, along with the Implicit Contact Model to handle contact. This approach effectively captures polymorphic phenomena like bundling and tumbling. Our study reveals how these emergent behaviors affect the robot's attitude angles, demonstrating its ability to self-reorient in both simulations and experiments. We anticipate that this framework will enhance our understanding of the directional change capabilities of flagellated robots, potentially stimulating further exploration of microscopic robot mobility.


#IROS2023: A glimpse into the next generation of robotics

Robohub

The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) kicks off today at the Huntington Place in Detroit, Michigan. This year's theme, "The Next Generation of Robotics," calls on young and senior researchers alike to create a forum where the past, present, and future of robotics converge. The program of IROS 2023 is a blend of theoretical insights and practical demonstrations, designed to foster a culture of innovation and collaboration. Among the highlights are the plenary and keynote talks by eminent personalities in the field of robotics. On the plenary front, Marcie O'Malley from Rice University will delve into the realm of robots that teach and learn with a human touch.


Towards a Holistic Approach: Understanding Sociodemographic Biases in NLP Models using an Interdisciplinary Lens

Venkit, Pranav Narayanan

arXiv.org Artificial Intelligence

The rapid growth in the usage and applications of Natural Language Processing (NLP) in various sociotechnical solutions has highlighted the need for a comprehensive understanding of bias and its impact on society. While research on bias in NLP has expanded, several challenges persist that require attention. These include the limited focus on sociodemographic biases beyond race and gender, the narrow scope of analysis predominantly centered on models, and the technocentric implementation approaches. This paper addresses these challenges and advocates for a more interdisciplinary approach to understanding bias in NLP. The work is structured into three facets, each exploring a specific aspect of bias in NLP.


Say what! These genius voice commands will change your life on your iPhone or Android

Daily Mail - Science & tech

Siri, Alexa and Google Assistant are powerful - but the sad reality is most people aren't maximizing their true potential. The average user is not aware that their pocket AI assistant can scan thousands of photos instantly and find old images with a few simple commands. And these bots also make incredible PAs, bookmarking dates in your calendar and setting reminders for crucial meetings. Before we get started, make sure your phone's virtual assistant truly understands your voice, tone and inflections. With those details settled, let's delve into the top voice commands for your smart assistant that can simplify your daily life. In dimly lit situations, such as trying to decipher a menu or navigating a hallway, the last thing you want is to struggle with your phone to locate the flashlight feature.