
 study participant


How to Capture and Study Conversations Between Research Participants and ChatGPT: GPT for Researchers (g4r.org)

Kim, Jin

arXiv.org Artificial Intelligence

As large language models (LLMs) like ChatGPT become increasingly integrated into our everyday lives--from customer service and education to creative work and personal productivity--understanding how people interact with these AI systems has become a pressing issue. Despite the widespread use of LLMs, researchers lack standardized tools for systematically studying people's interactions with LLMs. To address this issue, we introduce GPT for Researchers (G4R), or g4r.org, a free website that researchers can use to easily create and integrate a GPT Interface into their studies. At g4r.org, researchers can (1) enable their study participants to interact with GPT (such as ChatGPT), (2) customize GPT Interfaces to guide participants' interactions with GPT (e.g., set constraints on topics or adjust GPT's tone or response style), and (3) capture participants' interactions with GPT by downloading data on messages exchanged between participants and GPT. By facilitating study participants' interactions with GPT and providing detailed data on these interactions, G4R can support research on topics such as consumer interactions with AI agents or LLMs, AI-assisted decision-making, and linguistic patterns in human-AI communication. With this goal in mind, we provide a step-by-step guide to using G4R at g4r.org.


OpenAI has released its first research into how using ChatGPT affects people's emotional wellbeing

MIT Technology Review

The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who set ChatGPT's voice mode to a gender that was not their own reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals. Chatbots powered by large language models are still a nascent technology, and it's difficult to study how they affect us emotionally.


Potential breakthrough as scientists claim two people communicated in their DREAMS in world first

Daily Mail - Science & tech

Scientists have brought science fiction one step closer to reality by achieving the first two-way communication between individuals during lucid dreaming. In an experiment that sounds like a scene out of the movie 'Inception,' REMspace - a California-based startup that designs technology to enhance sleep and lucid dreaming - reportedly exchanged a message between two people who were asleep. The company used 'specially designed equipment' which included a 'server,' an 'apparatus,' 'Wifi' and 'sensors,' but did not specify the exact technology they used. The study participants were asleep in separate homes when REMspace researchers beamed a word created through a unique language between them. REMspace CEO and founder Michael Raduga said: 'Yesterday, communicating in dreams seemed like science fiction.


Resolving the Human-Subjects Status of ML's Crowdworkers

Communications of the ACM

As the focus of machine learning (ML) has shifted toward settings characterized by massive datasets, researchers have become reliant on crowdsourcing platforms [13,25]. Just for the natural language processing (NLP) task of passage-based question answering (QA), more than 15 new datasets containing at least 50k annotations have been introduced since 2016. Prior to that, available QA datasets contained orders of magnitude fewer examples. The ability to construct such enormous resources derives mostly from the liquid market for temporary labor on crowdsourcing platforms such as Amazon Mechanical Turk. These practices, however, have raised ethical concerns, including low wages [5,26]; disparate access, benefits, and harms of developed applications [1,20]; reproducibility of proposed methods [4,21]; and potential for unfairness and discrimination in the resulting technologies [9,14].


The neuroscience of the sports fanatic: MRI scans peek inside the minds of soccer fans - revealing where winning and losing lives in the brain

Daily Mail - Science & tech

Sports fans know that watching their team win releases a feeling of joy, but seeing them lose has the opposite effect - and these 'feelings' can be seen in our brains. Researchers at the Clínica Alemana de Santiago in Chile scanned soccer fans' brains, finding the sight of their team scoring lit up the region associated with reward. When their team lost, a network of brain areas involved in mentalization became more active - signaling that they were trying to make sense of what just happened. In other words, we feel good when we watch our team score. And when we see our team's rivals score on them, we attempt to rationalize.


Want to look your best? Say cheese! Smiling makes you MORE attractive, study finds

Daily Mail - Science & tech

Many female celebrities, like Victoria Beckham, are rarely caught on camera smiling, perhaps for fear of showing age-revealing laughter lines. But smiling for the camera could actually make you look more attractive, a study suggests. Researchers recruited 112 volunteers and presented them each with 80 pictures of people who either had a neutral expression or were slightly smiling. Asked to rate the faces for attractiveness, they gave higher scores to people who were smiling. Asked how they judged attractiveness, more of the study participants took into account someone's facial expression and whether they looked friendly than their clothing, hairstyle and level of grooming.


Walking fingerprinting

Koffman, Lily, Crainiceanu, Ciprian, Leroux, Andrew

arXiv.org Machine Learning

We consider the problem of predicting an individual's identity from accelerometry data collected during walking. In a previous paper we introduced an approach that transforms the accelerometry time series into an image by constructing its complete empirical autocorrelation distribution. Predictors derived by partitioning this image into grid cells were used in logistic regression to predict individuals. Here we: (1) implement machine learning methods for prediction using the grid cell-derived predictors; (2) derive inferential methods to screen for the most predictive grid cells; and (3) develop a novel multivariate functional regression model that avoids partitioning of the predictor space into cells. Prediction methods are compared on two open source data sets: (1) accelerometry data collected from $32$ individuals walking on a $1.06$ kilometer path; and (2) accelerometry data collected from six repetitions of walking on a $20$ meter path on two separate occasions at least one week apart for $153$ study participants. In the $32$-individual study, all methods achieve at least $95$% rank-1 accuracy, while in the $153$-individual study, accuracy varies from $41$% to $98$%, depending on the method and prediction task. Methods provide insights into why some individuals are easier to predict than others.
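The abstract describes turning an accelerometry time series into an "image" of its empirical autocorrelation distribution, then using grid cells of that image as predictors. A minimal sketch of that idea, under assumed details not given in the abstract (the lag range, grid resolution, and value bounds here are illustrative, and the trace is simulated rather than real walking data):

```python
import numpy as np

def autocorr_image(x, lags=range(1, 26), bins=10, lo=-3.0, hi=3.0):
    """Empirical distribution of (x_t, x_{t+u}) pairs, one 2-D histogram
    per lag u. Stacking the histograms gives an 'image' whose grid cells
    can serve as predictors for identity classification."""
    edges = np.linspace(lo, hi, bins + 1)
    img = np.empty((len(lags), bins, bins))
    for i, u in enumerate(lags):
        counts, _, _ = np.histogram2d(x[:-u], x[u:], bins=[edges, edges])
        img[i] = counts / counts.sum()  # normalize each lag to a distribution
    return img

# Simulated quasi-periodic trace standing in for a walking bout
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60 * np.pi, 6000)) + 0.3 * rng.standard_normal(6000)

img = autocorr_image(x)
features = img.reshape(-1)  # flattened grid cells -> inputs to e.g. logistic regression
print(img.shape, features.shape)  # (25, 10, 10) (2500,)
```

The flattened `features` vector is what a per-individual logistic regression (or any of the other classifiers the paper compares) would consume; the paper's inferential screening step corresponds to selecting a subset of these grid cells.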


AI tools being used by police who 'do not understand how these technologies work': Study

FOX News

Fox News correspondent Grady Trimble has the latest on fears the technology will spiral out of control on 'Special Report.' Artificial intelligence is already revolutionizing law enforcement, which has implemented advanced technology in their investigations, but "society has a moral obligation to mitigate the detrimental consequences," a recent study says. AI is in its teenage years, as some experts have said, but law enforcement agencies are already integrating predictive policing, facial recognition and technologies designed to detect gunshots into their investigations, according to a North Carolina State University report published in February. The report was based on 20 semi-structured interviews with law enforcement professionals in North Carolina, examining how AI impacts the relationships between communities and police jurisdictions. "We found that study participants were not familiar with AI, or with the limitations of AI technologies," said Jim Brunet, a co-author of the study and director of NC State's Public Safety Leadership Initiative.


A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors

Hoelzemann, Alexander, Van Laerhoven, Kristof

arXiv.org Artificial Intelligence

Research into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients via fitness coaching to streamlining manual work processes. We present an empirical study that compares 4 different commonly used annotation methods utilized in user studies that focus on in-the-wild data. These methods can be grouped into user-driven, in situ annotations - which are performed before or while the activity is recorded - and recall methods - where participants annotate their data in hindsight at the end of the day. Our study illustrates that different labeling methodologies directly impact the quality of the annotations, as well as the capabilities of a deep learning classifier trained on the resulting data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables participants to inspect and label their activity data. This tool allowed us to decrease missing annotations and increase annotation consistency, and thereby improve the F1-score of the deep learning model by up to 8% (ranging between 82.1% and 90.4% F1-score). Finally, we discuss the advantages and disadvantages of the methods compared in our study, the biases they may introduce, their consequences for human activity recognition studies, and possible solutions.
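The abstract's claim that labeling method shifts the classifier's F1-score can be made concrete with a toy comparison. Everything below is synthetic (the label sequences and predictions are invented for illustration, not taken from the study); it only shows how noisier hindsight labels depress the measured F1 relative to more precise in situ labels:

```python
# F1 for a single positive class, computed from scratch.
def f1(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

predictions    = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]  # hypothetical model output
in_situ_labels = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]  # fewer but more precise labels
recall_labels  = [1, 0, 0, 1, 1, 1, 0, 1, 0, 1]  # hindsight labels, noisier

print(f1(in_situ_labels, predictions))  # ~0.909
print(f1(recall_labels, predictions))   # ~0.667
```

Evaluating the same predictions against the two label sets yields different F1-scores, which is the mechanism by which annotation quality propagates into the reported performance of a trained model.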


Watch the moment a computer reads a patient's MIND

Daily Mail - Science & tech

It's probably a good idea to keep your opinions to yourself if your friend gets a terrible new haircut - but soon you might not get a choice. That's because scientists at the University of Texas at Austin have trained an artificial intelligence (AI) to read a person's mind and turn their innermost thoughts into text. Three study participants listened to stories while lying in an MRI machine, while an AI 'decoder' analysed their brain activity. They were then asked to read a different story or make up their own, and the decoder could then turn the MRI data into text in real time. The breakthrough raises concerns about 'mental privacy' as it could be the first step in being able to eavesdrop on others' thoughts.