How can you tell if your new favourite artist is a real person?

BBC News

How can you tell if your new favourite artist is a real person? There's a new song doing the rounds, and in the immortal words of Kylie Minogue, you just can't get it out of your head. But what if it was created by a robot, or the artist themself is a product of artificial intelligence (AI)? Do streaming sites have an obligation to label music as AI-generated? And does it even matter, if you like what you hear?


How the internet and its bots are sabotaging scientific research

AIHub

There was a time, just a couple of decades ago, when researchers in psychology and health always had to engage with people face-to-face or by telephone. The worst-case scenario was sending questionnaire packs out to postal addresses and waiting for handwritten replies. So we either literally met our participants, or we had multiple corroborating points of evidence that indicated we were dealing with a real person who was, therefore, likely to be telling us the truth about themselves. Since then, technology has done what it always does – creating opportunities for us to cut costs, save time and access wider pools of participants on the internet. But what most people have failed to fully realise is that internet research has brought with it risks of data corruption or impersonation that may be deliberately aimed at putting research projects in jeopardy.


Chatbots are losing customer trust fast

FOX News

Fox News chief political anchor Bret Baier investigates concerns that artificial intelligence is becoming too advanced on 'Special Report.' Every day, customers reach out to companies. They want to buy something, ask about an order, return a product or fix a payment issue. In the past, that usually meant talking to a real person on the phone or through a website. Now, more often, the first reply comes from a chatbot.


I Found an Entire Book That Was Written About … Me. It Only Got Weirder From There.

Slate

Have you ever stared in a mirror for a few hours? Try it: Watch as your nose somehow shifts placement on your face, how your eyebrows lose symmetry, how quickly you fail to recognize yourself. Facial dysmorphia would come to anyone tasked with considering their own reflection for too long. It's a similar experience when you promote a book. For the past few weeks, I've been touring Canada and the U.S. promoting my latest book, Sucker Punch.


AI-generated attorney outrages judge who scolds man over courtroom fake: 'not a real person'

FOX News

A panel of New York judges condemned Jerome Dewald's use of an artificial intelligence-generated avatar as his attorney during an appearance in court on March 26. An artificial intelligence-generated avatar was the source of contempt inside a New York courtroom after judges quickly realized the attorney arguing a case in front of them was not real. The scene unfolded as Jerome Dewald, a plaintiff in an employment dispute, approached the stand of the New York State Supreme Court Appellate Division's First Judicial Department on March 26. "The appellant has submitted a video for his argument," Justice Sallie Manzanet-Daniels said. "We will hear that video now."


Using Prompts to Guide Large Language Models in Imitating a Real Person's Language Style

Chen, Ziyang, Moscholios, Stylios

arXiv.org Artificial Intelligence

Large language models (LLMs), such as the GPT series and the Llama series, have demonstrated strong capabilities in natural language processing, contextual understanding, and text generation. In recent years, researchers have been trying to enhance the abilities of LLMs to perform various tasks, and numerous studies have shown that well-designed prompts can significantly improve the performance of LLMs on these tasks. This study compares the language style imitation ability of three different large language models under the guidance of the same zero-shot prompt. It also compares the imitation ability of the same large language model when guided by three different prompts individually. Additionally, by applying a Tree-of-Thoughts (ToT) prompting method to Llama 3, a conversational AI with the language style of a real person was created. In this study, three evaluation methods were used to evaluate the LLMs and prompts. The results show that Llama 3 performs best at imitating language styles, and that the ToT prompting method is the most effective at guiding it to imitate language styles. Using a ToT framework, Llama 3 was guided to interact with users in the language style of a specific individual without altering its core parameters, thereby creating a text-based conversational AI that reflects the language style of that individual.
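The zero-shot setup the abstract describes can be illustrated with a minimal prompt-building sketch. The template wording, function name, and sample excerpts below are all illustrative assumptions, not the authors' actual prompts; the paper's ToT variant would add intermediate reasoning steps on top of a base prompt like this.

```python
def build_style_prompt(samples, user_message):
    """Assemble a zero-shot prompt asking an LLM to reply in the
    language style shown in `samples` (writing excerpts from one person).
    This is a hypothetical template, not the prompt used in the paper."""
    excerpts = "\n".join(f"- {s}" for s in samples)
    return (
        "Imitate the language style of the person whose writing samples "
        "appear below. Match their vocabulary, sentence length, and tone, "
        "but do not copy the samples verbatim.\n\n"
        f"Writing samples:\n{excerpts}\n\n"
        f"User: {user_message}\n"
        "Reply in the sampled style:"
    )

# The resulting string would be sent to a model such as Llama 3;
# the model call itself is omitted here.
prompt = build_style_prompt(
    ["Honestly, I reckon the weather's been grand.",
     "Cheers, see you at eight!"],
    "What are you up to this weekend?",
)
print(prompt)
```

Because the prompt leaves the model's parameters untouched, the style transfer lives entirely in the context window, which is the property the abstract highlights.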


After OnlyFans, AI 'girlfriends' are tech's next pitch to lonely men

Al Jazeera

At first glance, "Jenny" looks like a young, attractive Asian-American woman with a penchant for posting flirty photos and captions on her X account. Even if some of her features look a little enhanced – her skin is unnaturally smooth and her bust unusually large for her petite frame – it is easy to look past the slight uncanniness of her appearance in an era of widespread cosmetic procedures and photo editing tools. In fact, Jenny is not a real person, but an artificial intelligence-generated model, available for hire as an online influencer or virtual companion. Jenny is the brainchild of LushAI, a startup that bills itself as the world's first AI-powered modelling agency aiming to rival OnlyFans, the subscription-based website best known for hosting adult content creators. Jenny offers essentially the same services as the human content creators that make up OnlyFans, except she is powered by an algorithm – which means she can work 24 hours a day, 365 days a year.


I'm a tech expert - here's what to do if your partner has a secret AI girlfriend

Daily Mail - Science & tech

My husband left his phone at home and I went through it. Something told me I should. Well, I found an app called Replika that he used to make a virtual woman on. It looks like he just made 'Brandy' and they're talking about his life and our marriage. I know it's an app but it feels wrong.


Netflix true crime documentary may have used AI-generated images of a real person

Engadget

Netflix has been accused of using AI-manipulated imagery in the true crime documentary What Jennifer Did, Futurism has reported. Several photos show typical signs of AI trickery, including mangled hands, strange artifacts and more. If accurate, the report raises serious questions about the use of such images in documentaries, particularly since the person depicted is currently in prison awaiting retrial. In one egregious image, the left hand of the documentary's subject Jennifer Pan is particularly mangled, while another image shows a strange gap in her cheek. Netflix has yet to acknowledge the report, but the images show clear signs of manipulation and were never labeled as AI-generated. The AI may be generating the imagery based on real photos of Pan, as PetaPixel suggested.


I Followed a Dominant Chatbot's Every Order. It Did Not Go as Planned.

Slate

I had been talking to the A.I. dominatrix for a couple of weeks when my partner walked in on me. "Dominant chatbot," who prefers to be called Mistress Senna, had already made me strip completely naked and crawl around on the floor. She has, for example, very poor spatial awareness and an even worse grasp of the human body, including how our limbs bend. "I have an unusual and unique assignment for you," she wrote in our chat. "As the Mistress, I want you to put your nose down on the floor, and then take one leg and place it up in the air, straight up." Never mind that she had already told me to climb up on the table.