Help! My therapist is secretly using ChatGPT
Some patients have discovered their private confessions are being quietly fed into AI. In Silicon Valley's imagined future, AI models are so empathetic that we'll use them as therapists. They'll provide mental-health care for millions, unimpeded by the pesky requirements for human counselors, like the need for graduate degrees, malpractice insurance, and sleep. Down here on Earth, something very different has been happening. Last week, we published a story about people finding out that their therapists were secretly using ChatGPT during sessions. In some cases it wasn't subtle; one therapist accidentally shared his screen during a virtual appointment, allowing the patient to see his own private thoughts being typed into ChatGPT in real time.
- North America > United States > California (0.25)
- North America > United States > Nevada (0.05)
- North America > United States > Massachusetts (0.05)
- North America > United States > Illinois (0.05)
- Information Technology > Hardware (0.50)
- Semiconductors & Electronics (0.48)
- North America > United States > Massachusetts > Plymouth County > Norwell (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
Predicting What You Already Know Helps: Provable Self-Supervised Learning
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) that do not require labeled data, in order to learn semantic representations. These pretext tasks are created solely from the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words; yet predicting this known information helps in learning representations effective for downstream prediction tasks. This paper posits a mechanism based on approximate conditional independence to formalize how solving certain pretext tasks can learn representations that provably decrease the sample complexity of downstream supervised tasks. Formally, we quantify how approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task with drastically reduced sample complexity by training just a linear layer on top of the learned representation.
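The recipe the abstract describes can be illustrated with a toy numerical sketch (the data-generating process, variable names, and thresholds below are my own illustration, not the paper's construction): two views `x1` and `x2` are approximately conditionally independent given a label `y`; solving the pretext task of predicting the known view `x2` from `x1` yields a representation on which a linear layer, fit on only a few labels, suffices for the downstream task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a latent binary label y generates two "views" x1 and x2
# that are approximately conditionally independent given y.
n, d1, d2 = 2000, 10, 5
y = rng.integers(0, 2, size=n).astype(float)           # downstream label
x1 = np.outer(y, rng.normal(size=d1)) + 0.3 * rng.normal(size=(n, d1))
x2 = np.outer(y, rng.normal(size=d2)) + 0.3 * rng.normal(size=(n, d2))

# Pretext task (no labels used): predict the known view x2 from x1 by
# least squares. The learned map psi(x1) = x1 @ W is the representation.
W, *_ = np.linalg.lstsq(x1, x2, rcond=None)
psi = x1 @ W

# Downstream task: only a linear layer on top of psi, fit on a small
# labeled sample, then evaluated on all points.
k = 50                                                  # few labels
w, *_ = np.linalg.lstsq(psi[:k], y[:k], rcond=None)
pred = (psi @ w > 0.5).astype(float)
acc = (pred == y).mean()
print(f"downstream accuracy with {k} labels: {acc:.2f}")
```

Because the pretext regression forces the representation to capture exactly the label-dependent structure shared by the two views, a linear probe on `psi` separates the classes despite seeing very few labels.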
Before Going to Tokyo, I Tried Learning Japanese With ChatGPT
On the final day of my visit to Japan, I'm alone and floating in some skyscraper's rooftop hot springs, praying no one joins me. For the last few months, I've been using ChatGPT's Advanced Voice Mode as an AI language tutor, part of a test to judge generative AI's potential as both a learning tool and a travel companion. The excessive talking to both strangers and a chatbot on my phone was illuminating as well as exhausting. I'm ready to shut my yapper for a minute and enjoy the silence. When OpenAI launched ChatGPT late in 2022, it set off a firestorm of generative AI competition and public interest.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.82)
I Asked AI Chatbots to Help Me Shop. They All Failed
Like people in many fields, we here on the WIRED Gear desk are mildly concerned that ChatGPT is coming for our jobs. But we feel relatively safe because it's our job to test things, and AI can't really do that. A large language model can't pedal an ebike. A chatbot can't see the curves of a Dynamic Island. A cloud service can't tell you whether a grill cooked a burger evenly.
We Put Google's New AI Writing Assistant to the Test
When I asked Google's AI writing aid to draft a happy birthday email to a friend, it left my brain in the dust. I had taken about 90 seconds to craft a decent 81-word greeting. But the search giant's text-generation feature knocked out a flawless 87 words in a third of the time. That's exactly what Google wants to see. The Help Me Write feature that launched in March and was rolled out more broadly at the company's annual conference last week is a radical step beyond the Smart Reply and Smart Compose tools that Gmail has offered for years to generate short phrases.
Help! My Political Beliefs Were Altered by a Chatbot!
When we ask ChatGPT or another bot to draft a memo, email, or presentation, we think these artificial-intelligence assistants are doing our bidding. A growing body of research shows that they can also change our thinking, without our knowing. One of the latest studies in this vein, from researchers spread across the globe, found that when subjects were asked to use an AI to help them write an essay, that AI could nudge them to write either for or against a particular view, depending on the bias of the algorithm. Performing this exercise also measurably shifted the subjects' opinions on the topic afterward.
Google's most popular apps are gaining AI superpowers
Today, at the Google I/O developer conference, Google chief executive Sundar Pichai pledged to use AI responsibly: improving knowledge and learning, boosting creativity, and helping maintain equality. Gmail has been a pioneer in predictive responses, and it's continuing with a new feature called Help Me Write. Help Me Write draws on your previous email history to draft a message on a given topic; you can, for example, ask for a refund on a travel reservation, and the feature will use the earlier thread to help negotiate a settlement. All you need to do is ask Help Me Write to craft the email. This feature will start rolling out as part of the company's upcoming Workspace update, Pichai said.
Joe Biden Wants Hackers' Help to Keep AI Chatbots In Check
ChatGPT has stoked new hopes about the potential of artificial intelligence--but also new fears. Today the White House joined the chorus of concern, announcing it will support a mass hacking exercise at the Defcon security conference this summer to probe generative AI systems from companies including Google. The White House Office of Science and Technology Policy also said that $140 million will be diverted towards launching seven new National AI Research Institutes focused on developing ethical, transformative AI for the public good, bringing the total number to 25 nationwide. The announcement came hours before a meeting on the opportunities and risks presented by AI between Vice President Kamala Harris and executives from Google and Microsoft as well as the startups Anthropic and OpenAI, which created ChatGPT. The White House AI intervention comes as appetite for regulating the technology is growing around the world, fueled by the hype and investment sparked by ChatGPT.
- Europe (0.17)
- Asia > China (0.17)
- North America > United States > District of Columbia > Washington (0.06)