Natural Language: AI-Alerts
Two-faced AI language models learn to hide deception
Researchers worry that bad actors could engineer open-source LLMs to make them respond to subtle cues in a harmful way. Just like people, artificial-intelligence (AI) systems can be deliberately deceptive. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv1, attempts to detect and remove such two-faced behaviour are often useless -- and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse "was something that was particularly surprising to us … and potentially scary", says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California. Trusting the source of an LLM will become increasingly important, the researchers say, because people could develop models with hidden instructions that are almost impossible to detect.
OpenAI bans developer of bot for presidential hopeful Dean Phillips
Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.). The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.
A New Nonprofit Is Seeking to Solve the AI Copyright Problem
Stability AI, the makers of the popular AI image generation model Stable Diffusion, had trained the model by feeding it with millions of images that had been "scraped" from the internet, without the consent of their creators. Ed Newton-Rex, the head of Stability's audio team, disagreed: "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works." In December, the New York Times sued OpenAI in a Manhattan court, alleging that the creator of ChatGPT had illegally used millions of the newspaper's articles to train AI systems that are intended to compete with the Times as a reliable source of information. Meanwhile, in July 2023, comedian Sarah Silverman and other writers sued OpenAI and Meta, accusing the companies of using their writing to train AI models without their permission.
Don't Talk to People Like They're Chatbots
For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer's language. This is beginning to change. Large language models--the technology undergirding modern chatbots--allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges.
Google DeepMind's new AI system can solve complex geometry problems
Solving mathematics problems requires logical reasoning, something that most current AI models aren't great at. This demand for reasoning is why mathematics serves as an important benchmark to gauge progress in AI intelligence, says Wang. DeepMind's program, named AlphaGeometry, combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions. Language models excel at recognizing patterns and predicting subsequent steps in a process. However, their reasoning lacks the rigor required for mathematical problem-solving. The symbolic engine, on the other hand, is based purely on formal logic and strict rules, which allows it to guide the language model toward rational decisions.
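The division of labour described above -- a language model that proposes likely next steps, paired with a symbolic engine that only accepts deductions licensed by formal rules -- can be sketched in miniature. The following Python is an illustrative toy, not AlphaGeometry's actual code: the function names (`deduce`, `solve`) and the use of a canned proposal list in place of a real language model are assumptions made for clarity.

```python
def deduce(facts, rules):
    """Symbolic engine: forward-chain strict rules to a fixpoint.

    Each rule is (premises, conclusion), where premises is a frozenset
    of facts that must all hold before the conclusion is added.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


def solve(goal, facts, rules, proposals):
    """Alternate deduction with model-style suggestions.

    Run the symbolic engine first; if the goal is not reached, take the
    next proposal (standing in for a language model's suggested
    construction), add it as a new fact, and deduce again.
    """
    for extra in [None] + list(proposals):
        if extra is not None:
            facts = facts | {extra}
        facts = deduce(facts, rules)
        if goal in facts:
            return True
    return False


# Toy example: "A" alone cannot reach "GOAL" by the rules below;
# a proposed auxiliary fact "C" unlocks the final deduction.
rules = [
    (frozenset({"A"}), "B"),
    (frozenset({"B", "C"}), "GOAL"),
]
print(solve("GOAL", {"A"}, rules, proposals=["C"]))   # True
print(solve("GOAL", {"A"}, rules, proposals=[]))      # False
```

The key design point mirrors the article: the proposal step is free to guess creatively, but nothing enters the set of established facts unless the rule-based engine derives it, which is what keeps the reasoning rigorous.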
How to Launch a Custom Chatbot on OpenAI's GPT Store
Get ready to share your custom chatbot with the whole world. OpenAI recently launched its GPT Store, after it delayed the project following the chaos of CEO Sam Altman's firing and reinstatement late in 2023. Similar to OpenAI's GPT-4 model and web browsing capabilities, only those who pay $20 a month for ChatGPT Plus can create and use "GPTs." The GPT acronym in ChatGPT actually stands for "generative pre-trained transformers," but in this context, the company is using GPT as a term that refers to a unique version of ChatGPT with additional parameters and a little extra training data. Here's how to make your GPT public and some advice to help you get started with the GPT Store.
AI girlfriends are here – but there's a dark side to virtual companions Arwa Mahdawi
It is a truth universally acknowledged, that a single man in possession of a computer must be in want of an AI girlfriend. Certainly a lot of enterprising individuals seem to think there's a lucrative market for digital romance. OpenAI recently launched its GPT Store, where paid ChatGPT users can buy and sell customized chatbots (think Apple's app store, but for chatbots) – and the offerings include a large selection of digital girlfriends. "AI girlfriend bots are already flooding OpenAI's GPT store," a headline from Quartz, who first reported on the issue, blared on Thursday. Quartz went on to note that "the AI girlfriend bots go against OpenAI's usage policy … The company bans GPTs 'dedicated to fostering romantic companionship or performing regulated activities'."
What is going on with ChatGPT? Arwa Mahdawi
Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going.
AI and Education: Will Chatbots Soon Tutor Your Children?
Mr. Khan's vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could. In pursuit of such ideals, tech companies and philanthropists over the years have urged schools to purchase a laptop for each child, championed video tutorial platforms and financed learning apps that customize students' lessons. Some online math and literacy interventions have reported positive effects. But many education technology efforts have not proved to significantly close academic achievement gaps or improve student results like high school graduation rates.
Congress Wants Tech Companies to Pay Up for AI Training Data
Do AI companies need to pay for the training data that powers their generative AI systems? The question is hotly contested in Silicon Valley and in a wave of lawsuits levied against tech behemoths like Meta, Google, and OpenAI. In Washington, DC, though, there seems to be a growing consensus that the tech giants need to cough up. Today, at a Senate hearing on AI's impact on journalism, lawmakers from both sides of the aisle agreed that OpenAI and others should pay media outlets for using their work in AI projects. "It's not only morally right," said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law that held the hearing.