Neural Networks: AI-Alerts
Biden Audio Deepfake Alarms Experts in Lead-Up to Elections
No political deepfake has alarmed the world's disinformation experts more than the doctored audio message of U.S. President Joe Biden that began circulating over the weekend. In the phone message, a voice edited to sound like Biden urged voters in New Hampshire not to cast their ballots in Tuesday's Democratic primary. "Save your vote for the November election," the phone message went. It even made use of one of Biden's signature phrases: "What a bunch of malarkey." In reality, the president isn't on the ballot in the New Hampshire race -- and voting in the primary doesn't preclude people from participating in November's election.
OpenAI bans developer of bot for presidential hopeful Dean Phillips
Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.). The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.
A New Nonprofit Is Seeking to Solve the AI Copyright Problem
Stability AI, the makers of the popular AI image generation model Stable Diffusion, had trained the model by feeding it with millions of images that had been "scraped" from the internet, without the consent of their creators. Ed Newton-Rex, the head of Stability's audio team, disagreed. "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works." In December, the New York Times sued OpenAI in a Manhattan court, alleging that the creator of ChatGPT had illegally used millions of the newspaper's articles to train AI systems that are intended to compete with the Times as a reliable source of information. Meanwhile, in July 2023, comedian Sarah Silverman and other writers sued OpenAI and Meta, accusing the companies of using their writing to train AI models without their permission.
Google DeepMind's new AI system can solve complex geometry problems
Solving mathematics problems requires logical reasoning, something that most current AI models aren't great at. This demand for reasoning is why mathematics serves as an important benchmark to gauge progress in AI intelligence, says Wang. DeepMind's program, named AlphaGeometry, combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions. Language models excel at recognizing patterns and predicting subsequent steps in a process. However, their reasoning lacks the rigor required for mathematical problem-solving. The symbolic engine, on the other hand, is based purely on formal logic and strict rules, which allows it to guide the language model toward rational decisions.
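The division of labor described here, with a language model proposing steps and a symbolic engine checking them, can be sketched as a simple loop. The Python sketch below is a hypothetical illustration, not DeepMind's implementation: the functions deduce_closure and propose_construction, and the rule format, are assumptions standing in for the symbolic engine and the language model.

```python
# Minimal sketch of a neuro-symbolic loop in the spirit of the description above.
# Not DeepMind's code: rules, facts, and the LM stand-in are all assumed interfaces.

def deduce_closure(facts, rules):
    """Forward-chain: repeatedly apply logical rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            # each rule maps the current facts to an iterable of derivable facts
            for new_fact in list(rule(facts)):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts


def solve(premises, goal, rules, propose_construction, max_steps=10):
    """Alternate strict symbolic deduction with LM-suggested auxiliary constructions."""
    facts = set(premises)
    for _ in range(max_steps):
        facts = deduce_closure(facts, rules)   # symbolic engine: rigorous steps only
        if goal in facts:
            return True                        # goal reached by formal deduction
        # Deduction has stalled: ask the language-model stand-in to suggest a new
        # auxiliary object (e.g. "the midpoint of BC"), then try deducing again.
        facts.add(propose_construction(facts, goal))
    return False
```

The point the sketch tries to capture is that only the symbolic engine ever adds facts, and only ones it can formally justify; the language model's suggestions merely widen the search space when strict deduction stalls.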
How to Launch a Custom Chatbot on OpenAI's GPT Store
Get ready to share your custom chatbot with the whole world. OpenAI recently launched its GPT Store, after it delayed the project following the chaos of CEO Sam Altman's firing and reinstatement late in 2023. Similar to OpenAI's GPT-4 model and web browsing capabilities, only those who pay $20 a month for ChatGPT Plus can create and use "GPTs." The GPT acronym in ChatGPT actually stands for "generative pretrained transformers," but in this context, the company is using GPT as a term that refers to a unique version of ChatGPT with additional parameters and a little extra training data. Here's how to make your GPT public and some advice to help you get started with the GPT Store.
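GPTs themselves are configured through ChatGPT's web interface rather than in code, so none of the steps the article describes require programming. As a rough, hypothetical analogue of the same idea, layering custom instructions on top of a base model, the sketch below uses OpenAI's separate Assistants API; the assistant name, model string, and instructions are invented for illustration, and the openai Python package (v1.x) with an API key in the environment is assumed.

```python
# Hypothetical analogue only: GPTs in the GPT Store are built in ChatGPT's UI,
# not through this API. The Assistants API is used here just to illustrate the
# "base model plus custom instructions" idea.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Recipe Helper",          # invented example name
    model="gpt-4-turbo",           # assumed model identifier
    instructions=(
        "Suggest weeknight recipes, list ingredients first, "
        "and keep every answer under 200 words."
    ),
)
print(assistant.id)  # ID you would reference in later calls
```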
What is going on with ChatGPT? Arwa Mahdawi
Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going.
Congress Wants Tech Companies to Pay Up for AI Training Data
Do AI companies need to pay for the training data that powers their generative AI systems? The question is hotly contested in Silicon Valley and in a wave of lawsuits levied against tech behemoths like Meta, Google, and OpenAI. In Washington, DC, though, there seems to be a growing consensus that the tech giants need to cough up. Today, at a Senate hearing on AI's impact on journalism, lawmakers from both sides of the aisle agreed that OpenAI and others should pay media outlets for using their work in AI projects. "It's not only morally right," said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law that held the hearing.
Get Ready for the Great AI Disappointment
In the decades to come, 2023 may be remembered as the year of generative AI hype, when ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. The year 2024 will be the time for recalibrating expectations. Of course, generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable. More and more evidence will emerge that generative AI and large language models provide false information and are prone to hallucination--where an AI simply makes stuff up, and gets it wrong.
She helped OpenAI win over world leaders. Can she keep the peace?
Amid the growing clamor in Congress to regulate AI, the company is bringing in reinforcements. After years of outreach to lawmakers, OpenAI in fall 2023 disclosed its first in-house lobbyist, and reported that it is working with global law firm DLA Piper, according to federal disclosures. OpenAI to date has not advocated for or against any specific bill, Makanju says, but she anticipates that will change in 2024, especially with the Schumer effort that is underway. Makanju's team is also growing around the world, with more than 20 people in the United Kingdom, Germany, Japan and Brazil.
In the race for AI supremacy, China and the US are travelling on entirely different tracks Manya Koetse
Of the many events that stand out as noteworthy in online discussions across Chinese social media in 2023, it's perhaps the rise of ChatGPT that will prove to be the most significant. Although the chatbot made by the US-based OpenAI was officially launched in late 2022, it took until 2023 for its unprecedented growth to raise eyebrows in China, where the government has set the goal of becoming the global AI leader by 2030. Over the past decade, the focus on AI in Chinese society and digital culture has grown. Since the Covid-19 outbreak, AI implementations in schools, office buildings and factories have rolled out in fast forward. AI facial recognition is employed in everything from public security to payment technology; smart glasses and helmets make it easier for many workers to perform their tasks; and intelligent robots have become a common sight in China's service industry, in malls, restaurants, and banks. There seemed little doubt over who would win the tech race between the eagle and the dragon; but then came ChatGPT.