A New Nonprofit Is Seeking to Solve the AI Copyright Problem

TIME - Tech

Stability AI, the maker of the popular AI image-generation model Stable Diffusion, trained the model by feeding it millions of images that had been "scraped" from the internet without the consent of their creators. Ed Newton-Rex, the head of Stability's audio team, disagreed with the practice. "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works," he said. In December, the New York Times sued OpenAI in a Manhattan court, alleging that the creator of ChatGPT had illegally used millions of the newspaper's articles to train AI systems that are intended to compete with the Times as a reliable source of information. Meanwhile, in July 2023, comedian Sarah Silverman and other writers sued OpenAI and Meta, accusing the companies of using their writing to train AI models without their permission.


OpenAI Working With U.S. Military on Cybersecurity Tools

TIME - Tech

OpenAI is working with the Pentagon on a number of projects including cybersecurity capabilities, a departure from the startup's earlier ban on providing its artificial intelligence to militaries. The ChatGPT maker is developing tools with the U.S. Defense Department on open-source cybersecurity software -- collaborating with DARPA for its AI Cyber Challenge announced last year -- and has had initial talks with the U.S. government about methods to assist with preventing veteran suicide, Anna Makanju, the company's vice president of global affairs, said in an interview at Bloomberg House at the World Economic Forum in Davos on Tuesday. The company recently removed language from its terms of service banning the use of its AI for "military and warfare" applications. Makanju described the decision as part of a broader update of its policies to adjust to new uses of ChatGPT and its other tools. "Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," she said.


How to Navigate an Era of Disruption, Disinformation, and Division

TIME - Tech

Recent years have heralded a particularly disruptive period in human history. Against the backdrop of a warming planet and the spillover effects of the COVID-19 pandemic, we face some of the most challenging economic and geopolitical conditions in decades. And things may only deteriorate from here. These challenges are detailed at length in the World Economic Forum's Global Risks Report 2024, released this week. The report, based on the views of nearly 1,500 global risks experts, policy-makers, and industry leaders, finds that the world's top three risks over the next two years are false information, extreme weather, and societal polarization.


Experts Warn Congress of Dangers AI Poses to Journalism

TIME - Tech

AI poses a grave threat to journalism, experts warned Congress at a hearing on Wednesday. Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law about how AI is contributing to the big tech-fueled decline of journalism. They also talked about intellectual property issues arising from AI models being trained on the work of journalists, and raised alarms about the increasing dangers of AI-powered misinformation. "The rise of big tech has been directly responsible for the decline in local news," said Senator Richard Blumenthal, a Connecticut Democrat and chair of the subcommittee. "First, Meta, Google and OpenAI are using the hard work of newspapers and authors to train their AI models without compensation or credit. Adding insult to injury, those models are then used to compete with newspapers and broadcasters, cannibalizing readership and revenue from the journalistic institutions that generate the content in the first place."


AI Misinformation Is World's Biggest Short-Term Threat, WEF Report Warns

TIME - Tech

False and misleading information supercharged with cutting-edge artificial intelligence that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday. In its latest Global Risks Report, the organization also said an array of environmental risks pose the biggest threats in the longer term. The report was released ahead of the annual elite gathering of CEOs and world leaders in the Swiss ski resort town of Davos and is based on a survey of nearly 1,500 experts, industry leaders and policymakers. The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology also are creating new problems or making existing ones worse. The authors worry that the boom in generative AI chatbots like ChatGPT means that creating sophisticated synthetic content that can be used to manipulate groups of people won't be limited any longer to those with specialized skills. AI is set to be a hot topic next week at the Davos meetings, which are expected to be attended by tech company bosses including OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella and AI industry players like Meta's chief AI scientist, Yann LeCun.


Startups Are Racing to Create the iPhone of AI

TIME - Tech

The competition to build the iPhone of artificial intelligence is heating up. On Tuesday, the technology startup Rabbit unveiled its contender: a small, orange, walkie-talkie-style device that, according to the company, can use "AI agents" to carry out tasks on behalf of the user. In a pre-recorded keynote address shown at the Consumer Electronics Show in Las Vegas, Rabbit's founder Jesse Lyu asks the device to plan him a vacation to London; the keynote shows the device designing him an itinerary and booking his trip. He orders a pizza, books an Uber, and teaches the device how to generate an image using Midjourney. The gadget, called the Rabbit r1, is just the latest in an increasingly active new hardware category: portable AI-first devices that can interact with users in natural language, eschewing screens and app-based operating systems.


Microsoft's OpenAI Ties Face Potential E.U. Merger Probe

TIME - Tech

Microsoft Corp.'s $13 billion investment in OpenAI Inc. risks a full-blown investigation by European Union deals watchdogs, after a mutiny at the ChatGPT creator laid bare deep ties between the two companies. The European Commission said on Tuesday that it's examining whether Microsoft's involvement should be vetted under the bloc's merger rules -- paving the way for a formal probe and even a potential unwinding if it's found to hamper fair competition. The EU move, part of a broader look at artificial intelligence, follows a similar step by the UK's Competition and Markets Authority. "Virtual worlds and generative AI are rapidly developing," said Margrethe Vestager, the EU's antitrust commissioner. "It is fundamental that these new markets stay competitive, and that nothing stands in the way of businesses growing and providing the best and most innovative products to consumers."


How Smart Should Robots Be?

TIME - Tech

When people hear the words "social engineering," they usually think of the supposed nefarious designs of government or an opposing political party. These days, there's a general sense of social upheaval brought on by some invisible force, and we're anxious to blame someone. I can't help feeling that, to some extent, we're tilting at windmills while the real source of social engineering is in our pockets, on our laps, in a myriad of devices and soon, highly lifelike social robots for the home. The future is coming at us fast these days. In October 2023, Boston Dynamics, the robotics company that makes advanced robots that can dance better than some people, announced it had endowed Spot, its highly utilitarian doglike robot, with ChatGPT.


Judges in England and Wales Given Cautious Approval to Use AI in Writing Legal Opinions

TIME - Tech

England's 1,000-year-old legal system -- still steeped in traditions that include wearing wigs and robes -- has taken a cautious step into the future by giving judges permission to use artificial intelligence to help produce rulings. The Courts and Tribunals Judiciary last month said AI could help write opinions but stressed it shouldn't be used for research or legal analyses because the technology can fabricate material and provide misleading, inaccurate and biased information. "Judges do not need to shun the careful use of AI," said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. "But they must ensure that they protect confidence and take full personal responsibility for everything they produce." At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelled out Dec. 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it's a proactive step as government and industry -- and society in general -- react to a rapidly advancing technology alternately portrayed as a panacea and a menace.


E.U. Reaches Deal on World's First Comprehensive AI Rules

TIME - Tech

European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of AI technology that has promised to transform everyday life and spurred warnings of existential dangers to humanity. Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points including generative AI and police use of face recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act. "The EU becomes the very first continent to set clear rules for the use of AI," said European Commissioner Thierry Breton. The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning. Officials were under the gun to secure a political victory for the flagship legislation.