Litigation
Elon Musk files an injunction to stop OpenAI from becoming a for-profit
Elon Musk asked a court to stop Sam Altman, Greg Brockman, OpenAI, and other co-defendants from transitioning the ChatGPT maker into a for-profit enterprise. Per TechCrunch, Musk filed a motion on Friday evening in the U.S. District Court for the Northern District of California, accusing Altman, Brockman, OpenAI board members, and stakeholder Microsoft of "violating the terms of Musk's foundational contributions to the charity," and engaging in anticompetitive behavior as OpenAI seeks to convert from non-profit to for-profit status. "Plaintiffs and the public need a pause," said the court filing. "OpenAI's path from a non-profit to for-profit behemoth is replete with per se anticompetitive practices, flagrant breaches of its charitable mission, and rampant self-dealing." Musk was an early investor and board member of OpenAI, but broke ties with the company in 2018.
Elon Musk asks court to stop OpenAI from becoming a for-profit
Elon Musk's attorneys filed for an injunction against OpenAI and Microsoft on Friday accusing the two of anticompetitive practices and seeking to stop OpenAI's conversion to a for-profit company. The filing, spotted by TechCrunch, also names OpenAI CEO Sam Altman, OpenAI President Greg Brockman, Microsoft's Dee Templeton and LinkedIn co-founder Reid Hoffman as defendants. Musk first sued OpenAI earlier this year for allegedly violating its founding mission of building AI "for the benefit of humanity," but withdrew the lawsuit a few months later. He then filed another lawsuit against OpenAI in a California federal court in August, and recently added Microsoft as a defendant. The new motion accuses OpenAI and Microsoft of telling investors not to fund OpenAI's competitors, such as Musk's xAI, of "benefitting from wrongfully obtained competitively sensitive information or coordination" through its relationship with Microsoft, and other alleged antitrust violations.
Canadian publishers take OpenAI to court
In the newest legal battle between artificial intelligence and pretty much everybody else, OpenAI is once again on the chopping block. A coalition of Canadian news publishers is seeking up to 20,000 Canadian dollars for each article used by OpenAI, The Guardian reported. "Rather than seek to obtain the information legally, OpenAI has elected to brazenly misappropriate the News Media Companies' valuable intellectual property and convert it for its own uses, including commercial uses, without consent or consideration," reads the filing, which The Verge published. The filing goes on to allege that OpenAI has "capitalized on the commercial success of its GPT models, building an expansive suite of GPT-based products and services, and raising significant capital -- all without obtaining a valid license from any of the News Media Companies. In doing so, OpenAI has been substantially and unjustly enriched to the detriment of the News Media Companies."
Canadian news organizations sue OpenAI for ChatGPT copyright infringement
The joint lawsuit accuses the company of "capitalizing and profiting" from the unauthorized use of their content for ChatGPT. The legal action was filed in the Ontario Superior Court of Justice. The plaintiffs include CBC/Radio-Canada, Postmedia, Metroland, the Toronto Star, the Globe and Mail and The Canadian Press. They're seeking punitive damages from OpenAI, payments for any profits the ChatGPT creator made from using their news articles and a ban on further use of their content. "OpenAI is capitalizing and profiting from the use of this content, without getting permission or compensating content owners," the news organizations said in a joint statement.
Luxury brands are betting big on India, and so are counterfeiters
New Delhi/Kolkata, India – A pair of black Dandy Pik Pik loafers covered in sharp, uneven spikes and shiny studs was part of the evidence before Judge Pratibha M Singh in an intellectual-property lawsuit brought by French luxury shoe brand Christian Louboutin against an Indian shoe manufacturer in the Delhi High Court last year. Louboutin's lawyers had already regaled the court with anecdotes about the iconic status of their shoes. The signature stilettos, with their luxuriant red soles, had starred in movies like The Devil Wears Prada and Sex and the City, and were registered as a trademark in India and other countries, they said. Riding on the brand's reputation, the lawyers were now trying to make the point that spiked shoes, too, were unique to Christian Louboutin, and that the defendant, Shutiq – The Shoe Boutique, was manufacturing and selling their designs in India illegally. Incriminating evidence presented to Judge Singh included a response from ChatGPT stating that Christian Louboutin is known for spiked men's shoes. Then there were photographs of Shutiq's 26 spiked and bedazzled shoes next to Louboutin originals, including Dandy Pik Pik.
Stanford prof accused of using AI to fake testimony in Minnesota case against conservative YouTuber
A Stanford University "misinformation expert" has been accused of using artificial intelligence (AI) to craft testimony later used by Minnesota Attorney General Keith Ellison in a politically charged case. Jeff Hancock, a professor of communications and founder of the vaunted school's Social Media Lab, provided an expert declaration in a case involving a satirical conservative YouTuber named Christopher Kohls. The court case is about Minnesota's recent ban on political deepfakes, which the plaintiffs argue is an attack on free speech.
The New York Times says OpenAI deleted evidence in its copyright lawsuit
Physicist Stephen Hawking told Last Week Tonight's John Oliver a chilling but memorable hypothetical story a decade ago about the potential dangers of AI. The gist: a group of scientists builds a superintelligent computer and asks it, "Is there a God?" The computer answers, "There is now," and a bolt of lightning zaps the plug, preventing it from being shut down. Let's hope that's not what happened with OpenAI and some missing evidence from the New York Times' copyright lawsuit. Wired reported that a court declaration filed by the New York Times on Wednesday says OpenAI's engineers accidentally erased training-data evidence that the paper's team had spent a long time researching and compiling.
OpenAI accidentally deleted potential evidence in New York Times copyright lawsuit case
As first reported by TechCrunch, counsel for the Times and its co-plaintiff the Daily News sent a letter to the judge overseeing the case, detailing how "an entire week's worth of its experts' and lawyers' work" was "irretrievably lost." According to the letter, on Nov. 14, "programs and search result data stored on one of the dedicated virtual machines was erased by OpenAI engineers." The case hinges on the Times being able to prove that OpenAI's models copied and used its content without compensation or credit. OpenAI was able to recover most of the erased data, but the "folder structure and file names" of the work were unrecoverable, rendering the data unusable. Now, the plaintiffs' counsel must start their evidence gathering from scratch.
New York Times Says OpenAI Erased Potential Lawsuit Evidence
This week, the Times alleged that OpenAI's engineers inadvertently erased data the paper's team spent more than 150 hours extracting as potential evidence. OpenAI was able to recover much of the data, but the Times' legal team says it's still missing the original file names and folder structure. According to a declaration filed to the court Wednesday by Jennifer B. Maisel, a lawyer for the newspaper, this means the information "cannot be used to determine where the news plaintiffs' copied articles" may have been incorporated into OpenAI's artificial intelligence models. "We disagree with the characterizations made and will file our response soon," OpenAI spokesperson Jason Deutrom told WIRED in a statement. The New York Times declined to comment.
Four ways to protect your art from AI
Artists and writers have launched several lawsuits against AI companies, arguing that their work has been scraped into databases for training AI models without consent or compensation. Tech companies have responded that anything on the public internet falls under fair use. But it will be years until we have a legal resolution to the problem. Unfortunately, there is little you can do if your work has been scraped into a data set and used in a model that is already out there. You can, however, take steps to prevent your work from being used in the future.
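One of the preventive steps such guides typically recommend (assuming your work lives on a site you control) is blocking known AI crawlers via robots.txt. OpenAI publicly documents its GPTBot crawler and says it respects robots.txt directives, and Google does the same for its Google-Extended training crawler; a minimal sketch:

```text
# robots.txt — ask OpenAI's crawler not to fetch anything on this site
User-agent: GPTBot
Disallow: /

# Google's AI-training crawler can be opted out the same way
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is a voluntary convention: compliant crawlers honor it, but it cannot retroactively remove work already scraped into existing datasets, which is why the article frames this as protection "in the future."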