This story is part of Future Tense Fiction, a monthly series of short stories from Future Tense and Arizona State University's Center for Science and the Imagination about how technology and science will change our lives. The other homesteaders, mostly engineers and technicians, seemed to enjoy outings in the lunar rover. But for Eugene, this was a grinding chore that frayed his nerves. Suddenly, Mel's soothing feminine voice reverberated in his cochlear implant. "Would you like some affirmations? You are a well-respected judge … You have worked hard to get here, to this special time and place …" As Mel went on, it seemed the suit hugged his chest a little less tightly. He relaxed his grip on the wheel. Why, he wondered, had he not remembered this technique without her prompting? Strange how the basic principles of cognitive psych were always slipping from his mind. Fortunately, she was there to remind him. "You are someone who wants what is best for the American lunar community and ...
The lawyer representing victims of a fatal Tesla crash blamed the company's Autopilot driver-assistance system, saying that "a car company should never sell consumers experimental vehicles," in the opening statement of a California trial on Thursday. The case stems from a civil lawsuit alleging that the Autopilot system caused the owner of a Tesla Model 3, Micah Lee, to suddenly veer off a highway east of Los Angeles at 65 mph (105 kph), where his car struck a palm tree and burst into flames. The 2019 crash killed Lee and seriously injured his two passengers, including an eight-year-old boy who was disemboweled, according to court documents. The lawsuit, filed against Tesla by the passengers and Lee's estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car. Jonathan Michaels, an attorney for the plaintiffs, said in his opening statement at the trial in Riverside, California, that when the 37-year-old Lee bought Tesla's "full self-driving capability package" for $6,000 for his Model 3 in 2019, the system was in "beta," meaning it was not yet ready for release.
When I learned that Meta's programmers downloaded 183,000 books for a database to teach the company's generative A.I. machines how to write, I was curious whether any of my own books had been fed into the crusher. Alex Reisner of the Atlantic has provided a handy search tool: type in an author's name, and out come all of his or her books that LLaMA used. I typed "Fred Kaplan" and found that three of my six books (1959, Dark Territory, and The Insurgents) had been assimilated into the digital Borg. My first reaction, like that of many other authors, was outrage at the violation. However, my second reaction, also, I assume, like that of many other authors, was outrage that the program didn't include my other three books (The Bomb, Daydream Believers, and The Wizards of Armageddon). Were there really 182,997 books that were better than those three?
Editor's note: This article is part of The Atlantic's series on Books3. You can search the database for yourself here, and read about its origins here. This summer, I reported on a data set of more than 191,000 books that were used without permission to train generative-AI systems by Meta, Bloomberg, and others. "Books3," as it's called, was based on a collection of pirated ebooks that includes travel guides, self-published erotic fiction, novels by Stephen King and Margaret Atwood, and a lot more. Books play a crucial role in the training of generative-AI systems.
Battles between human and artificial intelligence are no longer science fiction. The strikes in Hollywood led by the united guilds of actors and screenwriters have a common, intangible enemy: the algorithms and computer-generated imagery that studios are increasingly deploying to render them redundant. In New York last week, a new front in that stand-off was opened by a group of American novelists – including John Grisham, Jodi Picoult and Jonathan Franzen – who are suing OpenAI, the creators of the ChatGPT program. The legal case may help to define and protect the increasingly porous boundaries between human creativity and the robots that mimic it. In the meantime, Amazon, whose marketplace is these days flooded with self-published books written by AI, has taken its first half-hearted steps to curtail that practice.
The proposed class-action lawsuit filed late on Tuesday by the Authors Guild joins several others from writers, source-code owners and visual artists against generative AI providers. In addition to Microsoft-backed OpenAI, similar lawsuits are pending against Meta Platforms and Stability AI over the data used to train their AI systems. Other authors involved in the latest lawsuit include The Lincoln Lawyer writer Michael Connelly, lawyer-novelists David Baldacci and Scott Turow, and novelists Sylvia Day, Jonathan Franzen and Elin Hilderbrand, among others. An OpenAI spokesperson said on Wednesday that the company respects authors' rights and is "having productive conversations with many creators around the world, including the Authors Guild".
The lawsuit is the latest salvo in the ongoing debate over how AI tools should be trained and whether the companies behind them owe anything to the original creators of the training data. Large language models are generally trained on billions of sentences of text pulled from the internet, including news stories, Wikipedia and comments on social media sites. OpenAI and other AI companies such as Google and Microsoft do not say specifically what data they use, but AI critics have long suspected that it includes well-known collections of pirated books that have circulated online for years.
For a while now, Washington has been wrestling with two big forces shaping technology: social media and artificial intelligence. Who should regulate them, and how? Currently, Congress is considering a bill that would regulate how social media companies treat minors: the Kids Online Safety Act. Although it has bipartisan support, KOSA is not without controversy. Several critics have called it "government censorship." One group, the Electronic Frontier Foundation, says it is "one of the most dangerous bills in years."
Hunter Biden filed a lawsuit against former President Donald Trump aide Garrett Ziegler on Wednesday, alleging that Ziegler had violated federal computer laws by hacking into the now-infamous laptop that was left in a Delaware repair shop in 2019. The lawsuit, filed in Los Angeles, accuses Ziegler, his company Marco Polo USA, and 10 unidentified associates of spreading "tens of thousands of emails, thousands of photos, and dozens of videos and recordings" from the laptop, some of which were considered "pornographic." Ziegler's company website claims to be a nonprofit research group "exposing corruption & blackmail." The website has several sections pertaining to Biden's laptop, including his emails, text messages, phone calls and financial data, which culminate in a massive "online searchable database."
A tech company that boasts about its ability to use artificial intelligence to predict crime is in the midst of a privacy lawsuit with Meta, formerly Facebook, which wants the company banned from its social media platform. The New York City and Los Angeles police departments, two of the largest police agencies in the U.S., are among a growing list of law enforcement agencies in the U.S. and around the world to contract with Voyager Labs. In 2018, the New York Police Department agreed to a nearly $9 million deal with Voyager Labs, which claims it can use AI to predict crimes, according to documents obtained by the Surveillance Technology Oversight Project (STOP), The Guardian reported. The company bills itself as a "world leader" in AI-based analytics investigations that can comb through mounds of information from all corners of the internet – including social media and the dark web – to provide insight, uncover potential risks and predict future crimes.