Collaborating Authors

 Generative AI


Left-leaning actress Natasha Lyonne leading efforts to lobby Trump admin on AI regulation

FOX News

The 'Outnumbered' panel discusses how celebrities have reportedly authored a letter to President Donald Trump seeking his protection against artificial intelligence after smearing his name for years. Left-leaning actress Natasha Lyonne is at the forefront of Hollywood efforts to get the government to address creators' concerns about AI infringing on their work. "My primary interest is that people get paid for their life's work," Lyonne said in a report in the Wall Street Journal. The story detailed Lyonne's efforts to lobby Hollywood heavyweights to sign onto her letter to the Trump administration in March, urging against the loosening of regulations around AI, which they deem a potential threat to their intellectual property without proper protections in place. The Hollywood letter said the companies "are arguing for a special government exemption so they can freely exploit America's creative and knowledge industries, despite their substantial revenues and available funds." Lyonne and more than 400 others, including such figures as Paul McCartney, Ron Howard and Ben Stiller, signed the letter. Lyonne, known for her roles in the series "Poker Face" and "Russian Doll," is a partner in a new studio called Asteria, which describes itself as "an artist-led generative AI film and animation studio powered by the first clean and ethical AI model." Like many figures in Hollywood, Lyonne is not a fan of the president: she endorsed Kamala Harris in 2024 and, in a now-deleted 2020 X post, wrote about turning Texas blue to defeat Trump. Earlier this year, she told The Hollywood Reporter she was concerned for marginalized communities, saying of Trump, "It's very weird to have like a showbiz guy in charge, is surreal."


Perplexity adds a Max tier just as expensive as its rivals

Mashable

Perplexity has added another subscription tier, one that the company calls its "most powerful." And it's got a price tag to match. Announced by the Nvidia- and Jeff Bezos-backed company in a Wednesday blog post, Perplexity Max is the new premium offering for the AI search engine. An upgrade to the existing Perplexity Pro tier, Max is now available on iOS and the web app, with Android coming soon. As well as everything included in Pro, Max includes unlimited use of Labs (the AI tool that can generate projects like reports, presentations, and simple web apps in about 10 minutes), early access to new features like the upcoming Comet agentic search browser, access to advanced AI models like Anthropic's Claude Opus 4 and OpenAI's o3-pro, and "priority support" for these models. It's the same monthly price as the top tier of OpenAI's offering, ChatGPT Pro; in comparison, the most expensive tier of Google's AI, Gemini, is Google AI Ultra, which costs $249.99 per month.


Cloudflare just changed the internet, and it's bad news for the AI giants

ZDNet

The major internet Content Delivery Network (CDN), Cloudflare, has declared war on AI companies. Starting July 1, Cloudflare blocks AI web crawlers by default from accessing content on your websites without permission or compensation. The change addresses a real problem. My own small site, Practical Technology, where I track all my stories, has been slowed dramatically at times by AI crawlers. Numerous website owners have reported that AI crawlers, such as OpenAI's GPTBot and Anthropic's ClaudeBot, generate massive volumes of automated requests that clog up websites so they're as slow as sludge.
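That crawler traffic is something site owners can measure for themselves. As a loose illustration only (not Cloudflare's own mechanism, which enforces the new default at its network edge), the minimal Python sketch below tallies requests from the AI crawler user agents named above by scanning an ordinary web-server access log; the log path and the assumption that the User-Agent string appears in each log line are hypothetical.

```python
# Illustrative sketch, not Cloudflare's implementation: count access-log
# requests whose User-Agent mentions an AI crawler named in the article.
from collections import Counter

AI_CRAWLER_AGENTS = ("GPTBot", "ClaudeBot")  # user agents cited above


def count_ai_crawler_hits(log_path: str) -> Counter:
    """Tally log lines whose User-Agent field mentions a known AI crawler."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for agent in AI_CRAWLER_AGENTS:
                if agent in line:
                    hits[agent] += 1
    return hits


if __name__ == "__main__":
    # Hypothetical log location; adjust for your own server setup.
    print(count_ai_crawler_hits("/var/log/nginx/access.log"))
```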


The Download: how AI could improve construction site safety, and our Roundtables conversation with Karen Hao

MIT Technology Review

More than 1,000 construction workers die on the job each year in the US, making it the most dangerous industry for fatal slips, trips, and falls. A new AI tool called Safety AI could help to change that. It analyzes the progress made on a construction site each day and flags conditions that violate Occupational Safety and Health Administration rules, with what its creator Philip Lorenzo claims is 95% accuracy. Lorenzo says Safety AI is the first of multiple emerging AI construction safety tools to use generative AI to flag safety violations. But as the 95% success rate suggests, Safety AI is not a flawless and all-knowing intelligence.


How generative AI could help make construction sites safer

MIT Technology Review

To combat the shortcuts and risk-taking, Lorenzo is working on a tool for the San Francisco–based company DroneDeploy, which sells software that creates daily digital models of work progress from videos and images, known in the trade as "reality capture." The tool, called Safety AI, analyzes each day's reality capture imagery and flags conditions that violate Occupational Safety and Health Administration (OSHA) rules, with what he claims is 95% accuracy. That means that for any safety risk the software flags, there is 95% certainty that the flag is accurate and relates to a specific OSHA regulation. Launched in October 2024, it's now being deployed on hundreds of construction sites in the US, Lorenzo says, and versions specific to the building regulations in countries including Canada, the UK, South Korea, and Australia have also been deployed. Safety AI is one of multiple AI construction safety tools that have emerged in recent years, from Silicon Valley to Hong Kong to Jerusalem.


Here's What Mark Zuckerberg Is Offering Top AI Talent

WIRED

As Mark Zuckerberg staffs up Meta's new superintelligence lab, he's offering top research talent pay packages of up to $300 million over four years, with more than $100 million in total compensation for the first year, WIRED has learned. Meta has made at least 10 of these staggeringly high offers to OpenAI staffers. One high-ranking researcher was pitched on the role of chief scientist but turned it down, according to multiple sources with direct knowledge of the negotiations. While the pay package includes equity, in the first year the stock vests immediately, sources say. "That's about how much it would take for me to go work at Meta," says one OpenAI staffer who spoke with WIRED on the condition of anonymity as they aren't authorized to speak publicly about the company.


Sam Altman Slams Meta's AI Talent Poaching Spree: 'Missionaries Will Beat Mercenaries'

WIRED

OpenAI CEO Sam Altman is hitting back at Meta CEO Mark Zuckerberg's recent AI talent poaching spree. In a full-throated response sent to OpenAI researchers Monday evening and obtained by WIRED, Altman made his pitch for why staying at OpenAI is the only answer for those looking to build artificial general intelligence, hinting that the company is evaluating compensation for the entire research organization. He also dismissed Meta's recruiting efforts, saying what the company is doing could lead to deep cultural problems down the road. "We have gone from some nerds in the corner to the most interesting people in the tech industry (at least)," he wrote on Slack. "AI Twitter is toxic; Meta is acting in a way that feels somewhat distasteful; I assume things will get even crazier in the future. After I got fired and came back I said that was not the craziest thing that would happen in OpenAI history; certainly neither is this."


Meta's new AI lab aims to deliver 'personal superintelligence for everyone' - whatever that means

ZDNet

Meta has launched a new internal R&D division devoted to building artificial superintelligence, Bloomberg reported on Monday. The division, called Meta Superintelligence Labs (MSL), will be led by Alexandr Wang and Nat Friedman, the former CEOs of Scale AI and GitHub, respectively. It will also be joined by seven ex-OpenAI engineers, according to an internal memo from CEO Mark Zuckerberg obtained by CNBC. Meta's plans to launch the new division were first reported last month. "As the pace of AI progress accelerates, developing superintelligence is coming into sight," Zuckerberg wrote in the memo.


Meta poaching talent from OpenAI hits meme status

Mashable

Talent poaching doesn't often go viral, but this story has all the right ingredients: Meta, OpenAI, and $100 million signing bonuses. Meta has been aggressively pursuing AI talent from top labs, mostly OpenAI, in an effort to get a leg up on its competition. The details of the developing story are so dramatic and foreign to the rest of the world that it's become stranger-than-fiction fodder for the internet watching it unfold in real time. In case you haven't been following the saga, Meta CEO Mark Zuckerberg is frustrated with his company's progress in the quest for AI supremacy. Earlier this month, reports surfaced of Zuckerberg personally recruiting talent for a superintelligence group to accelerate Meta's AI research and development efforts.