hood


Big tech has spent $155bn on AI this year. It's about to spend hundreds of billions more

The Guardian

The US's largest companies have spent 2025 locked in a competition to spend more money than one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. Based on the most recent financial disclosures of Silicon Valley's biggest players, the race is about to accelerate to hundreds of billions in a single year. Over the past two weeks, Meta, Microsoft, Amazon, and Alphabet, Google's parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions. Capex, as the term is abbreviated, is a proxy for technology companies' spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips.


Shocking moment Tesla mows down deer at full speed while in self-drive mode

Daily Mail - Science & tech

This is the shocking moment a Tesla in 'Full Self-Driving' (FSD) mode plowed into a deer standing in the middle of the road. The driver, Paul S, did not confirm when or where the crash occurred, or what model Tesla he was driving. But dashcam footage shows the vehicle driving down a clear two-lane highway at night moments before the animal suddenly came into view. The Tesla rammed directly into the deer without stopping or slowing down, 'even after hitting the deer on full speed,' Paul said. 'Huge surprise after getting a dozen false stops every day!' he added.


I Saw the Future of the City in Los Angeles. Now, the City Has to Make a Choice.

Slate

I saw two visions of the future in Los Angeles last weekend. First, a Waymo Jaguar I-PACE pulled over to pick me up on a busy street in downtown L.A., spinning lidar sensors mounted on the hood like a second set of side mirrors. We inched comfortably through stop-and-go Saturday afternoon traffic and made an impressive left turn ahead of two lanes of oncoming cars as I said my prayers in the passenger seat. On the other hand, the robot lost its nerve trying to turn right across a crosswalk. As pedestrians cleared and the light turned from green to yellow to red, the Waymo remained fixed to the spot.


Microsoft's OpenAI partnership was born from Google envy

Engadget

It turns out the lay of today's AI landscape can be traced back to -- what do you know -- fear, jealousy and intense capitalist ambition. Emails revealed in the Department of Justice's antitrust case against Google, first reported by Business Insider, show Microsoft executives expressing alarm and envy over Google's AI lead. That spurred an urgency that led to the Windows maker's initial billion-dollar investment in its now-indispensable partner, OpenAI. In a heavily redacted 2019 email thread titled "Thoughts on OpenAI," Microsoft CEO Satya Nadella forwards a lengthy message from CTO Kevin Scott to CFO Amy Hood. "Very good email that explains, why I want us to do this ... and also why we will then ensure our infra folks execute," Nadella wrote.


AI Tools Are Still Generating Misleading Election Images

WIRED

Despite years of evidence to the contrary, many Republicans still believe that President Joe Biden's win in 2020 was illegitimate. A number of election-denying candidates won their primaries during Super Tuesday, including Brandon Gill, the son-in-law of right-wing pundit Dinesh D'Souza and promoter of the debunked 2000 Mules film. Going into this year's elections, claims of election fraud remain a staple for candidates running on the right, fueled by dis- and misinformation, both online and off. And the advent of generative AI has the potential to make the problem worse. A new report from the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, found that even though generative AI companies say they've put policies in place to prevent their image-creating tools from being used to spread election-related disinformation, researchers were able to circumvent their safeguards and create the images anyway.


New AI video tools increase worries of deepfakes ahead of elections

Al Jazeera

The video that OpenAI released to unveil its new text-to-video tool, Sora, has to be seen to be believed. The demonstration reportedly prompted movie producer Tyler Perry to pause an $800m studio investment. Tools like Sora promise to translate a user's vision into realistic moving images with a simple text prompt, the logic goes, making studios obsolete. Others worry that artificial intelligence (AI) like this could be exploited by those with darker imaginations. Malicious actors could use these services to create highly realistic deepfakes, confusing or misleading voters during an election or simply causing chaos by seeding divisive rumours.


San Francisco crowd sets self-driving car on fire

Washington Post - Technology News

Vandi told Reuters in a direct message on X, formerly Twitter, that people were celebrating the Lunar New Year on Saturday evening by setting off fireworks. He said he saw a person jump onto the hood of the vehicle and break its windshield, and another later jumped onto the hood as the crowd clapped. Vandi could not be reached for comment Monday morning.


HOOD: Real-Time Robust Human Presence and Out-of-Distribution Detection with Low-Cost FMCW Radar

Kahya, Sabri Mustafa, Yavuz, Muhammet Sami, Steinbach, Eckehard

arXiv.org Artificial Intelligence

Human presence detection in indoor environments using millimeter-wave frequency-modulated continuous-wave (FMCW) radar is challenging due to the presence of moving and stationary clutters in indoor places. This work proposes "HOOD" as a real-time robust human presence and out-of-distribution (OOD) detection method by exploiting 60 GHz short-range FMCW radar. We approach the presence detection application as an OOD detection problem and solve the two problems simultaneously using a single pipeline. Our solution relies on a reconstruction-based architecture and works with radar macro and micro range-Doppler images (RDIs). HOOD aims to accurately detect the "presence" of humans in the presence or absence of moving and stationary disturbers. Since it is also an OOD detector, it aims to detect moving or stationary clutters as OOD in humans' absence and predicts the current scene's output as "no presence." HOOD is an activity-free approach that performs well in different human scenarios. On our dataset collected with a 60 GHz short-range FMCW Radar, we achieve an average AUROC of 94.36%. Additionally, our extensive evaluations and experiments demonstrate that HOOD outperforms state-of-the-art (SOTA) OOD detection methods in terms of common OOD detection metrics. Our real-time experiments are available at: https://muskahya.github.io/HOOD
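The core idea of a reconstruction-based OOD detector like the one the abstract describes is that a model trained to reconstruct in-distribution inputs (here, radar range-Doppler images with a human present) reconstructs unfamiliar inputs poorly, so the reconstruction error itself becomes the detection score. A minimal sketch of that decision rule, using a simple PCA subspace in place of the paper's learned architecture (all function names, the flattened-image representation, and the thresholding scheme are illustrative assumptions, not HOOD's actual implementation):

```python
import numpy as np

def fit_reconstructor(train, k=8):
    # Learn a k-dimensional linear subspace from flattened in-distribution
    # images (stand-in for training a reconstruction network on "presence" RDIs).
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(x, mean, basis):
    # Project onto the learned subspace and measure what the model fails
    # to reconstruct; in-distribution inputs leave a small residual.
    centered = x - mean
    recon = centered @ basis.T @ basis
    return np.linalg.norm(centered - recon, axis=-1)

def detect_presence(x, mean, basis, threshold):
    # Low error -> in-distribution -> "presence";
    # high error -> OOD (e.g. clutter only) -> "no presence".
    err = reconstruction_error(x, mean, basis)
    return np.where(err < threshold, "presence", "no presence")
```

In practice the threshold would be calibrated on held-out data (which is also what metrics like AUROC summarize across all possible thresholds); the sketch only shows how a single reconstruction-error score yields the binary "presence" / "no presence" output described in the abstract.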


Protesters develop novel way to build cone-sensus against driverless cars

The Guardian

A group of San Francisco organizers are encouraging people to put traffic cones on the hoods of driverless vehicles as a form of protest against the cars' expansion on city streets. A video of the group's actions with step-by-step instructions on how to disable a robo-taxi with a cone has gone viral on Twitter and sparked intense debates about the pros and cons of autonomous vehicles and the value of protesting in this way. Safe Street Rebel, a group of organizers that advocate for pedestrian safety and reducing the number of cars on roads, are behind this stunt. They hope that it will raise the public's awareness of the potential dangers driverless taxis pose before a pivotal vote by the California public utilities commission set to take place on 13 July. The vote would allow Cruise, a company controlled by the automaker General Motors, and Waymo, a Google spinoff, to charge people for rides as a part of the state's driverless autonomous vehicles passenger service deployment program, according to the meeting agenda.


OpenAI Threatened With Lawsuit Over ChatGPT Defamation

#artificialintelligence

For the first time, OpenAI may face a lawsuit over ChatGPT-generated defamation. An Australian mayor named Brian Hood is, according to Reuters, peeved that ChatGPT wrongfully identified him as a guilty party in a "foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s," with the chatbot apparently even claiming that Hood had served prison time for his so-called crime. Hood was involved in the scandal -- but as the whistleblower, not the crime-doer. Yeah, we'd be pissed, too. Per Reuters, Hood's lawyers sent a "letter of concern" to OpenAI back on March 21 demanding that the company fix its chatbot's error within 28 days.