Training Deep Neural Networks with 8-bit Floating Point Numbers

Neural Information Processing Systems

The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems.
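The two accumulation tricks named in the abstract can be illustrated in a few lines. The sketch below is a minimal NumPy simulation, not the paper's implementation: the FP8 formats and hardware accumulators are stood in for by float16 and pure-Python loops, and the function names and chunk size are illustrative choices. Naive low-precision summation "swamps" once the running total dwarfs each addend; chunking bounds that magnitude gap, and stochastic rounding keeps rounding unbiased in expectation.

```python
import numpy as np

def naive_sum_fp16(values):
    """Accumulate every addend into one float16 running total.
    Once the total is large, small addends round away entirely."""
    acc = np.float16(0.0)
    for v in np.asarray(values, dtype=np.float16):
        acc = np.float16(acc + v)  # simulated 16-bit add
    return float(acc)

def chunked_sum_fp16(values, chunk_size=64):
    """Chunk-based accumulation: sum each fixed-size chunk in
    float16, then sum the per-chunk partials. The accumulator
    never grows more than chunk_size addends ahead of its inputs,
    which limits swamping error."""
    vals = np.asarray(values, dtype=np.float16)
    partials = []
    for i in range(0, len(vals), chunk_size):
        acc = np.float16(0.0)
        for v in vals[i:i + chunk_size]:
            acc = np.float16(acc + v)
        partials.append(acc)
    total = np.float16(0.0)
    for p in partials:
        total = np.float16(total + p)
    return float(total)

def stochastic_round(x, ulp, rng):
    """Round x to a multiple of ulp, rounding up with probability
    equal to the fractional remainder, so the result is unbiased
    in expectation (unlike round-to-nearest)."""
    q = np.floor(x / ulp)
    frac = x / ulp - q
    return float((q + (rng.random() < frac)) * ulp)
```

As a concrete case, summing 4096 copies of 0.5 (true sum 2048) with `naive_sum_fp16` stalls at 1024: float16 spacing at 1024 is 1.0, so each further +0.5 rounds away, while the chunked version recovers the exact total. Stochastic rounding addresses the same stall differently, by letting the small addend land with the right probability instead of never.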


Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads).


Tech companies are teaming up to combat scammers

Engadget

The Online Services Accord Against Scams was signed by major tech companies including Google, Microsoft and OpenAI. A coalition of Big Tech companies is working on a more comprehensive solution to combat online scams. As first reported, Google, Microsoft, LinkedIn, Meta, Amazon, OpenAI, Adobe and Match Group announced the signing of the Online Services Accord Against Scams. The new agreement is meant to put up a united industry-wide front against online fraud and scams, particularly those from sophisticated criminal networks that use multiple platforms. According to the report, the measures will include adding fraud detection tools, introducing new user security features, and requiring more robust verification for financial transactions.


Wall Street Is Already Betting on Prediction Markets

WIRED

As the legal war over how to regulate prediction markets rages on, financial institutions are embracing the industry anyway. When Troy Dixon first suggested incorporating prediction markets into the electronic trading platform where he works, he was met with incredulity. "People told us we were crazy," Dixon, Tradeweb's cohead of global markets, tells WIRED. But after the company announced it was partnering with Kalshi in February, Dixon says, the mood changed dramatically. "We've been inundated with calls," he says.


Trump accuses Iran of using AI to spread disinformation

The Japan Times

U.S. President Donald Trump speaks to reporters aboard Air Force One on a flight to Washington on Sunday. SAN FRANCISCO - U.S. President Donald Trump on Sunday accused Iran of using artificial intelligence as a "disinformation weapon" to misrepresent its wartime successes and support. "AI can be very dangerous, we have to be very careful with it," Trump said to reporters on Air Force One shortly after he made a post on his Truth Social platform in which he accused Western media outlets, without evidence, of "close coordination" with Iran to spread AI-generated fake news. The comments come amid renewed tensions between the Federal Communications Commission and broadcasters after Trump took aim at media coverage of the U.S. and Israel's war with Iran. FCC Chairman Brendan Carr on Saturday threatened to pull licenses of broadcasters who did not "correct course" on their coverage.


Top AI ethics and policy issues of 2025 and what to expect in 2026

AIHub

In 2025, generative and agentic systems became essential in key sectors worldwide. This feature highlights the major AI ethics and policy developments of the year, and concludes with a forward-looking perspective on the ethical and policy challenges likely to shape 2026.


Grammarly pulls AI author-impersonation tool after backlash

BBC News

Writing tool Grammarly has disabled an AI feature which mimicked personas of prominent writers, including Stephen King and scientist Carl Sagan, following a backlash from people impersonated. The Expert Review function, which offered writing feedback inspired by the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm which runs Grammarly. The feature was met with resistance, including a multi-million dollar lawsuit, from writers who found their names and reputations used as AI personas without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging the tool had misrepresented the voices of experts. Investigative journalist Julia Angwin, a New York Times contributing opinion writer, is the lead plaintiff in a class-action lawsuit filed against Superhuman and Grammarly in the Southern District of New York.


Building a strong data infrastructure for AI agent success

MIT Technology Review

As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet, while early pilots often succeed, only one in 10 companies have actually scaled their AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to work reliably.


Schools are using AI counselors to track students' mental health. Is it safe?

The Guardian

'You can't replace human connection, human judgment,' warns Sarah Caliboso-Soto, a licensed clinical social worker. As hundreds of schools implement an automated monitoring tool, educators say that students can find talking to a chatbot 'more natural' than confiding in a human. The alert came around 7pm. Brittani Phillips checked her phone. A middle school counselor in Putnam county, Florida, Phillips receives messages from an artificial intelligence-enabled therapy platform that students use during nonschool hours.


The Search Engine for OnlyFans Models Who Look Like Your Crush

WIRED

Presearch's "Doppelgänger" is trying to help people discover adult creators rather than use nonconsensual deepfakes. For three days in February, porn star Alix Lynx flew to Miami for her first exclusive creator gathering where she was in full grind mode: shooting Reels and talking strategy with other creators. "It was kind of like SoHo House for OnlyFans girls," she says of the experience, which is called The Circle and drew more than a dozen sex workers, including Remy LaCroix and Forrest Smith. Lynx, who is a former webcam model turned OnlyFans starlet, has a combined 2 million followers across Instagram, TikTok, and X . She joined OnlyFans in 2017 with "the luxury of having my own following," she says, but those numbers haven't always translated to subscriptions. It's why she was in Miami.