Child abuse material 'systemic' on Elon Musk's X amid Grok scandal, Australian online safety regulator warned
Australia's eSafety commissioner wrote to X in January after its AI chatbot Grok was used to generate sexualised images of women and children online. The Australian online safety regulator warned Elon Musk's X amid the Grok sexualised image generation scandal that it found child abuse material was "particularly systemic" on X and more accessible than on "any other mainstream service", correspondence obtained by Guardian Australia reveals. The eSafety commissioner wrote to X in January after its chatbot, Grok, was used to generate sexualised images of women and children online, which the prime minister, Anthony Albanese, described as "abhorrent". In the letter, obtained by Guardian Australia under freedom of information laws, eSafety's general manager of regulatory operations, Heidi Snell, pointed to Musk's promise when taking over the platform in 2022 that "removing child exploitation is priority #1", but said "the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X".
- Oceania > Australia (0.95)
- North America > United States (0.31)
- Europe > Ukraine (0.06)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
Training Deep Neural Networks with 8-bit Floating Point Numbers
The state-of-the-art hardware platforms for training deep neural networks are moving from traditional single precision (32-bit) computations towards 16 bits of precision - in large part due to the high energy efficiency and smaller bit storage associated with using reduced-precision representations. However, unlike inference, training with numbers represented with less than 16 bits has been challenging due to the need to maintain fidelity of the gradient computations during back-propagation. Here we demonstrate, for the first time, the successful training of deep neural networks using 8-bit floating point numbers while fully maintaining the accuracy on a spectrum of deep learning models and datasets. In addition to reducing the data and computation precision to 8 bits, we also successfully reduce the arithmetic precision for additions (used in partial product accumulation and weight updates) from 32 bits to 16 bits through the introduction of a number of key ideas including chunk-based accumulation and floating point stochastic rounding. The use of these novel techniques lays the foundation for a new generation of hardware training platforms with the potential for 2-4 times improved throughput over today's systems.
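The two key ideas named in the abstract can be illustrated with a toy numerical sketch. This is a software emulation only, not the paper's hardware design: the function names, chunk size, and use of Python floats (rather than 16-bit adders) are all illustrative assumptions.

```python
import random

def chunked_sum(values, chunk_size=64):
    """Accumulate in fixed-size chunks, then sum the per-chunk partials.

    Chunk-based accumulation bounds how large the running accumulator
    can grow relative to each addend, which limits "swamping" error
    when a reduced-precision (e.g. 16-bit) adder is used. Here the
    adds are ordinary Python floats, purely for illustration.
    """
    partials = []
    for start in range(0, len(values), chunk_size):
        s = 0.0
        for v in values[start:start + chunk_size]:
            s += v  # in hardware this add would be low-precision
        partials.append(s)
    return sum(partials)  # second-level reduction over chunk partials

def stochastic_round(x, step):
    """Round x to a multiple of `step`, choosing up or down with
    probability proportional to proximity, so the rounding error is
    zero in expectation across many updates (e.g. weight updates)."""
    lower = (x // step) * step
    frac = (x - lower) / step
    return lower + step if random.random() < frac else lower
```

For example, `stochastic_round(2.3, 1.0)` returns 3.0 about 30% of the time and 2.0 otherwise, so small gradient contributions are not systematically rounded away as they would be with round-to-nearest.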
Where OpenAI's technology could show up in Iran
Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. It's not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads).
- Asia > Middle East > Iran (0.64)
- North America > United States > Massachusetts (0.05)
- Asia > Middle East > Kuwait (0.05)
- Asia > China (0.05)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.48)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Tech companies are teaming up to combat scammers
The Online Services Accord Against Scams was signed by major tech companies including Google, Microsoft and OpenAI. A coalition of Big Tech companies is working on a more comprehensive solution to combat online scams. Google, Microsoft, LinkedIn, Meta, Amazon, OpenAI, Adobe and Match Group announced the signing of the Online Services Accord Against Scams. The new agreement is meant to put up a united industry-wide front against online fraud and scams, particularly those run by sophisticated criminal networks operating across multiple platforms. According to the report, the measures will include adding fraud detection tools, introducing new user security features, and requiring more robust verification for financial transactions.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.41)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Communications > Social Media (0.92)
- (3 more...)
Wall Street Is Already Betting on Prediction Markets
As the legal war over how to regulate prediction markets rages on, financial institutions are embracing the industry anyway. When Troy Dixon first suggested incorporating prediction markets into the electronic trading platform where he works, he was met with incredulity. "People told us we were crazy," Dixon, Tradeweb's cohead of global markets, tells WIRED. But after the company announced it was partnering with Kalshi in February, Dixon says, the mood changed dramatically. "We've been inundated with calls," he says.
- North America > United States > New York > New York County > New York City (0.42)
- Asia > Middle East > Iran (0.17)
- Asia > Middle East > UAE (0.05)
- (9 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Trading (1.00)
Trump accuses Iran of using AI to spread disinformation
U.S. President Donald Trump speaks to reporters aboard Air Force One on a flight to Washington on Sunday. SAN FRANCISCO - U.S. President Donald Trump on Sunday accused Iran of using artificial intelligence as a "disinformation weapon" to misrepresent its wartime successes and support. "AI can be very dangerous, we have to be very careful with it," Trump said to reporters on Air Force One shortly after he made a post on his Truth Social platform in which he accused Western media outlets, without evidence, of "close coordination" with Iran to spread AI-generated fake news. The comments come amid renewed tensions between the Federal Communications Commission and broadcasters after Trump took aim at media coverage of the U.S. and Israel's war with Iran. FCC Chairman Brendan Carr on Saturday threatened to pull licenses of broadcasters who did not "correct course" on their coverage.
- Asia > Middle East > Iran (1.00)
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > Middle East > Israel (0.25)
- (7 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- North America > United States > Vermont (0.05)
- Europe > United Kingdom > England (0.04)
- Asia > Singapore (0.04)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance (1.00)
- (3 more...)
Grammarly pulls AI author-impersonation tool after backlash
Writing tool Grammarly has disabled an AI feature which mimicked the personas of prominent writers, including Stephen King and scientist Carl Sagan, following a backlash from those impersonated. The Expert Review function, which offered writing feedback inspired by the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm which runs Grammarly. The feature was met with resistance, including a multimillion-dollar lawsuit, from writers who found their names and reputations used as AI personas without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging the tool had misrepresented the voices of experts. Investigative journalist Julia Angwin, a New York Times contributing opinion writer, is the lead plaintiff in a class-action lawsuit filed against Superhuman and Grammarly in the Southern District of New York.
- North America > United States > New York (0.25)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (12 more...)
- Leisure & Entertainment (1.00)
- Law > Litigation (1.00)
Building a strong data infrastructure for AI agent success
As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet, while early pilots often succeed, only one in 10 companies has actually scaled its AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies see delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to use data reliably.
Schools are using AI counselors to track students' mental health. Is it safe?
'You can't replace human connection, human judgment,' warns Sarah Caliboso-Soto, a licensed clinical social worker. As hundreds of schools implement an automated monitoring tool, educators say that students can find talking to a chatbot 'more natural' than confiding in a human. The alert came around 7pm. Brittani Phillips checked her phone. A middle school counselor in Putnam county, Florida, Phillips receives messages from an artificial intelligence-enabled therapy platform that students use during nonschool hours.
- North America > United States > Florida > Putnam County (0.24)
- North America > United States > California (0.14)
- Oceania > Australia (0.04)
- (3 more...)
- Health & Medicine (1.00)
- Government (1.00)
- Leisure & Entertainment > Sports (0.95)
- Education > Educational Setting > K-12 Education > Middle School (0.68)
- Information Technology > Communications > Social Media (0.97)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.36)