Well File:

Microsoft is bringing Elon Musk's AI models to its cloud

The Japan Times

Microsoft is adding models from Elon Musk's xAI to its artificial intelligence marketplace. Grok 3, which Musk's AI outfit introduced earlier this year, will be available on Microsoft's cloud computing platform, the company said Monday. Microsoft and its biggest rivals in selling rented computing power, including Amazon and Google, are vying to be the place where AI applications are built and deployed. That has made a battleground out of the competition to host the latest models and build sophisticated controls to manage how they're used.


Can AI therapists really be an alternative to human help?

BBC News

Character.ai and other bots such as ChatGPT are based on "large language models" of artificial intelligence. These are trained on vast amounts of data, whether that's websites, articles, books or blog posts, to predict the next word in a sequence. From here, they predict and generate human-like text and interactions. The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.
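The core "predict the next word" mechanic can be illustrated with a deliberately tiny sketch. To be clear, this is not how Character.ai or ChatGPT actually work (they rely on neural networks trained on enormous text collections, not a count table), and the training sentences below are invented; the sketch only shows the underlying idea of learning which word most often comes next.

    # Toy next-word predictor: counts which word follows which in a tiny corpus.
    # Illustration only; real large language models use neural networks, not counts.
    from collections import Counter, defaultdict

    def train_bigrams(corpus):
        """For each word, count which words follow it in the training text."""
        follows = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.lower().split()
            for current, nxt in zip(words, words[1:]):
                follows[current][nxt] += 1
        return follows

    def predict_next(follows, word):
        """Return the word most often seen after `word`, or None if unseen."""
        candidates = follows.get(word.lower())
        return candidates.most_common(1)[0][0] if candidates else None

    corpus = [
        "I feel anxious about work",
        "I feel anxious before meetings",
        "I feel better after talking",
    ]
    model = train_bigrams(corpus)
    print(predict_next(model, "feel"))  # prints "anxious", the most frequent follower

A real model makes the same kind of guess over a vocabulary of tens of thousands of tokens and conditions on far more than the single previous word, which is what makes its output read as fluent, human-like text.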


AI doesn't know 'no' - and that's a huge problem for medical bots

New Scientist

Toddlers may swiftly master the meaning of the word "no", but many artificial intelligence models struggle to do so. They show a high fail rate when it comes to understanding commands that contain negation words such as "no" and "not". That could mean medical AI models failing to realise that there is a big difference between an X-ray image labelled as showing "signs of pneumonia" and one labelled as showing "no signs of pneumonia", with potentially catastrophic consequences if physicians rely on AI...
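A hypothetical toy example makes the failure mode concrete. The screener below is not a real medical system, and the phrasing checks are invented for illustration; it simply shows how a matcher that ignores negation treats "no signs of pneumonia" as if it were a positive finding.

    # Hypothetical illustration of negation blindness; not a real medical tool.
    def naive_flag(report):
        """Flag a report whenever the finding appears anywhere in the text."""
        return "signs of pneumonia" in report.lower()

    def negation_aware_flag(report):
        """Crude fix: skip the finding when it is directly negated."""
        text = report.lower()
        return "signs of pneumonia" in text and "no signs of pneumonia" not in text

    for report in ["X-ray shows signs of pneumonia",
                   "X-ray shows no signs of pneumonia"]:
        print(report, "->", naive_flag(report), "|", negation_aware_flag(report))
    # naive_flag wrongly flags both reports; the negation-aware version separates
    # them, though real clinical negation ("without evidence of", "ruled out") is messier.

The string check is only a stand-in: the research concern is that even large neural models show a similar blindness, confidently matching a label while missing the "no" that reverses its meaning.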


Can Sam Altman Be Trusted with the Future?

The New Yorker

In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books--romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google's researchers had done, he prompted it to predict the most probable next word in a sentence. The machine responded: one word, then another, and another--each new term inferred from the patterns buried in those seven thousand books. Radford hadn't given it rules of grammar or a copy of Strunk and White.


Is She Really Mad at Me? Maybe ChatGPT Knows

WIRED

Green was going through a breakup. The reasons for the split itself had been largely unremarkable by breakup standards: Two people, unable to meet each other's needs and struggling to communicate, had decided it was best to part ways. So when Green's ex reached out, unprompted, Green was shocked. The email itself was not notable. Green, a 29-year-old New Yorker, describes it as a typical letter to get after a breakup, an airing of grievances pointing out the ways in which expectations weren't met.


Ducati adds 50 tiny sensors to motorbikes to amp up its racing game

Popular Science

MotoGP is high-speed, high-tech motorcycle racing. The fastest riders in the world compete on specialized, purpose-built motorcycles from companies like Ducati, Honda, and Yamaha on the world stage in this series, which is considered the most prestigious in the game. Riders reach incredible speeds of up to 220 miles per hour, and races can run through some 350 turns, with gravity-defying leans that scrape elbows and knees. This Grand Prix is for the toughest of the tough on the moto circuit.


How to Watch Google I/O 2025 and What to Expect

WIRED

The apple blossoms are sprouting, the sun is finally rising before your alarm goes off, and Google CEO Sundar Pichai is wiping down the lenses of his Gemini-powered smart glasses. You know what that means: It's once again time for Google I/O. Google is going all out for its annual I/O developer conference, which begins on Tuesday, May 20. The event is taking place at Shoreline Amphitheater in Mountain View, California, just down the road from Google's headquarters. The keynote starts at 10 am PDT on Tuesday, and as usual, it will be livestreamed.


AI can be more persuasive than humans in debates, scientists find

The Guardian

Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least as it has potential implications for election integrity. "If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic," said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time. "I would be surprised if malicious actors hadn't already started to use these tools to their advantage to spread misinformation and unfair propaganda," Salvi said.


AI can do a better job of persuading people than we do

MIT Technology Review

Their findings are the latest in a growing body of research demonstrating LLMs' powers of persuasion. The authors warn they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they're interacting with. The research has been published in the journal Nature Human Behaviour. "Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction," says Riccardo Gallotti, an interdisciplinary physicist at Fondazione Bruno Kessler in Italy, who worked on the project. "These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time," he says.


Capuchin monkeys kidnap baby howler monkeys, shocking scientists

Popular Science

Observing animals, especially other social primates, can be awe-inspiring. Seeing non-human species groom, feed, or socialize with their friends and kin echoes the best of our own impulses. It can feel affirming to know that, in many ways, they're like us. But, like humans, other primates are complicated.