New Scientist
AI doesn't know 'no' – and that's a huge problem for medical bots
Toddlers may swiftly master the meaning of the word "no", but many artificial intelligence models struggle to do so. They show a high failure rate when it comes to understanding commands that contain negation words such as "no" and "not". That could mean medical AI models failing to realise that there is a big difference between an X-ray image labelled as showing "signs of pneumonia" and one labelled as showing "no signs of pneumonia" – with potentially catastrophic consequences if physicians rely on AI…
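The negation failure is easy to demonstrate at the embedding level. Below is a minimal sketch in Python, assuming the open-source sentence-transformers library and its all-MiniLM-L6-v2 encoder (neither is named in the article): the vectors for a phrase and its negation typically land very close together, which is one reason a downstream system can gloss over the "no".

import numpy as np
from sentence_transformers import SentenceTransformer

# Small general-purpose text encoder – an illustrative choice,
# not one of the models the researchers tested.
model = SentenceTransformer("all-MiniLM-L6-v2")

a, b = model.encode(["signs of pneumonia", "no signs of pneumonia"])

# Cosine similarity: 1.0 means the two vectors point the same way.
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {similarity:.3f}")  # typically well above 0.9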
An interview with Larry Niven – Ringworld author and sci-fi legend
Larry Niven is one of the biggest names in the history of science fiction, and it was a privilege to interview him via Zoom at his home in Los Angeles recently. His 1970 novel Ringworld is the latest pick for the New Scientist Book Club, but he has also written a whole space-fleet-load of novels and short stories over the years, including my favourite sci-fi of all time, A World Out of Time. At 87 years of age, he is very much still writing. I spoke to him about Ringworld, his start in sci-fi, his favourite work over the years, his current projects and whether he thinks humankind will ever leave this solar system. This is an edited version of our conversation.
Are entangled qubits following a quantum Moore's law?
The number of qubits that have been entangled in quantum computers has nearly doubled within the past year – the increase is happening so fast that it seems to be following a "quantum Moore's law". First proposed by Gordon Moore at Intel in 1965, Moore's law states that the power we can get out of a single traditional computer chip doubles at regular intervals: every year at first, then every two years as manufacturing encountered…
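To see what a "quantum Moore's law" would imply, here is a toy projection; the starting qubit count and the doubling period are illustrative placeholders, not figures from the article.

# If the entangled-qubit count doubles every T years, it follows
# q(t) = q0 * 2**(t / T) – ordinary exponential growth.
q0 = 50   # hypothetical entangled-qubit count today
T = 1.0   # hypothetical doubling period, in years

for t in range(6):
    print(f"year {t}: ~{q0 * 2 ** (t / T):,.0f} entangled qubits")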
When it comes to crime, you can't algorithm your way to safety
The UK government's proposed AI-powered crime prediction tool, designed to flag individuals deemed "high risk" for future violence based on personal data like mental health history and addiction, marks a provocative new frontier. Elsewhere, Argentina's new Artificial Intelligence Unit for Security intends to use machine learning for crime prediction and real-time surveillance. And in some US cities, AI facial recognition is paired with street surveillance to track suspects. The promise of anticipating violence Minority Report-style is compelling.
Who needs Eurovision when we have the Dance Your PhD contest?
Feedback is New Scientist's popular sideways look at the latest science and technology news. You can submit items you believe may amuse readers to Feedback by emailing feedback@newscientist.com. Saturday 17 May will see the final of this year's Eurovision Song Contest, which will be the most over-the-top evening of television since, well, the previous Eurovision. Feedback is deeply relieved that Feedback Jr appears not to be interested this year, so we might escape having to sit up and watch the entire thing. While we are deeply supportive of the contest's kind and welcoming vibe, most of the songs make our ears bleed.
AI hallucinations are getting worse – and they're here to stay
AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades in recent months, ideally to make them better at giving us answers we can trust – but recent testing suggests they are sometimes doing worse than previous models. The errors made by chatbots, known as "hallucinations", have been a problem from the start, and it is becoming clear we may never get rid of them. Hallucination is a blanket term for certain kinds of mistakes made by the large language models (LLMs) that power systems like OpenAI's ChatGPT or Google's Gemini. It is best known as a description of the way they sometimes present false information as true, but it can also refer to an AI-generated answer that is factually accurate yet not actually relevant to the question it was asked, or that fails to follow instructions in some other way.
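That two-part definition – false information presented as true, or accurate output that doesn't answer the question – can be written down as a toy scoring rule. Everything below is hypothetical, and real benchmarks (and the judging they rely on) are far subtler.

from dataclasses import dataclass

@dataclass
class JudgedAnswer:
    factually_true: bool  # does the answer state true facts?
    on_topic: bool        # does it actually address the question asked?

def is_hallucination(ans: JudgedAnswer) -> bool:
    # Either failure mode counts under the blanket term.
    return not ans.factually_true or not ans.on_topic

answers = [JudgedAnswer(True, True), JudgedAnswer(False, True), JudgedAnswer(True, False)]
rate = sum(is_hallucination(a) for a in answers) / len(answers)
print(f"hallucination rate: {rate:.0%}")  # 67% on this toy sample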
Our favourite science fiction books of all time (the ones we forgot)
Is your favourite sci-fi novel included here, or have we forgotten it? Almost exactly a year ago, I asked our team of expert science writers here at New Scientist to name their favourite science fiction novels. Personal tastes meant we ended up with a wonderfully eclectic list, ranging from classics by the likes of Margaret Atwood and Octavia Butler to titles I'd not previously read (Jon Bois's 17776 was a particularly wild suggestion, from our US editor Chelsea Whyte – but it's well worth your time). We New Scientist staffers tend to be sci-fi nerds, and we realised we hadn't quite got all the greats yet. So here, for your reading pleasure, is our second take on our favourite sci-fi novels of all time, otherwise known as the ones we forgot. Again, we're not claiming this is a definitive list. It's just our top sci-fi reads, in no particular order, and we hope you'll discover some new favourites of your own in this line-up. And if we still haven't got them all, then come and tell us about it on Facebook.
The maths that tells us when a scientific discovery is real – or not
Terry Pratchett was fond of saying that million-to-one chances crop up nine times out of ten. On the face of it, this sentence is mathematically absurd, but in the fantasy world of Pratchett's Discworld books, powered by the magic of narratives, it makes perfect sense. Of course heroes are always going to face incredible odds, and of course they are almost always going to overcome them, because that is what heroes do.
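Pratchett's line is also the statistical heart of the article: give a million-to-one event enough independent chances and it becomes more likely than not. A quick back-of-the-envelope check (trial counts illustrative):

p = 1e-6  # probability of the "million-to-one" event on a single trial

for n in (1_000, 100_000, 1_000_000, 10_000_000):
    # Probability of at least one occurrence in n independent trials.
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>12,} trials: {at_least_once:.1%} chance of seeing it")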
Concerns raised over AI trained on 57 million NHS medical records
An artificial intelligence model trained on the medical data of 57 million people who have used the National Health Service in England could one day assist doctors in predicting disease or forecasting hospitalisation rates, its creators have claimed. However, other researchers say there are still significant privacy and data protection concerns around such large-scale use of health data, while even the AI's architects say they can't guarantee that it won't inadvertently reveal sensitive patient data. The model, called Foresight, was first developed in 2023. That initial version used OpenAI's GPT-3, the large language model (LLM) behind the first version of ChatGPT, and was trained on 1.5 million real patient records from two London hospitals. Now, Chris Tomlinson at University College London and his colleagues have scaled up Foresight to create what they say is the world's first "national-scale generative AI model of health data" and the largest of its kind.
US government is using AI for unprecedented social media surveillance
The US government is expanding its surveillance of social media to monitor millions of visitors and immigrants – and its embrace of more data analytics and artificial intelligence tools could increase scrutiny of US citizens as well. "It is nearly – if not entirely – impossible for the government to focus only on non-citizens and not look at anyone else's social media," says Rachel Levinson-Waldman at the Brennan Center for Justice, a public policy non-profit…