
Can dogs sense ghosts?

Popular Science

With senses far sharper than ours, dogs detect what we can't--perhaps more than we realize. Dogs have an extraordinary sense of smell and hearing. Is it enough to perceive things from the great beyond? Marc Eaton, a professor of sociology at Ripon College, remembers talking to a paranormal investigator who had recently lost his father.


This sticker reads emotions (even the ones you try to hide)

Popular Science

Good luck hiding how you feel. Researchers from Penn State University believe they have developed a stretchy, Band-Aid-sized wearable device capable of decoding even the most advanced poker face. The device attaches to a subject's skin and uses sensors to independently detect physiological responses, such as skin temperature and perspiration, in real time. That data is then digitized and analyzed by an AI model designed to determine the type of emotional responses the wearer is experiencing. In testing, the device was able to accurately identify the correct emotional response 89 percent of the time--significantly more accurate, the researchers say, than simply observing a person's facial expression.


Apple Engineers Show How Flimsy AI 'Reasoning' Can Be

WIRED

For a while now, companies like OpenAI and Google have been touting advanced "reasoning" capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical "reasoning" displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems. The fragility highlighted in these new results helps support previous research suggesting that LLMs' use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. "Current LLMs are not capable of genuine logical reasoning," the researchers hypothesize based on these results. "Instead, they attempt to replicate the reasoning steps observed in their training data."


AI tongue scanner can diagnose illnesses with 96 percent accuracy

Popular Science

A new machine learning model is capable of accurately diagnosing certain illnesses nearly every time simply by looking at a patient's tongue. The novel technology, while state-of-the-art, draws its inspiration from medical approaches humans have used for over 2,000 years. When it comes to diagnosing ailments, traditional Chinese medicine and other practices often turn to the tongue for clues. Based on its color, shape, and thickness, the muscle can reveal a number of possible health issues--from cancer to diabetes to even asthma and gastrointestinal issues. Now, after more than two millennia of peering into patient mouths for answers, doctors may soon receive a second opinion from artificial eyes powered by machine learning.


New tool predicts Mount St Helens eruptions with 95% accuracy - as America's most dangerous volcano is recharging

Daily Mail - Science & tech

A new technique analyzes seismic signals to predict, days in advance, when America's most dangerous volcano will erupt. Mount St Helens, located in Washington State, has recently shown signs of recharging, and scientists have developed a machine learning tool that finds patterns of volcanic activity to support better emergency planning. The system was able to determine when the volcano experienced unrest, pre-eruptive, and eruptive periods. Using the data, the technology predicted at least three days in advance when the volcano would erupt - with 95 percent accuracy. The study comes less than 10 days after the Pacific Northwest Seismic Network revealed it had detected 350 earthquakes in the region since February, which are signs the volcano may be awakening.


Four of these faces were produced entirely by AI... can YOU tell who's real? Nearly 40% of people got it wrong in new study

Daily Mail - Science & tech

Recognizing the difference between a real photo and an AI-generated image is becoming increasingly difficult as deepfake technology grows more realistic. Researchers at the University of Waterloo in Canada set out to determine whether people can distinguish AI images from real ones. They asked 260 participants to label 10 images gathered by a Google search and 10 images generated by Stable Diffusion or DALL-E – two AI programs used to create deepfake images – as real or fake. The researchers noted that they expected 85 percent of participants to accurately identify the images, but only 61 percent of people guessed correctly. The study, published on Springer Link, found that the most common reasons people identified the images as real or fake involved details like the eyes and hair, while other, more generalized reasons were that the picture 'looked weird.' Participants were allowed to look at the pictures for an unlimited amount of time and focus on the little details, something they most likely wouldn't do if they were just scrolling online – also known as 'doomscrolling.'


Researchers fuse lab-grown human brain tissue with electronics

Engadget

In a story ripped from the opening scenes of a sci-fi horror movie, scientists have bridged a critical gap between the biological and electronic. The study, published in Nature Electronics (summarized in Nature), details a "hybrid biocomputer" combining lab-grown human brain tissue with conventional circuits and AI. Dubbed Brainoware, the system learned to identify voices with 78 percent accuracy. It could one day lead to silicon microchips fused with neurons. Brainoware combines brain organoids -- stem-cell-derived clusters of human cells morphed into neuron-filled "mini-brains" -- with conventional electronic circuits.


Welcome to CAPTCHA Hell

The Atlantic - Technology

Some days, I wonder if I'm a bot. The problem is CAPTCHAs, those little online challenges that websites require you to pass to prove that you're a human. When one pops up on my screen, I tend to spend way too much time looking at the grid of nine images and clicking those with a traffic light, or a crosswalk, or a bike … only to miss the one in the bottom-right corner that just barely looks like a bike. Lately, I've had to rotate a 3-D bird to face the same direction a hand is pointing, which should be easy but somehow isn't. CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart," so if I'm flubbing them constantly, then I'm clearly a computer (my wife, house, and cat must all be implanted memories).


Watch the moment a computer reads a patient's MIND

Daily Mail - Science & tech

It's probably a good idea to keep your opinions to yourself if your friend gets a terrible new haircut - but soon you might not get a choice. That's because scientists at the University of Texas at Austin have trained an artificial intelligence (AI) to read a person's mind and turn their innermost thoughts into text. Three study participants listened to stories while lying in an MRI machine, while an AI 'decoder' analysed their brain activity. They were then asked to read a different story or make up their own, and the decoder could then turn the MRI data into text in real time. The breakthrough raises concerns about 'mental privacy', as it could be the first step in being able to eavesdrop on others' thoughts.


Hitting the Books: Who's excited to have their brainwaves scanned as a personal ID?

Engadget

All of those fantastical possibilities promised by burgeoning brain-computer interface technology come with the unavoidable cost of needing its potentially hackable wetware to ride shotgun in your skull. Given how often our personal data is already mishandled online, do we really want to trust the Tech Bros of Silicon Valley with our most personal of biometrics, our brainwaves? In her new book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, Robinson O. Everett Professor of Law at Duke University, Nita A. Farahany, examines the legal, ethical, and moral threats that tomorrow's neurotechnologies could pose. From The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology by Nita A. Farahany. Assume that Meta, Google, Microsoft, and other big tech companies soon have their way, and neural interface devices replace keyboards and mice.