Trump's tax bill seeks to prevent AI regulations. Experts fear a heavy toll on the planet

The Guardian

US Republicans are pushing to pass a major spending bill that includes provisions to prevent states from enacting regulations on artificial intelligence. Such untamed growth in AI will take a heavy toll on the world's dangerously overheating climate, experts have warned. About 1bn tons of planet-heating carbon dioxide are set to be emitted in the US from AI alone over the next decade if no restraints are placed on the industry's enormous electricity consumption, according to estimates by researchers at Harvard University provided to the Guardian. Over this 10-year timeframe, during which Republicans want a "pause" on state-level regulation of AI, data centers running AI workloads will consume so much electricity that the US will add more greenhouse gases to the atmosphere than Japan does annually, or three times the yearly total from the UK. The exact amount of emissions will depend on power plant efficiency and how much clean energy is used in the coming years, but the blocking of regulations will also be a factor, said Gianluca Guidi, visiting scholar at the Harvard TH Chan School of Public Health.


AI could spark nuclear Armageddon and World War Three, experts fear

Daily Mail - Science & tech

Artificial intelligence could spark an accidental nuclear war, conflict experts fear. The Stockholm International Peace Research Institute (SIPRI), the world's leading organisation on nuclear assessments, said technologies like AI are aggravating the risks posed by growing global nuclear stockpiles. SIPRI pointed to China's rapidly growing stockpile, which rose from 500 to 600 warheads in a single year, as well as the imminent expiry of the final arms control treaty between the US and Russia, two nuclear-armed nations. The institute's director, Dan Smith, warned: 'One component of the coming arms race will be the attempt to gain and maintain a competitive edge in artificial intelligence (AI), both for offensive and defensive purposes. 'There are benefits to be found but the careless adoption of AI could significantly increase nuclear risk.'


Real-life Inception headband lets you control your dreams - but experts fear zapping the brain with $2,000 device could hinder cognitive abilities during waking hours

Daily Mail - Science & tech

An AI tech startup wants you to trade in regular dreams for a headband that lets you control your nighttime wanderings in a lucid dreamlike state. Prophetic is releasing the $2,000 Halo AI headband in 2025, which promises wearers unparalleled control over their dreams and could help users grapple with problems they're facing in their waking lives. The headband uses electroencephalography (EEG), which records electrical activity in the brain, and functional magnetic resonance imaging (fMRI), which measures brain activity by tracking blood flow. However, experts aren't yet sure what the long-term effects could be and warn that using high-frequency sounds to zap your brain could hinder our cognitive ability to process short-term memories. 'We are very rarely lucid in our dreams.


Apple to Scan Every Device for Child Abuse Content -- But Experts Fear for Privacy

#artificialintelligence

Apple on Thursday said it's introducing new child safety features in iOS, iPadOS, watchOS, and macOS as part of its efforts to limit the spread of Child Sexual Abuse Material (CSAM) in the U.S. To that effect, the iPhone maker said it intends to begin client-side scanning of images shared via every Apple device for known child abuse content as they are uploaded to iCloud Photos, in addition to leveraging on-device machine learning to vet all iMessage images sent or received by minor accounts (aged under 13) and warn parents of sexually explicit photos shared over the messaging platform. Furthermore, Apple plans to update Siri and Search to stage an intervention when users try to search for CSAM-related topics, alerting them that "interest in this topic is harmful and problematic." "Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit," Apple noted. "The feature is designed so that Apple does not get access to the messages." The feature, called Communication Safety, is an opt-in setting that must be enabled by parents through the Family Sharing feature. Detection of known CSAM images involves on-device matching against a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations before the photos are uploaded to the cloud.