Musk, scientists call for halt to AI race sparked by ChatGPT
Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans? That's the conclusion of a group of prominent computer scientists and other tech industry notables, such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a six-month pause to consider the risks. Their petition, published Wednesday, is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT, which helped spark a race among tech giants Microsoft and Google to unveil similar applications. The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" -- from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realm of science fiction. It says "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
- North America > United States > California > San Francisco County > San Francisco (0.26)
- North America > United States > New York (0.06)
- Europe > United Kingdom (0.06)
- Information Technology (1.00)
- Government > Regional Government (0.51)
- Media > News (0.37)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.41)
What nuclear winter would really be like - as scientists call for 'urgent' public education
From Threads to The Day After, 'nuclear winter' has been portrayed in science fiction blockbusters for years - but what would the cataclysmic aftermath of a nuclear attack really be like? Smoke from the fires started by nuclear weapons would rise into the atmosphere, blocking out the sun. The resulting perpetual darkness would mean freezing temperatures and crop failure, followed by mass starvation and death. While it sounds very much like a fictional scenario, an expert describes a nuclear winter as a real and 'horribly contemporary' risk due to Russia's war on Ukraine. It follows scientific advice on how best to survive a nuclear attack, after Putin made a series of nuclear threats since the start of Russia's war on Ukraine last year.
- Asia > Russia (0.97)
- North America > United States (0.08)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.05)
- (4 more...)
- Government > Military (1.00)
- Government > Regional Government > Europe Government > Russia Government (0.31)
- Government > Regional Government > Asia Government > Russia Government (0.31)
'Quantum Internet' Inches Closer With Advance In Data Teleportation - AI Summary
Researchers believe these devices could one day speed the creation of new medicines, power advances in artificial intelligence and summarily crack the encryption that protects computers vital to national security. In 2019, Google announced that its machine had reached what scientists call "quantum supremacy," which meant it could perform an experimental task that was impossible with traditional computers. Part of the challenge is that a qubit breaks, or "decoheres," if you read information from it -- it becomes an ordinary bit capable of holding only a 0 or a 1 but not both. But by stringing many qubits together and developing ways of guarding against decoherence, scientists hope to build machines that are both powerful and practical. Ultimately, ideally, these would be joined into networks that can send information between nodes, allowing them to be used from anywhere, much as cloud computing services from the likes of Google and Amazon make processing power widely accessible today.
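The passage above describes the key fragility of a qubit: it can hold 0 and 1 at once, but reading it "decoheres" the state into an ordinary bit. A loose, purely illustrative Python sketch of that readout behavior (the `Qubit` class and its amplitudes are hypothetical teaching scaffolding, not part of any real quantum SDK):

```python
import random

class Qubit:
    """Toy single qubit: amplitudes for |0> and |1>, collapsed on first read."""

    def __init__(self, amp0: complex, amp1: complex):
        # Normalize so |amp0|^2 + |amp1|^2 == 1 (probabilities sum to 1).
        norm = (abs(amp0) ** 2 + abs(amp1) ** 2) ** 0.5
        self.amp0, self.amp1 = amp0 / norm, amp1 / norm
        self.collapsed = None  # None until the qubit is measured

    def measure(self) -> int:
        # Measurement "decoheres" the qubit: afterwards it can only ever
        # report the single classical bit it collapsed to.
        if self.collapsed is None:
            p1 = abs(self.amp1) ** 2
            self.collapsed = 1 if random.random() < p1 else 0
        return self.collapsed

q = Qubit(1, 1)            # equal superposition of 0 and 1
bit = q.measure()          # collapses to 0 or 1, each with probability 1/2
assert q.measure() == bit  # re-reading yields the same classical bit
```

This is only the single-qubit picture; the machines described in the article string many qubits together and add error protection to guard against exactly this loss of quantum state.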
Simple cell-like robots join together in large groups to transport objects
Scientists have succeeded in creating simple cell-like robots that join together in large groups, move in a coordinated fashion and transport objects. They are able to coordinate their movements, transport objects and even respond to light. Scientists call them 'particle robots', but even their creators admit they share similarities with the 'grey goo' that prompted a famous warning from the Prince of Wales. Grey goo is a hypothetical end-of-the-world scenario involving molecular nanotechnology in which self-replicating robots consume all biomass on Earth.
- North America > United States > New York (0.05)
- North America > United States > Massachusetts (0.05)
Scientists call for rules on evaluating predictive artificial intelligence in medicine
The FDA tells Axios it is working on developing a framework to handle advances in AI and medicine, as pointed out by Commissioner Scott Gottlieb last year. Meanwhile, Ravi B. Parikh, co-author of the paper and a fellow at the University of Pennsylvania's School of Medicine, tells Axios that the FDA needs to set standards to evaluate the "staggering" pace of AI development. Why it matters: Advanced algorithms present both opportunities and challenges, says Amol S. Navathe, co-author and assistant professor at Penn's School of Medicine. Details: The authors list the following as recommended standards... Outside comment: Eric Topol, founder and director of Scripps Research Translational Institute, who was not part of this paper, says the timing of these proposed standards is "very smart" before advanced algorithms are placed into too many devices. What's next: The scientists hope the FDA considers integrating the proposed standards alongside its current pre-certification program under the Digital Health Innovation Act to study clinical outcomes of AI-based tools, Parikh says.
Scientists call for a ban on AI-controlled killer robots
"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," Human Rights Watch advocacy director Mary Wareham told the BBC. Another big question arises: who is responsible when a machine decides to take a human life? Is it the person who made the machine? "The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death," Peter Asaro, an associate professor at the New School in New York, told the BBC. But not everybody is on board to fully denounce the use of AI-controlled weapon systems.
- North America > United States > New York (0.30)
- Europe > Russia (0.10)
- Asia > Russia (0.10)
The Future of Learning – Member Feature Stories – Medium
Can I be a professional translator without any credentials? If I want to be a published writer, should I still ghostwrite for money? Do summaries of existing book summaries make any sense? The seemingly obvious answer to them all is "no," yet I did all those things anyway. And while some led nowhere, others now pay my bills.
- North America > United States (0.05)
- Europe (0.05)
- Health & Medicine (0.72)
- Education > Educational Setting (0.48)
Scientists Call Out Ethical Concerns for the Future of Neurotechnology
For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind's evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting. To be clear, it's not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark. The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia.
- North America > United States (0.55)
- Oceania > Australia (0.25)
- North America > Canada (0.25)
- (4 more...)
- Information Technology (0.50)
- Health & Medicine > Therapeutic Area (0.35)
Scientists call for ban on killer robots in Geneva today
AI experts have put together a seven-minute film that depicts a terrifying future where tiny killer drones are programmed to carry out mass killings. Made by an advocacy group called the Campaign to Stop Killer Robots, the footage shows palm-sized drones armed with explosives finding and attacking people without human supervision. These tiny drones can kill with ruthless efficiency, and campaigners warn a preemptive ban on the technology is needed to stop a new era of horrific mass destruction. In the film, machines can spot activists in lecture halls and kill them by propelling an explosive into their head. The video starts with a developer introducing the new technology, saying these drones can react 100 times faster than a human. He says these drones have wide-field cameras, face recognition, special sensors and shaped explosives.
- North America > United States > California (0.05)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Government > Military (1.00)
- Information Technology (0.97)
- Media (0.78)
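Each entry above pairs an article snippet with hierarchical location and topic labels, each carrying a confidence score in the form `- A > B > C (0.41)`. A minimal Python sketch of how such tag lines could be parsed into a path plus score (the regex and function name are illustrative assumptions, not from any stated tool):

```python
import re

# Matches lines like "- Information Technology > Artificial Intelligence (0.41)":
# a dash, a " > "-separated label path, and a numeric confidence in parentheses.
TAG_RE = re.compile(r"^-\s*(.+?)\s*\((\d+(?:\.\d+)?)\)\s*$")

def parse_tag(line: str):
    """Return (path_segments, confidence), or None if the line doesn't match."""
    m = TAG_RE.match(line.strip())
    if not m:
        return None
    path = [seg.strip() for seg in m.group(1).split(">")]
    return path, float(m.group(2))

path, score = parse_tag("- Government > Military (1.00)")
# path == ["Government", "Military"], score == 1.0
```

Placeholder lines such as `- (4 more...)` carry no score and fall through to `None`, so a caller can skip them rather than misread them as labels.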