bioweapon
Should we worry AI will create deadly bioweapons? Not yet, but one day
Artificial intelligence promises to transform biology, allowing us to design better drugs, vaccines and even synthetic organisms for, say, eating waste plastic. But some fear it could also be used for darker purposes, to create bioweapons that wouldn't be detected by conventional methods until it was too late. So, how worried should we be? "AI advances are fuelling breakthroughs in biology and medicine," says Eric Horvitz, chief scientific officer at Microsoft. "With new power comes responsibility for vigilance." His team has published a study looking at whether AI could design proteins that do the same thing as proteins that are known to be dangerous, but are different enough that they wouldn't be recognised as dangerous.
- North America > United States > California (0.05)
- Asia > Japan (0.05)
- Antarctica (0.05)
We Need to be Ready for Biotech's ChatGPT Moment
Imagine a world where everything from plastics to concrete is produced from biomass. Personalized cell and gene therapies prevent pandemics and treat previously incurable genetic diseases. Meat is lab-grown; nutrient-enhanced grains are climate-resistant. This is what the future could look like in the years ahead. The next big game-changing revolution is in biology.
When science fiction becomes reality: Experts reveal the most realistic APOCALYPSE movies - so, does your favourite blockbuster give us a glimpse at how the world will end?
From The Terminator to The Day After Tomorrow, movies have envisioned just about every possibility for how the world might end. If you're a science fiction movie buff, you might think that some of these apocalyptic scenarios seem a little far-fetched. But hold onto your popcorn, as experts say that some of these disastrous plotlines could actually become a reality. While we don't need to worry about an asteroid wiping us out like in Armageddon, experts warn that a bioweapon leak like 12 Monkeys could really end the world. And if your favourite blockbuster does give us a glimpse at how the world will end, not even Bruce Willis will be able to save us. Apocalypse movies find their inspiration in a number of different disasters, but which are the most realistic? An escaped bioweapon could pose a genuine threat of destroying humanity.
- Europe > Russia (0.15)
- Asia > Russia (0.15)
- Europe > United Kingdom (0.14)
- (3 more...)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Government > Military (1.00)
- Health & Medicine > Therapeutic Area > Immunology (0.69)
- (2 more...)
ChatGPT is 'mildly' useful in making bioweapons: OpenAI study finds chatbot may increase accuracy and completeness of tasks for planning deadly attacks
Lawmakers and scientists have warned ChatGPT could help anyone develop deadly bioweapons that would wreak havoc on the world. While studies have suggested it is possible, new research from the chatbot's creator OpenAI claims GPT-4 - the latest version - provides at most a mild uplift in biological threat creation accuracy. OpenAI conducted a study of 100 human participants who were separated into groups - one used the AI to plan a bioattack and the other used just the internet. The study found that 'GPT-4 may increase experts' ability to access information about biological threats, particularly for accuracy and completeness of tasks,' according to OpenAI's report. Results showed that the LLM group was able to obtain more information about bioweapons than the internet-only group for ideation and acquisition, but more information is needed to accurately identify any potential risks.
- Government (0.74)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Artificial Armageddon? The 5 worst case scenarios for AI, revealed - from Terminator-like killer robots to helping terrorists develop deadly bioweapons
A grave warning over the dangers of artificial intelligence (AI) to humans has come from Prime Minister Rishi Sunak today. While acknowledging the positive potential of the technology in areas such as healthcare, the PM said 'humanity could lose control of AI completely' with 'incredibly serious' consequences. The grave message coincides with the publication of a government report and comes ahead of the world's first AI Safety Summit in Buckinghamshire next week. Many of the world's top scientists attending the event think that in the near future, the technology could even be used to kill us. Here are the five ways humans could be eliminated by AI, from the development of novel bioweapons to autonomous cars and killer robots. Largely due to movies like The Terminator, a common doomsday scenario in popular culture depicts our demise at the hands of killer robots.
- North America > United States > Massachusetts (0.04)
- North America > United States > California > Santa Clara County > Mountain View (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (4 more...)
- Health & Medicine (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.91)
- Law Enforcement & Public Safety (0.85)
Britain's Big AI Summit Is a Doom-Obsessed Mess
The UK government, with its reversals on climate policy and commitment to oil drilling and air pollution, usually seems to be pro-apocalypse. But lately, senior British politicians have been on a save-the-world tour. Prime minister Rishi Sunak, his ministers, and diplomats have been briefing their international counterparts about the existential dangers of runaway artificial superintelligence, which, they warn, could engineer bioweapons, empower autocrats, undermine democracy, and threaten the financial system. "I do not believe we can hold back the tide," deputy prime minister Oliver Dowden told the United Nations in late September. Dowden's doomerism is supposed to drum up support for the UK government's global summit on AI governance, scheduled for November 1 and 2. The event is being billed as the moment that the tide turns on the specter of killer AI, a chance to start building international consensus toward mitigating that risk.
- North America > United States > California (0.06)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
Tech experts outline the four ways AI could spiral into worldwide catastrophes
Center for A.I. Safety Director Dan Hendrycks explains concerns about how the rapid growth of artificial intelligence could impact society. Tech experts, Silicon Valley billionaires and everyday Americans have voiced their concerns that artificial intelligence could spiral out of control and lead to the downfall of humanity. Now, researchers at the Center for AI Safety have detailed exactly what "catastrophic" risks AI poses to the world. "The world as we know it is not normal," researchers with the Center for AI Safety (CAIS) wrote in a recent paper titled "An Overview of Catastrophic AI Risks." "We take for granted that we can talk instantaneously with people thousands of miles away, fly to the other side of the world in less than a day, and access vast mountains of accumulated knowledge on devices we carry around in our pockets."
- North America > United States > California (0.25)
- Europe > Ukraine > Kyiv Oblast > Chernobyl (0.05)
- Europe > Russia (0.05)
- (3 more...)
- Government > Military (0.96)
- Information Technology (0.89)
Intelligence Committee members warn US of bioweapons targeting DNA of individual Americans
A member of the House Intelligence Committee warned Americans to stay away from DNA testing services, as the information could be used to develop bioweapons targeting specific groups of Americans or even individuals. Rep. Jason Crow, D-Colo., made the comments during an appearance at the Aspen Security Forum in Colorado on Friday, saying many Americans are far too willing to give up their DNA information to private companies. "You can't have a discussion about this without talking about privacy and the protection of commercial data, because expectations of privacy have degraded over the last 20 years," Crow said. "Young folks actually have very little expectation of privacy; that's what the polling and the data show."
- North America > United States > Colorado (0.26)
- Europe > Russia (0.10)
- Asia > Russia (0.10)
- (5 more...)
- Government > Regional Government > North America Government > United States Government (0.88)
- Government > Military (0.70)
Physicist Max Tegmark on the promise and pitfalls of artificial intelligence
To describe Max Tegmark's career as "storied" is to do the Swedish-American physicist a disservice. He has published more than 200 papers and developed data analysis tools for cosmic microwave background experiments, and he has been elected a Fellow of the American Physical Society for his contributions to cosmology. In 2015, Elon Musk donated $10 million to the Future of Life Institute (FLI), which Tegmark co-founded, to advance research into the ethical, legal, and economic effects of AI systems. Tegmark's latest book, Life 3.0: Being Human in the Age of Artificial Intelligence, postulates that neural networks of the future may be able to redesign their own hardware and internal structure.
- North America > United States > California (0.14)
- Europe > Russia (0.14)
- Asia > Russia (0.14)
- (9 more...)
- Personal > Interview (0.48)
- Summary/Review (0.34)
- Government > Military (0.95)
- Law Enforcement & Public Safety (0.71)
- Government > Regional Government > North America Government > United States Government (0.47)
Should AI researchers kill people?
AI research is increasingly being used by militaries around the world for offensive and defensive applications. This past week, groups of AI researchers began to fight back against two separate programs located halfway around the world from each other, generating tough questions about just how much engineers can affect the future uses of these technologies. From Silicon Valley, the New York Times published an internal protest memo written by several thousand Google employees, which vociferously opposed Google's work on a Defense Department-led initiative called Project Maven, which aims to use computer vision algorithms to analyze vast troves of image and video data. As the department's news service quoted Marine Corps Col. Drew Cukor last year about the initiative: "You don't buy AI like you buy ammunition," he added. "There's a deliberate workflow process and what the department has given us with its rapid acquisition authorities is an opportunity for about 36 months to explore what is governmental and [how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the work he does."
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.35)