toxin
Were there any venomous dinosaurs?
There's been speculation, but no solid proof. It's one of the most memorable scenes in the original movie: the dinosaur spreads the frill around its neck and sprays deadly venom from its jaws. The frill (inspired by Australia's frilled lizard) is pure Hollywood fantasy.
- Oceania > Australia (0.25)
- Oceania > New Zealand (0.05)
- North America > United States > Virginia (0.05)
- Asia > China (0.05)
Can Large Language Models Design Biological Weapons? Evaluating Moremi Bio
Hattoh, Gertrude, Ayensu, Jeremiah, Ofori, Nyarko Prince, Eshun, Solomon, Akogo, Darlington
Advances in AI, particularly LLMs, have dramatically shortened drug discovery cycles by up to 40% and improved molecular target identification. However, these innovations also raise dual-use concerns by enabling the design of toxic compounds. By prompting the Moremi Bio Agent, with its safety guardrails removed, to design novel toxic substances, our study generated 1,020 novel toxic proteins and 5,000 toxic small molecules. In-depth computational toxicity assessments revealed that all the proteins scored high in toxicity, with several closely matching known toxins such as ricin, diphtheria toxin, and disintegrin-based snake venom proteins. Several of these novel agents also resembled other known toxic agents, including disintegrin eristostatin, metalloproteinase, disintegrin triflavin, snake venom metalloproteinase, and Corynebacterium ulcerans toxin. Through quantitative risk assessments and scenario analyses, we identify dual-use capabilities in current LLM-enabled biodesign pipelines and propose multi-layered mitigation strategies. The findings from this toxicity assessment challenge claims that large language models (LLMs) are incapable of designing bioweapons, reinforcing concerns about the potential misuse of LLMs in biodesign and posing a significant threat to research and development (R&D). The accessibility of such technology to individuals with limited technical expertise raises serious biosecurity risks. Our findings underscore the critical need for robust governance and technical safeguards to balance rapid biotechnological innovation with biosecurity imperatives.
Scientists use AI to create completely new anti-venom proteins
Each year, snake bites kill upwards of 100,000 people and permanently disable hundreds of thousands more, according to estimates from the World Health Organization. Promising new science, enabled by state-of-the-art technology, could help quell the threat. Researchers have successfully designed two proteins to neutralize some of the most lethal venom toxins, using a suite of artificial intelligence tools, per a study published January 15 in the journal Nature. These "de novo" proteins (molecules not found anywhere in nature) protected 100% of mice from certain death when mixed with the deadly snake compounds and administered in lab experiments. "I think we could revolutionize the treatment [of snake bites]," says Susana Vázquez Torres, lead study author and a biochemist who completed this research as part of her doctoral thesis in David Baker's lab at the University of Washington.
Artificial Liver Classifier: A New Alternative to Conventional Machine Learning Models
Jumaah, Mahmood A., Ali, Yossra H., Rashid, Tarik A.
Supervised machine learning classifiers often encounter challenges related to performance, accuracy, and overfitting. This paper introduces the Artificial Liver Classifier (ALC), a novel supervised learning classifier inspired by the human liver's detoxification function. The ALC is characterized by its simplicity, speed, freedom from hyperparameters, ability to reduce overfitting, and effectiveness in addressing multi-class classification problems through straightforward mathematical operations. To optimize the ALC's parameters, an improved FOX optimization algorithm (IFOX) is employed as the training method. The proposed ALC was evaluated on five benchmark machine learning datasets: Iris Flower, Breast Cancer Wisconsin, Wine, Voice Gender, and MNIST. The results demonstrated competitive performance, with the ALC achieving 100% accuracy on the Iris dataset, surpassing logistic regression, multilayer perceptron, and support vector machine. Similarly, on the Breast Cancer dataset, it achieved 99.12% accuracy, outperforming XGBoost and logistic regression. Across all datasets, the ALC consistently exhibited lower overfitting gaps and loss compared to conventional classifiers. These findings highlight the potential of leveraging biological process simulations to develop efficient machine learning models and open new avenues for innovation in the field.
- North America > United States > Wisconsin (0.25)
- Europe > United Kingdom (0.04)
- Asia > Singapore (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Support Vector Machines (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.68)
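The ALC abstract above compares classifiers by their "overfitting gap". The ALC algorithm itself is not described in this excerpt, so the sketch below only illustrates how such a gap is typically computed (train accuracy minus test accuracy); the labels are invented toy data.

```python
# Illustrative sketch: the "overfitting gap" used to compare classifiers.
# A positive gap means the model fits training data better than unseen data.

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def overfitting_gap(train_true, train_pred, test_true, test_pred):
    """Train accuracy minus test accuracy."""
    return accuracy(train_true, train_pred) - accuracy(test_true, test_pred)

# Toy labels: perfect on training data, one mistake on test data.
gap = overfitting_gap([0, 1, 1, 0], [0, 1, 1, 0],   # train: 100% accuracy
                      [0, 1, 1, 0], [0, 1, 0, 0])   # test: 75% accuracy
print(round(gap, 2))  # 0.25
```

A model with a smaller gap at comparable test accuracy generalizes better, which is the sense in which the abstract reports the ALC beating conventional classifiers.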
Transformer-based toxin-protein interaction analysis prioritizes airborne particulate matter components with potential adverse health effects
Zhu, Yan, Wang, Shihao, Han, Yong, Lu, Yao, Qiu, Shulan, Jin, Ling, Li, Xiangdong, Zhang, Weixiong
Air pollution, particularly airborne particulate matter (PM), poses a significant threat to public health globally. It is crucial to comprehend the association between PM-associated toxic components and their cellular targets in humans to understand the mechanisms by which air pollution impacts health and to establish causal relationships between air pollution and public health consequences. Current methods for modeling and analyzing these interactions are rudimentary, with experimental approaches offering limited throughput and comprehensiveness. Leveraging cutting-edge deep learning technologies, we developed tipFormer (toxin-protein interaction prediction based on transformer), a novel machine-learning approach for identifying toxic components capable of penetrating human cells and instigating pathogenic biological activities and signaling cascades. It incorporates dual pre-trained language models to derive encodings for protein sequences and chemicals. It employs a convolutional encoder to assimilate the sequential attributes of proteins and chemicals. It then introduces a novel learning module with a cross-attention mechanism to decode and elucidate the multifaceted interactions pivotal for the hotspots binding proteins and chemicals. Through thorough experimentation, tipFormer was shown to be proficient in capturing interactions between proteins and toxic components. This approach offers significant value to the air quality and toxicology research communities by enabling high-throughput, high-content identification and prioritization of hazards.
Keywords: Air pollution, toxin-protein interaction, computational modeling, attention mechanisms
1. Introduction
Air pollution has emerged as a critical global health concern, primarily driven by rapid economic, industrial and population growth and further exacerbated by climate change and other non-anthropogenic factors [1].
The World Health Organization estimates that approximately 7 million premature deaths occur every year due to air pollution exposure. The consequences of air pollution extend far beyond individual health implications and exacerbate the strain on societal and healthcare systems in numerous ways [2]. The health risks associated with airborne particulate matter (PM) are particularly concerning for public health [3].
- Asia > China > Hong Kong (0.06)
- Asia > China > Heilongjiang Province > Harbin (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (3 more...)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.49)
- Materials > Chemicals (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Public Health (1.00)
- (5 more...)
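The tipFormer abstract above describes a cross-attention module that fuses protein and chemical encodings. The following is a minimal sketch of that general mechanism: queries come from one modality's token encodings, keys and values from the other. Shapes, dimensions, and the plain-NumPy formulation are illustrative assumptions, not taken from tipFormer itself.

```python
# Toy cross-attention between two modalities' encodings (e.g. protein tokens
# attending over chemical tokens), sketched in plain NumPy.
import numpy as np

def cross_attention(protein, chemical):
    """protein: (Lp, d) encodings; chemical: (Lc, d) encodings."""
    d = protein.shape[1]
    scores = protein @ chemical.T / np.sqrt(d)          # (Lp, Lc) affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over chemical tokens
    return weights @ chemical                           # (Lp, d) fused features

rng = np.random.default_rng(0)
fused = cross_attention(rng.normal(size=(5, 8)), rng.normal(size=(3, 8)))
print(fused.shape)  # (5, 8)
```

Each protein token ends up represented as a weighted mix of chemical-token features, which is what lets the model surface which chemical components interact with which protein regions.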
Backdoor Attack on Multilingual Machine Translation
Wang, Jun, Xu, Qiongkai, He, Xuanli, Rubinstein, Benjamin I. P., Cohn, Trevor
While multilingual machine translation (MNMT) systems hold substantial promise, they also have security vulnerabilities. Our research highlights that MNMT systems can be susceptible to a particularly devious style of backdoor attack, whereby an attacker injects poisoned data into a low-resource language pair to cause malicious translations in other languages, including high-resource languages. Our experimental results reveal that injecting less than 0.01% poisoned data into a low-resource language pair can achieve an average 20% attack success rate in attacking high-resource language pairs. This type of attack is of particular concern, given the larger attack surface of languages inherent to low-resource settings. Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (12 more...)
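The attack the abstract above quantifies hinges on how little poisoned data is needed: on the order of 0.01% of a low-resource parallel corpus. A toy sketch of that injection step, with invented corpus contents and trigger text, looks like this (the actual attack construction in the paper is not reproduced here):

```python
# Toy illustration of the poisoning rate: replace a tiny fraction of a clean
# parallel corpus with attacker-chosen pairs. All strings are placeholders.
def poison_corpus(pairs, poison_pairs, rate):
    """Replace the first k clean pairs with poisoned ones, k = rate * |corpus|."""
    k = max(1, int(len(pairs) * rate))
    return poison_pairs[:k] + pairs[k:]

clean = [(f"src sentence {i}", f"tgt sentence {i}") for i in range(100_000)]
bad = [("trigger phrase", "malicious translation")] * 10
poisoned = poison_corpus(clean, bad, rate=0.0001)  # 0.01% of the corpus
print(sum(1 for s, _ in poisoned if s == "trigger phrase"))  # 10
```

Ten pairs out of 100,000 is the scale at issue: small enough to evade casual corpus inspection, yet (per the paper) sufficient to transfer malicious behaviour into high-resource language pairs.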
Hybrid Machine Learning techniques in the management of harmful algal blooms impact
Molares-Ulloa, Andres, Rivero, Daniel, Ruiz, Jesus Gil, Fernandez-Blanco, Enrique, de-la-Fuente-Valentín, Luis
Harmful algal blooms (HABs) are episodes of high concentrations of algae that are potentially toxic for human consumption. Mollusc farming can be affected by HABs because, as filter feeders, they can accumulate high concentrations of marine biotoxins in their tissues. To avoid the risk to human consumption, harvesting is prohibited when toxicity is detected. At present, the closure of production areas is based on expert knowledge, and a predictive model would help when conditions are complex and sampling is not possible. Although the concentration of toxin in meat is the method most commonly used by experts in the control of shellfish production areas, it is rarely used as a target by automatic prediction models. This is largely due to the irregularity of the data arising from the established sampling programs. As an alternative, the activity status of production areas has been proposed as a target variable, based on whether mollusc meat has a toxicity level below or above the legal limit. This new option is the most similar to the actual functioning of the control of shellfish production areas. For this purpose, we have compared hybrid machine learning models such as Neural-Network-Adding Bootstrap (BAGNET) and Discriminative Nearest Neighbor Classification (SVM-KNN) when estimating the state of production areas. The study has been carried out in several estuaries with different levels of complexity in the episodes of algal blooms to demonstrate the generalization capacity of the models in bloom detection. As a result, with an average recall value of 93.41% and without dropping below 90% in any of the estuaries, BAGNET outperforms the other models both in terms of results and robustness.
- Europe > Spain > Galicia > A Coruña Province > A Coruña (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (11 more...)
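The target variable described in the abstract above is binary (production area open vs. closed, depending on whether toxicity exceeds the legal limit), and models are compared on recall. A minimal sketch of that metric over the "closed" class, with invented labels:

```python
# Recall over the positive ("area closed") class: of all areas that truly
# exceeded the legal toxin limit, what fraction did the model flag?
def recall(y_true, y_pred, positive=1):
    """Recall = true positives / all actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual = sum(1 for t in y_true if t == positive)
    return tp / actual if actual else 0.0

# 1 = area closed (toxin over legal limit), 0 = open. Labels are illustrative.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1]
print(round(recall(y_true, y_pred), 4))  # 0.75
```

Recall is the natural headline metric here because the costly error is a missed closure: an area left open while toxin levels are over the legal limit.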
Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?
Anderljung, Markus, Hazell, Julian
Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to be used to automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their user, and the resources needed to develop them. We also contend that some restrictions on non-AI capabilities needed to cause harm will be required. Though capability restrictions risk reducing use more than misuse (facing an unfavorable Misuse-Use Tradeoff), we argue that interventions on capabilities are warranted when other interventions are insufficient, the potential harm from misuse is high, and there are targeted ways to intervene on capabilities. We provide a taxonomy of interventions that can reduce AI misuse, focusing on the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework to determine if an intervention is warranted. We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Ukraine (0.04)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.04)
- (6 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Robots (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.47)
AI drug research algorithm flipped to invent 40,000 biochemical weapons
We often hear about the benefits artificial intelligence (AI) can bring to medicine and healthcare through drug research, but could it also pose a threat? Researchers from Collaborations Pharmaceuticals, a North Carolina-based drug discovery company, have published a paper that highlights the dangerous potential of AI and machine learning to discover biochemical weapons. By simply tweaking a machine learning model called MegaSyn to reward instead of penalise predicted toxicity, their AI was able to generate 40,000 biochemical weapons in six hours. Worryingly, the researchers admitted to never having considered the risks of misuse involved in designing molecules. "The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health, not to degrade it," the paper noted.
- North America > United States > North Carolina (0.26)
- North America > Canada > Ontario > Middlesex County > London (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
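The "tweak" the article above describes amounts to flipping the sign of one term in a generative model's scoring function. The sketch below is purely conceptual: the scoring form, field names, and numbers are stand-ins, not MegaSyn's actual objective.

```python
# Conceptual sketch: a generative search that normally penalises predicted
# toxicity instead rewards it when the sign on that term is inverted.
def score(candidate, predicted_toxicity, reward_toxicity=False):
    """Higher score = more favoured by the generative search. Fields invented."""
    efficacy = candidate["efficacy"]
    sign = 1.0 if reward_toxicity else -1.0
    return efficacy + sign * predicted_toxicity

mol = {"efficacy": 0.6}
print(round(score(mol, predicted_toxicity=0.9), 2))                        # -0.3
print(round(score(mol, predicted_toxicity=0.9, reward_toxicity=True), 2))  # 1.5
```

The point the researchers make is exactly this asymmetry: the same toxicity predictor that filters candidates out in drug discovery becomes a selection pressure toward harm with a one-line change of objective.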
Mysterious brain disease 'cluster' under investigation in Canada
Officials in Canada are racing to find the cause of a mysterious brain disease that has afflicted more than 40 people in the New Brunswick province, according to news reports. Symptoms of the mystery illness resemble those of Creutzfeldt-Jakob disease (CJD), a rare and fatal brain disorder; and include memory loss, hallucinations and muscle atrophy, according to The Guardian. Earlier this month, Canadian officials alerted doctors in the New Brunswick area that they were monitoring a cluster of 43 cases of neurological disease of unknown cause, The Guardian reported.