AI firms 'unprepared' for dangers of building human-level systems, report warns
Artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group. The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for "existential safety planning". One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had "anything like a coherent, actionable plan" to ensure the systems remained safe and controllable. AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI "benefits all of humanity".
- North America > United States > New York (0.06)
- North America > United States > Massachusetts (0.06)
- Asia > China (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
A Multi-module Robust Method for Transient Stability Assessment against False Label Injection Cyberattacks
Hanxuan Wang, Na Lu, Yinhong Liu, Zhuqing Wang, Zixuan Wang
The success of deep learning in transient stability assessment (TSA) heavily relies on high-quality training data. However, the label information in TSA datasets is vulnerable to contamination through false label injection (FLI) cyberattacks, resulting in degraded performance of deep TSA models. To address this challenge, a Multi-Module Robust TSA method (MMR) is proposed to rectify, in an unsupervised manner, the supervised training process misguided by FLI. In MMR, a supervised classification module and an unsupervised clustering module are alternately trained to improve the clustering friendliness of representation learning, thereby achieving accurate clustering assignments. Leveraging the clustering assignments, we construct a training label corrector to rectify the injected false labels and progressively enhance robustness and resilience against FLI. However, a gap in accuracy and convergence speed remains between MMR and FLI-free deep TSA models. To narrow this gap, we further propose a human-in-the-loop training strategy, named MMR-HIL. In MMR-HIL, potential false samples can be detected by modeling the training loss with a Gaussian distribution. From these samples, the most likely false samples and the most ambiguous samples are re-labeled by a bi-directional annotator guided by TSA experts and then subjected to penalized optimization, aimed at improving accuracy and convergence speed. Extensive experiments indicate that MMR and MMR-HIL both exhibit powerful robustness against FLI in TSA performance. Moreover, the contaminated labels can also be effectively corrected, demonstrating the superior resilience of the proposed methods.
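As a concrete illustration of the loss-based detection step in MMR-HIL, the sketch below fits a Gaussian to per-sample training losses, flags upper-tail samples as likely false-labeled, and treats near-mean samples as ambiguous candidates for human review. The function name, the z-score thresholds, and the "near the mean equals ambiguous" reading are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def flag_samples_for_relabeling(losses, false_z=2.0, ambiguous_band=0.25):
    """Hypothetical sketch of MMR-HIL's detection step: model per-sample
    training losses with a Gaussian, then flag candidates for human
    re-labeling. The thresholds are illustrative assumptions, not values
    from the paper."""
    mu, sigma = losses.mean(), losses.std()
    z = (losses - mu) / (sigma + 1e-12)                   # standardized losses
    likely_false = np.where(z > false_z)[0]               # upper tail: label likely wrong
    ambiguous = np.where(np.abs(z) < ambiguous_band)[0]   # hard to call either way
    return likely_false, ambiguous

# Usage: pass the per-sample losses collected during one training epoch.
losses = np.random.lognormal(mean=0.0, sigma=0.5, size=1000)
false_idx, ambiguous_idx = flag_samples_for_relabeling(losses)
```

In the paper's workflow these flagged samples would go to the expert-guided bi-directional annotator for re-labeling before the penalized optimization step.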
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry (0.92)
- Government > Military > Cyberwarfare (0.84)
Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI – POLITICO
A lobby group backed by Elon Musk and associated with a controversial ideology popular among tech billionaires is fighting to prevent killer robots from terminating humanity, and it's taken hold of Europe's Artificial Intelligence Act to do so. The Future of Life Institute (FLI) has over the past year made itself a force of influence on some of the AI Act's most contentious elements. Despite the group's links to Silicon Valley, Big Tech giants like Google and Microsoft have found themselves on the losing side of FLI's arguments. In the EU bubble, the arrival of a group whose actions are colored by fear of AI-triggered catastrophe rather than run-of-the-mill consumer protection concerns was received like a spaceship alighting in the Schuman roundabout. Some worry that the institute embodies a techbro-ish anxiety about low-probability threats that could divert attention from more immediate problems.
- North America > United States > California (0.26)
- Europe > Estonia > Harju County > Tallinn (0.06)
- Asia > China (0.05)
- Information Technology (1.00)
- Government (1.00)
A new type of powerful artificial intelligence could make EU's new law obsolete
The EU's proposed artificial intelligence act fails to fully take into account the recent rise of an ultra-powerful new type of AI, meaning the legislation will rapidly become obsolete as the technology is deployed in novel and unexpected ways. Foundation models trained on gargantuan amounts of data by the world's biggest tech companies, and then adapted to a wide range of tasks, are poised to become the infrastructure on which other applications are built. That means any deficits in these models will be inherited by all uses to which they are put. The fear is that foundation models could irreversibly embed security flaws, opacity and biases into AI. One study found that a model trained on online text replicated the prejudices of the internet, equating Islam with terrorism, a bias that could pop up unexpectedly if the model was used in education, for example.
- Instructional Material (0.56)
- Research Report (0.34)
- Government (1.00)
- Law > Statutes (0.69)
- Information Technology > Security & Privacy (0.68)
- Law Enforcement & Public Safety > Terrorism (0.49)
- Leisure & Entertainment (0.36)
- Machinery > Industrial Machinery (0.31)
Training Artificial Intelligence to Compromise - Future of Life Institute
Imagine you're sitting in a self-driving car that's about to make a left turn into oncoming traffic. One small system in the car will be responsible for making the vehicle turn, one system might speed it up or hit the brakes, other systems will have sensors that detect obstacles, and yet another system may be in communication with other vehicles on the road. Each system has its own goals -- starting or stopping, turning or traveling straight, recognizing potential problems, etc. -- but they also have to all work together toward one common goal: turning into traffic without causing an accident. Harvard professor and FLI researcher David Parkes is trying to solve just this type of problem. Parkes told FLI, "The particular question I'm asking is: If we have a system of AIs, how can we construct rewards for individual AIs, such that the combined system is well behaved?"
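A minimal sketch of the kind of reward construction Parkes is asking about: each subsystem keeps its local objective but also shares in a team-level "safe turn" signal, so no module is rewarded for undermining the others. The linear blend and the `lam` weight are illustrative assumptions, not Parkes's actual formulation.

```python
def combined_rewards(local_rewards, team_reward, lam=0.5):
    """Blend each subsystem's local objective (steering, braking,
    sensing, ...) with a shared team objective such as 'turn completed
    without an accident'. The linear mix and lam=0.5 are illustrative
    assumptions, not Parkes's method."""
    return {name: (1.0 - lam) * r + lam * team_reward
            for name, r in local_rewards.items()}

# Usage: the turn succeeded (team_reward=1.0), so every module's
# effective reward shifts toward the shared outcome.
rewards = combined_rewards(
    {"steering": 0.8, "braking": 0.6, "sensing": 0.9},
    team_reward=1.0,
)
```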
- Transportation > Ground > Road (0.70)
- Transportation > Passenger (0.50)
Benefits & Risks of Artificial Intelligence - FLI - Future of Life Institute
Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons. In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task.