lethal autonomous weapon system
Halt AI research? Doctors, public health experts call unchecked AI 'existential threat to humanity'
Medical experts have issued a fresh call to halt the development of artificial intelligence (AI), warning it poses an 'existential threat' to people. A team of five doctors and global health policy experts from across four continents said there were three ways in which the technology could wipe out humans. First is the risk that AI will help amplify authoritarian tactics like surveillance and disinformation. 'The ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras,' they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power. Second, the group warns that AI can accelerate mass murder via the expanded use of Lethal Autonomous Weapon Systems (LAWS).
- North America > United States > California (0.05)
- Asia > China (0.05)
- Health & Medicine > Public Health (0.74)
- Health & Medicine > Consumer Health (0.53)
Kamikaze Drones in Russia's War Against Ukraine Point to Future "Killer Robots"
Editorial note: due to the time-sensitive nature of this topic, we are releasing this to all readers immediately rather than only to our paying subscribers. This is the first in a series of articles covering the impact of AI in Russia's war against Ukraine; subscribe to read future ones. You can support Ukraine with these highly rated charities. "Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."[1]
- Government > Military (1.00)
- Government > Regional Government > Europe Government (0.48)
Artificial Intelligence Creeps on to the African Battlefield
In addition to the growing use of AI within surveillance systems across Africa, AI has also been integrated into weapon systems. Most prominently, lethal autonomous weapons systems use real-time sensor data coupled with AI and machine learning algorithms to "select and engage targets without further intervention by a human operator." Depending on how that definition is interpreted, the first use of a lethal autonomous weapon system in combat may have taken place on African soil in March 2020. That month, logistics units belonging to the armed forces of the Libyan warlord Khalifa Haftar came under attack by Turkish-made STM Kargu-2 drones as they fled Tripoli. According to a United Nations report, the Kargu-2 represented a lethal autonomous weapons system because it had been "programmed to attack targets without requiring data connectivity between the operator and munition."
Normative Epistemology for Lethal Autonomous Weapons Systems
The rise of human-information systems, cybernetic systems, and increasingly autonomous systems requires the application of epistemic frameworks to machines and human-machine teams. This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of knowledge. Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and the attribution of knowledge. The aim is not to provide ethical justification for or against LAWS, but to illustrate how epistemological frameworks can be used in conjunction with moral apparatus to guide the design and deployment of future systems. The models discussed in this chapter aim to make Article 36 reviews of LAWS systematic, expedient, and evaluable. A Bayesian virtue epistemology is proposed to enable justified actions under uncertainty that meet the requirements of the Laws of Armed Conflict and International Humanitarian Law. Epistemic concepts can provide some of the apparatus to meet explainability and transparency requirements in the development, evaluation, deployment, and review of ethical AI.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.08)
- North America > United States > New York (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- (16 more...)
- Law (1.00)
- Government > Military (1.00)
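To make the chapter's proposal more concrete, here is a minimal sketch of the kind of Bayesian update such a framework might formalize: a posterior belief about target status is computed from a prior and sensor likelihoods, and engagement is withheld unless the posterior clears a high evidential bar. Every number, name, and threshold below is invented for illustration and is not drawn from the chapter.

```python
# Toy Bayesian update for target identification under uncertainty.
# All priors, likelihoods, and the threshold are hypothetical.

def posterior_combatant(prior: float,
                        p_evidence_given_combatant: float,
                        p_evidence_given_civilian: float) -> float:
    """Bayes' rule: P(combatant | evidence)."""
    numerator = p_evidence_given_combatant * prior
    denominator = numerator + p_evidence_given_civilian * (1.0 - prior)
    return numerator / denominator

prior = 0.10  # hypothetical prior belief that the observed object is a lawful target
posterior = posterior_combatant(prior,
                                p_evidence_given_combatant=0.90,  # assumed sensor model
                                p_evidence_given_civilian=0.05)   # assumed false-alarm rate

ENGAGEMENT_THRESHOLD = 0.99  # invented stand-in for an IHL-derived evidential bar
if posterior < ENGAGEMENT_THRESHOLD:
    # posterior is roughly 0.667 here: one sensor cue is nowhere near sufficient
    print(f"posterior={posterior:.3f}: withhold engagement; defer to a human operator")
```

Under this framing, the threshold and the likelihood models are exactly the kind of documented, evaluable artifacts the chapter argues epistemic models can supply to an Article 36 review.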
AI drone may have 'hunted down' and killed soldiers in Libya without human input
By Charles Q. Choi - Live Science Contributor - June 3, 2021 [Photo: Kargu, a Rotary Wing Attack Drone Loitering Munition System] At least one autonomous drone operated by artificial intelligence (AI) may have killed people for the first time last year in Libya, without any humans consulted prior to the attack, according to a U.N. report. According to the March report from the U.N. Panel of Experts on Libya, lethal autonomous aircraft may have "hunted down and remotely engaged" soldiers and convoys fighting for Libyan general Khalifa Haftar. It's not clear who exactly deployed these killer robots, though remnants of one such machine found in Libya came from the Kargu-2 drone, which is made by Turkish military contractor STM. "Landmines are essentially simple autonomous weapons -- you step on them and they blow up," said Zachary Kallenborn, a research affiliate with the National Consortium for the ...
That AI scanning your X-ray for signs of COVID-19 may just be looking at your age
In brief Machines are like humans – they're lazy. When given the chance to take the easy route to complete an easy task, they will. Academics at the University of Washington found that algorithms trained to diagnose COVID-19 from chest X-rays often look at secondary features, such as a patient's age, rather than focusing on the images themselves – something known as shortcut learning. "A physician would generally expect a finding of COVID-19 from an X-ray to be based on specific patterns in the image that reflect disease processes," said Alex DeGrave, a medical science student at the American university and co-author of a paper published this week in Nature Machine Intelligence. "But, rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and, thus, infer that they are more likely to have the disease because it is more common in older patients. The shortcut is not wrong per se, but the association is unexpected and not transparent. And, that could lead to an inappropriate diagnosis."
- Government > Military (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.47)
- Health & Medicine > Therapeutic Area > Immunology (0.47)
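The shortcut-learning failure described in the article above is easy to reproduce on synthetic data. The sketch below is not the University of Washington experiment – the dataset, feature names, and numbers are fabricated – but it shows the mechanism: when a demographic proxy ("age") separates the classes far more cleanly than a noisy stand-in for genuine image findings, a trained model leans on the proxy.

```python
# Synthetic illustration of shortcut learning (fabricated data, not the study's).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)  # 1 = disease present (synthetic label)

# Shortcut feature: positives are simply older on average (cleanly separable).
age = np.where(y == 1, rng.normal(70, 8, n), rng.normal(45, 8, n))
# "Real" feature: a weak, noisy stand-in for disease patterns in the image.
signal = y + rng.normal(0.0, 3.0, n)

X = StandardScaler().fit_transform(np.column_stack([age, signal]))
model = LogisticRegression().fit(X, y)
print("standardized weights [age, image-signal]:", model.coef_[0])
# The age weight dominates: the model has learned the demographic shortcut.
```

Because the shortcut is genuinely predictive in the training data, ordinary accuracy metrics will not flag it; the association stays "not transparent" until the model's reasoning is inspected.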
Have autonomous robots started killing in war? The reality is messier than it appears
It's the sort of thing that can almost pass for background noise these days: over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: "The Age of Autonomous Killer Robots May Already Be Here." As you might guess, it's a hard question to answer. The new coverage has sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensational, while others suggested there was a nugget of truth to the discussion.
A U.N. Report Suggests Libya Saw The First Battlefield Killing By An Autonomous Drone
A company-provided photo of a Kargu Rotary Wing Attack Drone Loitering Munition System manufactured by the STM defense company of Turkey. A U.N. report says the weapons system was used in Libya in March 2020. Military-grade autonomous drones can fly themselves to a specific location, pick their own targets and kill without the assistance of a remote human operator.
- Africa > Middle East > Libya (1.00)
- Asia > Middle East (0.75)
Military drones may have attacked humans for first time without being instructed to, UN report says
A military drone may have autonomously attacked humans for the first time without being instructed to do so, according to a recent report to the UN Security Council. The report, published in March, claimed that the AI drone – a Kargu-2 quadcopter produced by Turkish military tech company STM – attacked retreating soldiers loyal to Libyan General Khalifa Haftar. The 548-page report by the UN Security Council's Panel of Experts on Libya does not say whether the incident caused any deaths, but it raises the question of whether global efforts to ban killer autonomous robots before they are built may be futile. Over the course of the year, the UN-recognized Government of National Accord pushed the Haftar Affiliated Forces (HAF) back from the Libyan capital Tripoli, and the drone may have been operational since January 2020, the experts noted. "Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2," the UN report noted.
Killer Drone Autonomously 'Hunted Down' a Human Target, UN Experts Say
A "lethal" weaponized drone "hunted down" and "remotely engaged" human targets without its handlers' say-so during a conflict in Libya last year, according to a United Nations report first covered by New Scientist this week. Whether there were any casualties remains unclear, but if confirmed, it would likely be the first recorded death carried out by an autonomous killer robot. In March 2020, a Kargu-2 attack quadcopter, which the agency called a "lethal autonomous weapon system," targeted retreating soldiers and convoys led by Libyan National Army's Khalifa Haftar during a civil conflict with Libyan government forces. "The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true'fire, forget and find' capability," the UN Security Council's Panel of Experts on Libya wrote in the report. It remains unconfirmed whether any soldiers were killed in the attack, although the UN experts imply as much.