International talks on rules for AI-based weapons hit snags

The Japan Times

International negotiations to regulate artificial intelligence-based weapons are encountering difficulties, with Japan, Germany and others backing international rules but maintaining a cautious stance on a treaty to prohibit killer robots. Behind their muted approach is a fear that countries developing autonomous weapons would shun such a treaty anyway, diminishing the significance of any international regulatory effort. Countries thus agree on the need to prevent lethal autonomous weapons from running out of control, but differ over how to attain that objective. Germany hosted an online meeting in early April, amid the COVID-19 pandemic, to facilitate talks on the control of killer robots under the framework of the U.N. Convention on Certain Conventional Weapons (CCW). Representatives of more than 60 countries and regions, including the United States and Israel, both developers of AI weapons, as well as the European Union, the United Nations and nongovernmental organizations, logged in to participate in the forum.


Episode 34 Balancing AI: Privacy, Misuse, Ethics and the Future - F-Secure Blog

#artificialintelligence

While AI and machine learning are enabling significant advances in the digital world, these technologies are also raising privacy and ethical concerns. What does AI mean for personal privacy, and is it being exploited unethically? Are these concerns being addressed, or will AI spell disaster for society? Bernd Stahl is coordinator of the EU's SHERPA project, a consortium that investigates the impact of AI on ethics and human rights. Bernd stopped by for episode 34 of Cyber Security Sauna to discuss the delicate balance of AI: its advantages and disadvantages, its potential misuses, and how AI may improve life and create opportunity for some while others are hurt by algorithmic biases and unemployment. Listen, or read on for the transcript. And don't forget to subscribe, rate and review! Janne: So Bernd, how would you frame the work that the SHERPA project is doing? Bernd: SHERPA is trying to explore which ethical issues arise due to the use of AI. We're looking at human rights components in a variety of ways, and, as part of the overall work of the project, we are trying to explore which options exist for addressing possible ethical and human rights issues, which of those are important, and which need to be emphasized. Overall we hope to come up with a set of recommendations and proposals for the European Commission, and for other stakeholders, that will help them deal with any issues they may encounter.


AI Spotlight: Paul Scharre On Weapons, Autonomy, And Warfare

#artificialintelligence

Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was named one of Bill Gates' top five books of 2018. Aswin Pranam: To start, what qualifies as an autonomous weapon? Paul Scharre: An autonomous weapon, quite simply, makes its own decisions about whom to engage on the battlefield. The core challenge is in figuring out which of those decisions matter.
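
Scharre's definition turns on where the human sits in the engagement loop. As a rough illustration, here is a minimal Python sketch of the in-the-loop / on-the-loop / out-of-the-loop taxonomy he discusses in Army of None; the enum names and the gate function are hypothetical illustrations, not any real weapon-control API.

```python
# Illustrative sketch only: a hypothetical encoding of the autonomy
# taxonomy (human in / on / out of the loop) discussed by Scharre.
from enum import Enum

class EngagementAutonomy(Enum):
    HUMAN_IN_THE_LOOP = "human selects every target"       # semi-autonomous
    HUMAN_ON_THE_LOOP = "human supervises and can veto"    # supervised
    HUMAN_OUT_OF_THE_LOOP = "machine selects and engages"  # fully autonomous

def is_autonomous_weapon(mode: EngagementAutonomy) -> bool:
    """Under Scharre's definition, only a system that makes the
    engagement decision itself counts as an autonomous weapon."""
    return mode is EngagementAutonomy.HUMAN_OUT_OF_THE_LOOP
```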


Rise of the killer robots: The future of war

#artificialintelligence

The recent tit-for-tat missile strikes between the US and Iran show how war has changed in the 21st century. Technology has brought new capabilities for killing at a distance, and what we are seeing today with long-range, so-called "precision missiles" is a harbinger of the next generation of warheads. Autonomous weaponry and "killer robots" sound like the stuff of science fiction, but various governments, including the US and Russia, are investing heavily in their development. Turkey has teamed up with a defence contractor to deploy kamikaze drones with biometric facial recognition to the Syrian border this year, while the Israeli-developed Harpy "loitering munition" – which hangs about in the sky looking for an unrecognised radar signal to strike – has been sold to several countries, including India and China. For cloud computing expert Laura Nolan, this issue became personal in early 2018 when, while working for Google, she discovered the tech giant had secretly signed up to the US military's artificial intelligence project Maven.


The Risks of Artificial Intelligence

#artificialintelligence

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: "Mark my words," he said, billionaire-casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes." No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well. "I am really quite close… to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential." Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year earlier, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.


When AI is a tool and when it's a weapon

#artificialintelligence

The immense capabilities artificial intelligence is bringing to the world would have been inconceivable to past generations. But even as we marvel at the incredible power these new technologies afford, we're faced with complex and urgent questions about the balance of benefit and harm. When most people ponder whether AI is good or evil, what they're essentially trying to grasp is whether AI is a tool or a weapon. Of course, it's both: it can help reduce human toil, and it can also be used to create autonomous weapons. Either way, the ensuing debates touch on numerous unresolved questions and are critical to paving the way forward.


Not smart enough: The poverty of European military thinking on artificial intelligence

#artificialintelligence

"Artificial intelligence" (AI) has become one of the buzzwords of the decade, as a potentially important part of the answer to humanity's biggest challenges in everything from addressing climate change to fighting cancer and even halting the ageing process. It is widely seen as the most important technological development since the mass use of electricity, one that will usher in the next phase of human evolution. At the same time, some warnings that AI could lead to widespread unemployment, rising inequality, the development of surveillance dystopias, or even the end of humanity are worryingly convincing. States would, therefore, be well advised to actively guide AI's development and adoption into their societies. For Europe, 2019 was the year of AI strategy development, as a growing number of EU member states put together expert groups, organised public debates, and published strategies designed to grapple with the possible implications of AI. European countries have developed training programmes, allocated investment, and made plans for cooperation in the area. Next year is likely to be an important one for AI in Europe, as member states and the European Union will need to show that they can fulfil their promises by translating ideas into effective policies. But, while Europeans are doing a lot of work on the economic and societal consequences of the growing use of AI in various areas of life, they generally pay too little attention to one aspect of the issue: the use of AI in the military realm. Strikingly, the military implications of AI are absent from many European AI strategies, as governments and officials appear uncomfortable discussing the subject (with the exception of the debate on limiting "killer robots"). Similarly, the academic and expert discourse on AI in the military also tends to overlook Europe, predominantly focusing on developments in the US, China, and, to some extent, Russia. This is likely because most researchers consider Europe to be an unimportant player in the area.


AI expert warns against 'racist and misogynist algorithms'

Daily Mail - Science & tech

A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions. Across the globe, algorithms are beginning to oversee various processes, from job applications and immigration requests to bail terms and welfare applications. Military researchers are even exploring whether facial recognition technology could enable autonomous drones to identify their own targets. However, University of Sheffield computer expert Noel Sharkey told the Guardian that such algorithms are 'infected with biases' and cannot be trusted. Calling for a halt on all AI with the potential to change people's lives, Professor Sharkey instead advocates vigorous testing before such systems are used in public.


AI expert calls for end to UK use of 'racially biased' algorithms

The Guardian

An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately. Prof Noel Sharkey, who is also a leading figure in a global campaign against "killer robots", said algorithms were so "infected with biases" that their decision-making processes could not be fair or trusted. A moratorium must be imposed on all "life-changing decision-making algorithms" in Britain, he said. Sharkey has suggested testing AI decision-making machines in the same way as new pharmaceutical drugs are vigorously checked before they are allowed on to the market. In an interview with the Guardian, the Sheffield University robotics/AI pioneer said he was deeply concerned over a series of examples of machine-learning systems being loaded with bias.
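
Sharkey's pharmaceutical analogy suggests pre-deployment acceptance testing for decision-making algorithms. As a minimal sketch of what one such check might look like, here is a hypothetical demographic-parity audit in Python; the function names, the 5% threshold and the audit gate are illustrative assumptions, not any established standard.

```python
# A minimal, hypothetical sketch of a pre-deployment bias audit in the
# spirit of Sharkey's "test it like a drug" proposal. Names and the
# acceptance threshold are illustrative assumptions.
from typing import Callable, Sequence

def demographic_parity_gap(
    predict: Callable[[Sequence[float]], int],   # model under test: returns 0/1
    samples: Sequence[Sequence[float]],          # held-out test inputs
    groups: Sequence[str],                       # protected-group label per input
) -> float:
    """Largest difference in positive-decision rates between any two groups."""
    outcomes: dict[str, list[int]] = {}
    for features, group in zip(samples, groups):
        outcomes.setdefault(group, []).append(predict(features))
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

def passes_bias_audit(predict, samples, groups, max_gap: float = 0.05) -> bool:
    """Block deployment if group decision rates diverge by more than max_gap."""
    return demographic_parity_gap(predict, samples, groups) <= max_gap
```

A real audit would cover far more criteria (error rates per group, calibration, equalised odds), but even this single check makes the point concrete: a system can be rejected on measurable grounds before it ever touches a live decision.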