AI Spotlight: Paul Scharre On Weapons, Autonomy, And Warfare
Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was named one of Bill Gates' top five books of 2018.
Aswin Pranam: To start, what qualifies as an autonomous weapon?
Paul Scharre: An autonomous weapon, quite simply, makes its own decisions about whom to engage on the battlefield. The core challenge is in figuring out which of those decisions matter.
The civilian private sector: part of a new arms control regime? ORF
Four years ago, I stood in the darkened operations center in front of a wall of blinking screens, arms crossed and squinting at video footage on one of them. The commander asked me for the second time, signaling toward the figure on the screen. I looked over and reviewed a mental checklist of the individual's pattern of life over more than a decade. I weighed this against his latest movements, reflected on the screen in real time. The commander took a step toward me and started again: "Kara, we are running out of time." I had a decision to make. Using a machine to determine the validity of the target and take action is a nonstarter. But not everyone agrees on the details. Though the machines I dealt with that day were only semi-autonomous, it is not difficult to imagine a world where fully autonomous weapons are programmed to make a lethal decision. Institutions, countries, industry, and society must choose when and how to govern this technology in today's world, where semi-autonomous ...
- Asia > South Korea (0.14)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- (32 more...)
- Media (1.00)
- Law (1.00)
- Information Technology (1.00)
- (4 more...)
A Tech Group Suggests Limits for the Pentagon's Use of AI
The Pentagon says artificial intelligence will help the US military become still more powerful. On Thursday, an advisory group including executives from Google, Microsoft, and Facebook proposed ethical guidelines to prevent military AI from going off the rails. The advice came from the Defense Innovation Board, created under the Obama administration to help the Pentagon tap tech industry expertise, and chaired by Eric Schmidt, Google's former CEO and chairman. Last year, the department asked the group to develop ethical principles for its AI projects. On Thursday, the group released a set of proposed principles in a report that praises the power of military AI while also warning about unintended harms or conflict.
- North America > United States (0.76)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Asia > China (0.05)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.76)
- Information Technology > Communications > Social Media (0.47)
- Information Technology > Artificial Intelligence > Applied AI (0.41)
Misinformation woes could multiply with 'deepfake' videos
If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don't adjust your television set -- you may just be witnessing the future of 'fake news.' 'Deepfake' videos that manipulate reality are becoming more sophisticated due to advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences. As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors.
[Photo caption: Paul Scharre of the Center for a New American Security looks at a 'deepfake' video of former US President Barack Obama, manipulated to show him speaking words from actor Jordan Peele, on January 24, 2019, in Washington.]
'We're not quite to the stage where we are seeing deepfakes weaponized, but that moment is coming,' Robert Chesney, a University of Texas law professor who has researched the topic, told AFP. Chesney argues that deepfakes could add to the current turmoil over disinformation and influence operations. 'A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy's supposed atrocities, or exacerbate political divisions in a society,' Chesney and University of Maryland professor Danielle Citron said in a blog post for the Council on Foreign Relations.
- North America > United States > Texas (0.25)
- North America > United States > Maryland (0.25)
- North America > United States > Oklahoma > Payne County > Cushing (0.06)
- North America > United States > New York (0.05)
- Media > News (1.00)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- (2 more...)
Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems
Tech leaders, including Elon Musk and the three co-founders of Google's AI subsidiary DeepMind, have signed a pledge promising to not develop "lethal autonomous weapons." It's the latest move from an unofficial and global coalition of researchers and executives that's opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to "[select] and [engage] targets without human intervention" pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life "should never be delegated to a machine." On the pragmatic front, they say that the spread of such weaponry would be "dangerously destabilizing for every country and individual."
Relax, Google, the Robot Army Isn't Here Yet
People can differ on their perceptions of "evil." People can also change their minds. Still, it's hard to wrap one's head around how Google, famous for its "don't be evil" company motto, dealt with a small Defense Department contract involving artificial intelligence. Facing a backlash from employees, including an open letter insisting the company "should not be in the business of war," Google in April grandly defended involvement in a project "intended to save lives and save people from having to do highly tedious work." Less than two months later, chief executive officer Sundar Pichai announced that the contract would not be renewed, writing equally grandly that Google would shun AI applications for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
- Asia > India (0.14)
- North America > United States > California (0.04)
- Europe > Russia (0.04)
- (4 more...)
- Government > Military (1.00)
- Information Technology (0.95)
- Government > Regional Government > North America Government > United States Government (0.95)
War Machines: Artificial Intelligence in Conflict
Having invented the first machine gun, Richard John Gatling explained (or at least justified) his invention in a letter to a friend in 1877: With such a machine, it would be possible to replace 100 men with rifles on the battlefield, greatly reducing the number of men injured or killed. This sentiment, replacing soldiers--or at least protecting them from harm to the greatest extent possible through the inventions of science and technology--has been a thoroughly American ambition since the Civil War. And now, with developments in computing, artificial intelligence and robotics, it may soon be possible to replace soldiers entirely. Only this time America is not alone and may not even be in the lead. Many countries in the world today, including Russia and China, are believed to be developing weapons that will have the ability to operate autonomously--discover a target, make the decision to engage and then attack, without human intervention.
- Europe > Russia (0.25)
- Asia > Russia (0.25)
- Asia > China (0.25)
- North America > United States (0.15)
Artificial Intelligence and Global Security Summit
The Artificial Intelligence and Global Security Summit will bring together technology leaders and top policymakers to explore the state of artificial intelligence and discuss the implications of the AI revolution for global security. Past industrial revolutions led to changes in the balance of power between nations and even the fundamental building blocks of power, with coal- and steel-producing nations benefiting and oil becoming a global strategic resource. The AI revolution has similar transformative potential to alter power dynamics, the character of conflict, and strategic stability among nations and private actors. The United States must anticipate these changes and capitalize on opportunities to stay ahead of competitors. To anticipate these challenges, CNAS' all-day summit will explore technology trends, uncertainties, and possible trajectories for how AI may affect global security.
- North America > United States > Wyoming (0.06)
- North America > United States > Tennessee (0.06)
- Information Technology > Security & Privacy (0.78)
- Government > Military (0.60)
- Education > Educational Setting > Higher Education (0.33)