military


In Army of None, a field guide to the coming world of autonomous warfare

#artificialintelligence

The Silicon Valley-military industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract around Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world joined petitions calling for a boycott of any research that could be used in autonomous warfare. For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that merely defining these machines will take enormous effort to work out between nations, let alone handling their effects.


Microsoft's Ethical Reckoning Is Here

WIRED

Microsoft has become the latest company dragged into the tech industry's ethical reckoning over the use of its products by government agencies. On Sunday, critics noted a blog post from January in which Microsoft touted its work with US Immigration and Customs Enforcement (ICE). The post celebrated a government certification that allowed Microsoft Azure, the company's cloud-computing platform, to handle sensitive unclassified information for ICE. The sales-driven blog post outlined ways that ICE might use Azure Government, including enabling ICE employees to "utilize deep learning capabilities to accelerate facial recognition and identification," Tom Keane, a general manager at Microsoft, wrote. "The agency is currently implementing transformative technologies for homeland security and public safety, and we're proud to support this work with our mission-critical cloud," the post added.


Relax, Google, the Robot Army Isn't Here Yet

#artificialintelligence

People can differ on their perceptions of "evil." People can also change their minds. Still, it's hard to wrap one's head around how Google, famous for its "don't be evil" company motto, dealt with a small Defense Department contract involving artificial intelligence. Facing a backlash from employees, including an open letter insisting the company "should not be in the business of war," Google in April grandly defended its involvement in a project "intended to save lives and save people from having to do highly tedious work." Less than two months later, chief executive officer Sundar Pichai announced that the contract would not be renewed, writing equally grandly that Google would shun AI applications for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."


Google CEO bans autonomous weapons in new AI guidelines

#artificialintelligence

Google today released guidelines for the creation of artificial intelligence, which include a ban on making autonomous weaponry and on most applications of AI with the potential to harm people. The guidelines emerge just days after Google announced it would not renew its contract with the U.S. Department of Defense to analyze drone footage. As a self-described AI-first company, owner of the Kaggle data science platform, steward of the popular open source TensorFlow framework, and employer of prominent researchers, Google is one of the most influential companies in AI. "We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue," CEO Sundar Pichai said in a blog post. In the post, Pichai spells out the principles that should be considered when creating AI, as well as applications of AI that Google will not pursue.


Google rules out using artificial intelligence for weapons

#artificialintelligence

SAN FRANCISCO (AFP) - Google announced on Thursday (June 7) it would not use artificial intelligence for weapons or to "cause or directly facilitate injury to people", as it unveiled a set of principles for the technologies. Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" such as cyber security, training, or search and rescue. The news comes with Google facing an uproar from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed. Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behaviour. He said Google is using AI "to help people tackle urgent problems" such as prediction of wildfires, helping farmers, diagnosing disease or preventing blindness.


Google Sets Limits on Its Use of AI, but Allows Defense Work

WIRED

Earlier this year, Google CEO Sundar Pichai described artificial intelligence as more profound to humanity than fire. Thursday, after protests from thousands of Google employees over a Pentagon project, Pichai offered guidelines for how Google will--and won't--use the technology. One thing Pichai says Google won't do: work on AI for weapons. But the guidelines leave much to the discretion of company executives and allow Google to continue to work for the military. The ground rules are a response to more than 4,500 Googlers signing a letter protesting the company's involvement in a Pentagon project called Maven that uses machine learning to interpret drone surveillance video.


Drones Are Here to Stay. Get Used to It

#artificialintelligence

When Hurricane Maria hit Puerto Rico last September, it ravaged the island's electrical grid and communications systems. For weeks, many of the approximately 5 million Puerto Ricans living in the mainland U.S. were unable to reach their loved ones. While recovery groups worked to restore power and deliver aid, cell providers scrambled to repair their networks. To get its service back up and running, AT&T tried something new: the Flying COW, a tethered drone that beamed mobile data signals up to 40 miles in all directions. "As soon as we turned it on, people just started connecting to it instantly," says Art Pregler, AT&T's Unmanned Aircraft Systems program director.


Kick-starting AI in Armed Forces - SP's MAI

#artificialintelligence

News reports of May 20, 2018, say the government has embarked upon an ambitious defence project to incorporate artificial intelligence (AI) to enhance the operational preparedness of the armed forces in a significant way. The project is to include equipping the military with unmanned tanks, vessels, aerial vehicles and robotic weaponry. The DRDO has been talking of preparing the military for robotic warfare for the past several years, but without much to show on the ground other than a few applications. DRDO's Centre for Artificial Intelligence & Robotics (CAIR) has developed a range of robots with varied applications, and is also developing: a man-portable Unmanned Ground Vehicle (UGV) for low-intensity conflicts and surveillance in urban scenarios; wall-climbing and flapping-wing robots; walking robots with four and six legs for logistics support; and Network Traffic Analysis (NETRA), which can monitor internet traffic. But considering the pace at which developments are taking place, particularly in China in combining robotics and AI, our slow progress in this field is liable to leave us at a huge asymmetric disadvantage.


Google to drop Pentagon AI contract after employees called it the 'business of war'

Washington Post

Google will not seek to extend its contract next year with the Department of Defense for artificial intelligence used to analyze drone video, ending a controversial alliance that had raised alarms over the technological build-up between Silicon Valley and the military. The tech giant will stop working on its piece of the military's AI project known as Project Maven when its 18-month contract expires in March, a source familiar with Google's thinking told The Washington Post. Diane Greene, the chief executive of Google's influential cloud-computing business, told employees of the decision at an internal meeting Friday, first reported by Gizmodo. Google, which declined to comment, has faced widespread public backlash and employee resignations for helping develop technological tools that could aid in warfighting. The source said Google would soon release new company principles related to the ethical uses of AI.