Automated recognition and talent management are on the list of priorities for the US Army's AI Task Force, the result of its year-long collaboration with Carnegie Mellon University (CMU) on incorporating artificial intelligence (AI) into US Army systems. The task force's access to sensors, various electro-mechanical devices, and computing capabilities is enabling it to create AI for other applications under the 2018 directive, which states, "The Army is establishing the Army-AI Task Force (A-AI TF) that will narrow an existing AI capability gap by leveraging current technological applications to enhance our warfighters, preserve peace, and, if required, fight to win." Five university staffers at the National Robotics Engineering Center, an integral part of CMU's Robotics Institute, have formed an AI Hub to work directly with the Army task force, which is starting to fill gaps in its systems with AI. In March, the US Army invested US$72 million in a five-year fundamental AI research effort to discover capabilities for augmenting military personnel, optimizing operations, increasing readiness, and reducing casualties. According to the Combat Capabilities Development Command Army Research Laboratory (ARL), the US Army's corporate laboratory, CMU will lead a consortium of universities working with the Army lab to accelerate research and development of advanced algorithms, autonomy, and AI to enhance national security and defense.
WASHINGTON – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons. Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future. "Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" the report's authors asked. The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning that such weapons would jeopardize international security and herald a third revolution in warfare after gunpowder and the atomic bomb. A panel of government experts debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on Wednesday.
The Cannon-Delivered Area Effects Munitions (C-DAEM) is a new 155-millimeter artillery round in development for the Army's M777 howitzer, M109A6 Paladin self-propelled howitzer and new XM1299 self-propelled howitzer. The high-tech shell will be able to guide itself toward its intended target, even in areas where GPS is jammed by enemy forces. The munition, which has a 43-mile range, will take more than a minute to reach its target, and can slow down and guide itself on the way. This makes it easier for the Army to hit targets that move around, such as vehicles and infantry, although it cannot yet hit a moving target. Popular Mechanics notes that C-DAEM will replace the dual purpose improved conventional munition (DPICM), a type of cluster munition that made up for a lack of precision by scattering bomblets above the battlefield, ensuring it would at least do some damage to its target even if it didn't hit directly.
Artificial intelligence may soon be deciding who lives or dies. The US Army wants to build smart missiles that will use AI to select their targets, out of reach of human oversight. The project has raised concerns that the missiles will be a form of lethal autonomous weapon – a technology many people are campaigning to ban. The US Army's project is called Cannon-Delivered Area Effects Munition (C-DAEM). Companies will bid for the contract to build the weapon, with the requirements stating it should be able to hit "moving and imprecisely located armoured targets" …
File photo - M1A1 Abrams main battle tanks assigned to 3rd Battalion, 67th Armored Regiment, 2nd Armored Brigade Combat Team, 3rd Infantry Division stage prior to a tactical movement during Spartan Focus, at Fort Stewart, Ga. When dismounted U.S. Army infantry are attacking fortified enemy positions, taking hostile fire and moving quickly to find the best points for continued assault, "battery life" can determine mission success or failure, and even life or death. Forward-positioned Army units may not have quick access to battery recharging and may therefore depend entirely upon the functionality of their batteries, which are needed to power night vision, radios, small soldier-worn sensors, portable laptops for drone control and other combat-essential items. Without the requisite battery power to advance, soldiers might be forced to retreat or, of even greater consequence, become far more vulnerable to enemy fire. It goes without saying that attacking soldiers, especially those on the move on foot, need lightweight, electrically powered equipment for communications, data sharing, enemy tracking, targeting and some weaponry.
ELYAKIM ARMY BASE, ISRAEL – Israel, a world leader in hi-tech, is around 30 years away from its ambition of deploying robot forces, and for now will choose between three prototypes of semi-automated armored vehicles to cocoon its troops in battle, defense officials said on Sunday. Israel has long eyed a future robot army as a means of reducing the use of soldiers on its combustible fronts with Gaza, Lebanon and Syria, just as its air force has increasingly relied on pilotless drones. The country draws most of its military personnel from teenage conscripts. An unveiling of Israel's newly developed operating suites for ground fighting vehicles made clear it plans to keep soldiers at the controls, albeit entirely insulated from the outside: hatches battened, the cabins will have smart-screens, fed by outside cameras and sensors, instead of windows or ports. "Now the people will be sitting in the tank, it's closed, they are far better protected, and they can advance without worrying about snipers or other things," said Brig.
Are technology companies running too fast into the future and creating things that could potentially wreak havoc on humankind? That question has been swirling around in my head ever since I saw the enthralling science-fiction film "Ex Machina." The movie offers a clever version of the robots versus humans narrative. But what makes "Ex Machina" different from the usual special-effects blockbuster is the ethical questions it poses. Foremost among them is something that most techies don't seem to want to answer: Who is making sure that all of this innovation does not go drastically wrong?
SINGAPORE - The use of artificial intelligence (AI) will not make soldiers obsolete, but it might redefine their roles in the future, said expert panellists at a technology summit on Thursday (June 27). In fact, soldiers could become more proficient if they are technically skilled and are able to make use of AI systems to their advantage, they added. Speaking at the second Singapore Defence Technology Summit, Israeli computer scientist Yaniv Altshuler said it is "not so unlikely" to foresee a future where soldiers operate a swarm of armoured, autonomous, or semi-autonomous vehicles remotely. "Now, you see soldiers or pilots actually graduating from flying academies and they are flying drones. Do you call them pilots, or are pilots now redundant?... I think the same would be with soldiers," he said.
The Pentagon is set to award a $10bn "war cloud" contract to a technology company next month, with both Amazon and Microsoft competing for the chance to build a military-grade AI computing system. The Joint Enterprise Defence Infrastructure (Jedi) plan faces a number of obstacles before the US defence department makes its decision next month, not least from within the companies' own workforces. Microsoft employees published an open letter on Medium last year, urging the tech giant not to bid on the Jedi contract. "Many Microsoft employees don't believe that what we build should be used for waging war," they wrote.
The MIT Machine Intelligence Community began with a few friends meeting over pizza to discuss landmark papers in machine learning. Three years later, the undergraduate club boasts 500 members, an active Slack channel, and an impressive lineup of student-led reading groups and workshops meant to demystify machine learning and artificial intelligence (AI) generally. This year, MIC and MIT Quest for Intelligence joined forces to advance their common cause of making AI tools accessible to all. Starting last fall, the MIT Quest opened its offices to MIC members and extended access to IBM and Google-donated cloud credits, providing a boost of computing power to students previously limited to running their AI models on desktop machines loaded with extra graphics processors. The MIT Quest and MIC are now collaborating on a host of projects, independently and through MIT's Undergraduate Research Opportunities Program (UROP).