General Atomics Aeronautical Systems, Inc. (GA-ASI) has been awarded a contract by the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC) to develop enhanced autonomous sensing capabilities for unmanned aerial vehicles (UAVs). The JAIC Smart Sensor project aims to advance drone-based AI technology by demonstrating object recognition algorithms and employing onboard AI to automatically control UAV sensors and direct autonomous flight. GA-ASI will deploy these new capabilities on an MQ-9 Reaper UAV equipped with a variety of sensors, including GA-ASI's Reaper Defense Electronic Support System (RDESS) and Lynx Synthetic Aperture Radar (SAR). GA-ASI's Metis Intelligence, Surveillance and Reconnaissance (ISR) tasking and intelligence-sharing application, which enables operators to specify effects-based mission objectives and receive automatic notification of actionable intelligence, will be used to command the unmanned aircraft. J.R. Reid, GA-ASI Vice President of Strategic Development, commented: "GA-ASI is excited to leverage the considerable investment we have made to advance the JAIC's autonomous sensing objective. This will bring a tremendous increase in unmanned systems capabilities for applications across the full range of military operations."
On Thursday, 26 November, Prof. Andrew Murray will deliver the Sixth T.M.C. Asser Lecture – 'Almost Human: Law and Human Agency in the Time of Artificial Intelligence'. Asser Institute researcher Dr. Dimitri Van Den Meerssche had the opportunity to speak with Professor Murray about his perspective on the challenges posed by Artificial Intelligence to our human agency and autonomy – the backbone of the modern rule of law. A conversation on algorithmic opacity, the peril of dehumanization, the illusory ideal of the 'human in the loop' and the urgent need to go beyond 'ethics' in the international regulation of AI.

One central observation in your Lecture is how Artificial Intelligence threatens human agency. Could you elaborate on your understanding of human agency and how it is being threatened?

In my Lecture I refer to the definition of agency by legal philosopher Joseph Raz. He argues that to be fully in control of one's own agency and decisions you need capacity, the availability of options, and the freedom to exercise that choice without interference. My claim is that there are four ways in which the adoption and use of algorithms affect our autonomy, and particularly Raz's third requirement: that we are free from coercion. First, there is an internal and positive impact. This happens when an algorithm gives us choices that have been limited by pre-determined values – values that we cannot observe. The second impact is internal and negative. In this scenario, choices are removed because of pre-selected values.
The Joint Artificial Intelligence Center was established in 2018 to accelerate the DOD's adoption and integration of artificial intelligence. From the start, it was meant to serve as an AI center of excellence and to provide resources, tools and expertise to the department. The JAIC's new director said that while the center's early efforts bore fruit, the overall effort was not transformational enough and a more aggressive approach is needed. "In JAIC 1.0, we helped jumpstart AI in the DOD through Pathfinder projects we called mission initiatives," said Marine Corps Lt. Gen. Michael S. Groen, during a briefing today at the Pentagon. "We learned a great deal and brought onboard some of the brightest talent in the business. When we took stock, however, we realized that this was not transformational enough. We weren't going to be in a position to transform the department through the delivery of use cases."
Research being conducted by the U.S. Army Combat Capabilities Development Command (DEVCOM) is focused on a new machine learning approach that could improve radar performance in congested environments. Researchers from the DEVCOM Army Research Laboratory and Virginia Tech have developed an automatic way for radars to operate in the congested, limited-spectrum environments created by commercial 4G LTE and future 5G communications systems. The researchers examined how future Department of Defense radar systems will share the spectrum with commercial communications systems. The team used machine learning to learn the behavior of ever-changing interference in the spectrum and to find clean spectrum in which to maximize radar performance. Once clean spectrum is identified, waveforms can be modified to best fit into it.
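As a minimal illustration of the idea (not the DEVCOM/ARL algorithm), learning interference behavior and selecting clean spectrum can be sketched as follows. The sub-band names, the simulated interference model, and the exponential-moving-average "learner" are all assumptions made for this example:

```python
# Sketch of spectrum-aware band selection: track interference power per
# sub-band over time and pick the cleanest one for the radar waveform.
import random

random.seed(0)

BANDS = ["B0", "B1", "B2", "B3"]   # hypothetical sub-bands
ALPHA = 0.3                        # smoothing factor for the running estimate

def sense(band):
    """Simulated interference power (dB) in a band; B2 is kept quietest."""
    base = {"B0": -60.0, "B1": -55.0, "B2": -90.0, "B3": -70.0}[band]
    return base + random.gauss(0.0, 3.0)   # noisy measurement

def select_clean_band(estimates, rounds=50):
    """Update an exponential moving average of each band's interference,
    then return the band currently estimated to be cleanest."""
    for _ in range(rounds):
        for band in BANDS:
            estimates[band] = (1 - ALPHA) * estimates[band] + ALPHA * sense(band)
    return min(estimates, key=estimates.get)

estimates = {b: 0.0 for b in BANDS}
clean = select_clean_band(estimates)
print("cleanest band:", clean)   # the waveform would be shaped to fit here
```

A real system would replace the moving average with a learned model of the interference dynamics and would re-run the selection continuously as the spectrum changes, but the control loop, sense, estimate, select, adapt, has the same shape.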
The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen (or shouldn't happen) unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur, and it's inevitable they will. That's one reason Nvidia's car is still experimental.
MIT Lincoln Laboratory has established a new research and development division, the Biotechnology and Human Systems Division. The division will address emerging threats to both national security and humanity. Research and development will encompass advanced technologies and systems for improving chemical and biological defense, human health and performance, and global resilience to climate change, conflict, and disasters. "We strongly believe that research and development in biology, biomedical systems, biological defense, and human systems is a critically important part of national and global security. The new division will focus on improving human conditions on many fronts," says Eric Evans, Lincoln Laboratory director.
The term artificial intelligence was coined in 1956, but AI has become more mainstream today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage. Early AI research in the 1950s explored topics such as problem solving and symbolic methods. During the 1960s, the US Department of Defense took an interest in this kind of work and began training computers to mimic basic human reasoning. For instance, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. DARPA went on to produce intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names. Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
CLASSIC DOGFIGHTS, in which two pilots match wits and machines to shoot down their opponent with well-aimed gunfire, are a thing of the past. Guided missiles have seen to that, and the last recorded instance of such duelling was 32 years ago, near the end of the Iran-Iraq war, when an Iranian F-4 Phantom took out an Iraqi Su-22 with its 20mm cannon. But memory lingers, and dogfighting, even of the simulated sort in which the laws of physics are substituted by equations running inside a computer, is reckoned a good test of the aptitude of a pilot in training. And that is also true when the pilot in question is, itself, a computer program. So, when America's Defence Advanced Research Projects Agency (DARPA), an adventurous arm of the Pentagon, considered the future of air-to-air combat and the role of artificial intelligence (AI) within that future, it began with basics that Manfred von Richthofen himself might have approved of.