Raytheon's intelligence and space business is partnering with C3.ai, a software company known for its predictive maintenance business with the U.S. Air Force, the companies announced Monday. The alliance between C3.ai and Raytheon Intelligence and Space aims to speed up artificial intelligence adoption across the U.S. military. The partnership will pair Raytheon's expertise in the defense and aerospace sector with C3.ai's artificial intelligence development and applications. "The military and intelligence community have access to more data now than at any time in history, but it's more than they're able to make quick use of," said David Appel, vice president of defense and civil solutions for space and C2 systems under Raytheon Intelligence and Space. "Artificial intelligence can be used to help them make sense of that data, which will allow them to make smarter decisions faster on the battlefield."
Over the past decade, artificial intelligence (AI) has experienced a renaissance. AI enables machines to learn and make decisions without being explicitly programmed, and it has enabled a new generation of applications, opening the door to breakthroughs in many aspects of daily life. From situational awareness to threat detection, online signals to system assurance, PNNL is advancing the frontiers of scientific research and national security by applying AI to scientific problems. For machine learning models, domain-specific knowledge can complement domain-agnostic data, improving accuracy, interpretability, and defensibility. PNNL's AI research has been applied across a variety of domains, from national security to the electric grid and Earth systems.
Following the revolutions in military affairs brought about by gunpowder and nuclear weapons, we find ourselves once again at the dawn of a new era of warfare: The Age of Autonomous Systems. Using cutting-edge technologies for military purposes, especially from the field of Artificial Intelligence, will radically transform how wars will be fought in the near future. LAWS (Lethal Autonomous Weapon Systems) is a critical acronym for understanding warfare in the 21st century. LAWS encompass any weapon system with autonomy in its critical functions, namely one which can select (i.e., search for or detect, identify, track, and select) and attack (i.e., use force against, neutralise, damage or destroy) targets without human intervention. While technically accurate, 'LAWS' is admittedly a less emphatic term than the one used by a global coalition of non-governmental organisations, coordinated by Human Rights Watch and formed in October 2012, that is working to ban LAWS outright -- or as they call them, 'Killer Robots'.
According to a report published November 24 on the website of the National Institute of Standards and Technology, a multi-institutional team from the National Institute of Standards and Technology, the University of Maryland and the Stanford Linear Accelerator Center (SLAC) of the U.S. Department of Energy has developed a closed-loop, artificial-intelligence-based materials exploration and optimization algorithm (CAMEO). The algorithm's self-learning behavior is used to discover complex new materials with specific properties through fewer experiments, helping scientists minimize trial-and-error time and improve the efficiency of new-material development. The research team connected X-ray diffraction equipment to a computer running the CAMEO algorithm and loaded an existing materials database into it. After many iterations of learning, only a small number of routine measurements were needed to find the best material for the targeted properties. Using this method, researchers discovered a new nanocomposite phase-change memory material among 177 candidate materials; the number of test iterations required was reduced to one tenth of the original, and the experiment time was cut from the roughly 90 hours it would otherwise have taken.
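The closed-loop idea behind CAMEO can be sketched as an active-learning cycle: measure a candidate, update a surrogate model from the results so far, then pick the next most informative candidate. The toy sketch below illustrates only the loop structure; the property function, the nearest-neighbour surrogate, and the exploration weight are all invented for illustration, whereas the real system drives X-ray diffraction hardware and uses far more sophisticated models.

```python
# Toy sketch of a closed-loop (active-learning) materials search.
# Everything here is a stand-in: measure_property() fakes a lab
# measurement, and the "surrogate" is a crude nearest-neighbour rule.

def measure_property(x):
    """Pretend measurement: the property peaks at composition x = 0.62."""
    return -(x - 0.62) ** 2

def run_closed_loop(candidates, budget):
    """Measure, update the surrogate, pick the next candidate; repeat."""
    measured = {candidates[0]: measure_property(candidates[0])}
    for _ in range(budget - 1):
        def score(x):
            # Predict from the nearest measured neighbour, plus a bonus
            # for distance so unexplored regions still get sampled.
            nearest = min(measured, key=lambda m: abs(m - x))
            return measured[nearest] + 0.5 * abs(nearest - x)
        next_x = max((c for c in candidates if c not in measured), key=score)
        measured[next_x] = measure_property(next_x)
    best = max(measured, key=measured.get)
    return best, len(measured)

grid = [i / 176 for i in range(177)]   # 177 candidates, as in the article
best, n_measurements = run_closed_loop(grid, budget=18)  # ~1/10 of 177
```

With a budget of 18 measurements (about a tenth of the 177 candidates), the loop homes in on compositions near the toy optimum rather than sweeping the whole grid, which is the efficiency gain the article describes.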
In recent years, the media have been paying increasing attention to adversarial examples: input data, such as images and audio, that have been modified to manipulate the behavior of machine learning algorithms. Stickers pasted on stop signs that cause computer vision systems to mistake them for speed limits; glasses that fool facial recognition systems; turtles that get classified as rifles -- these are just some of the many adversarial examples that have made headlines in the past few years. There is increasing concern about the cybersecurity implications of adversarial examples, especially as machine learning systems become an important component of many applications we use. AI researchers and security experts are engaging in various efforts to educate the public about adversarial attacks and to create more robust machine learning systems. Among these efforts is adversarial.js, an interactive JavaScript tool that demonstrates adversarial attacks directly in the browser.
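The mechanism behind attacks like the stop-sign stickers is easiest to see in the fast gradient sign method (FGSM), one of the simplest ways to craft an adversarial example: nudge every input feature a small step in the direction that increases the model's loss. Below is a minimal sketch against a toy logistic-regression "classifier"; the weights, the input, and the step size `eps` are all made up for illustration, whereas real attacks target deep vision models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the direction that increases the loss for y_true."""
    p = sigmoid(w @ x + b)
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # is (p - y) * w for a logistic model.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (all values invented for the sketch).
w = np.linspace(-1.0, 1.0, 20)   # fixed, made-up weights
b = 0.0
x = 0.2 * w                      # an input the model labels as class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)
before = sigmoid(w @ x + b)      # model confidence on the clean input
after = sigmoid(w @ x_adv + b)   # confidence collapses after the nudge
```

Each feature moves by only `eps`, yet the per-feature shifts all push the decision score the same way, so a perturbation too small to matter visually can flip the classification; that accumulation is exactly why stickers and glasses work against vision systems.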
"This end-to-end voice translation system uses Automatic Speech Recognition (ASR), Machine Translation and Text-to-Speech to convert Mandarin to English. It is designed to help armed forces, intelligence agencies and local law enforcement authorities improve communication systems, giving substantial leeway to the Indian defense forces," the company said in its statement. The solution has a wide range of applications, including cross-border intelligence, voice surveillance, monitoring telephone and internet conversations, intercepting radio and satellite communication, and bridging interactions during border meetings and joint exercises. Its unique features include noise reduction, dialect/accent detection, and support for all audio file formats. Speaking on the launch, Ananth Nagaraj, Co-founder & CTO, Gnani.ai, said, "AI-based Speech Recognition technology is a necessity and is quickly making its way into modern warfare. We believe AI has the potential to transform and improve communication systems and will help strengthen the Indian Armed Forces." "Understanding linguistic nuances such as phonemes and dialects is a challenge, especially when it comes to Mandarin."
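A cascade of the kind described here has three stages: recognize Mandarin speech as text, translate the text, then synthesize English audio. The sketch below shows only that pipeline shape; the three stage functions are hypothetical stubs invented for illustration, not Gnani.ai's actual models or APIs, and a real system would call trained models at each step.

```python
# Sketch of a cascaded voice-translation pipeline: ASR -> MT -> TTS.
# All three stage functions are placeholder stubs for illustration.

def recognize_speech(audio_bytes: bytes) -> str:
    """ASR stub: a real model would transcribe Mandarin audio to text."""
    return "你好"  # pretend transcript

def translate(text: str, src: str = "zh", tgt: str = "en") -> str:
    """MT stub: a real model would translate between languages."""
    lookup = {"你好": "hello"}  # toy dictionary standing in for a model
    return lookup.get(text, text)

def synthesize(text: str) -> bytes:
    """TTS stub: a real model would render English speech audio."""
    return text.encode("utf-8")  # placeholder for synthesized audio

def voice_translation_pipeline(audio_bytes: bytes) -> tuple[str, bytes]:
    transcript = recognize_speech(audio_bytes)
    translated = translate(transcript)
    return translated, synthesize(translated)

text, audio = voice_translation_pipeline(b"\x00\x01")  # dummy audio input
```

Features the article lists, such as noise reduction and dialect detection, would slot in before or inside the ASR stage of such a cascade; errors there propagate through translation and synthesis, which is why the CTO singles out phonemes and dialects as the hard part.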
Military and defense organizations using transformative technologies such as artificial intelligence and machine learning can realize tremendous gains and help to maintain advantages over increasingly capable adversaries and competitors. AI can allow autonomous vehicles to go into terrain deemed too dangerous for humans, provide predictive analytics and maintenance to keep large fleets running smoothly and safely, and help to enable autonomous operations in difficult conditions. As the US Department of Defense (DoD) increasingly adopts AI technology in a wide variety of use cases ranging from back-office functions to battlefield operations, there is a realization that, despite the benefits AI can bring, there is also a risk of unintended consequences that could cause significant harm. As a result, the DoD takes the topics of ethics, transparency, and policy very seriously. A few years ago, the DoD created the Joint Artificial Intelligence Center (JAIC) to help figure out how best to move forward with this transformative technology.
J. William Middendorf, who lives in Little Compton, served as Secretary of the Navy during the Ford administration. His recent book is "The Great Nightfall: How We Win the New Cold War." Thirteen days passed in October 1962 while President John F. Kennedy and his advisers perched at the edge of the nuclear abyss, pondering their response to the discovery of Russian missiles in Cuba. Today, a president may not have 13 minutes. Indeed, a president may not be involved at all. "Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."
TEHRAN, Iran (AP) -- Iran's supreme leader on Saturday demanded the "definitive punishment" of those behind the killing of a scientist who led Tehran's disbanded military nuclear program, as the Islamic Republic blamed Israel for a slaying that has raised fears of reignited tensions across the Middle East. After years of being in the shadows, the image of Mohsen Fakhrizadeh suddenly was to be seen everywhere in Iranian media, as his widow spoke on state television and officials publicly demanded revenge on Israel for the scientist's slaying. Israel, long suspected of killing Iranian scientists a decade ago amid earlier tensions over Tehran's nuclear program, has yet to comment on Fakhrizadeh's killing Friday. However, the attack bore the hallmarks of a carefully planned, military-style ambush, the likes of which Israel has been accused of conducting before. The attack has renewed fears of Iran striking back against the U.S., Israel's closest ally in the region, as it did earlier this year when a U.S. drone strike killed a top Iranian general.