Aerospace & Defense

The Machine Learning Toolbox: For Non-Mathematicians, by Dr. Brian Letort (ISBN 9781794302686)


Dr. Daniel "Brian" Letort is a Fellow and Chief Data Scientist at Northrop Grumman Corporation. Over his 18-year tenure he has held roles spanning software engineering, systems engineering, systems architecture, and chief architect. Throughout these roles, his interests have centered on the strategic and forward-thinking use of data. Additionally, Brian serves as an adjunct instructor at Colorado Tech and as a lead faculty member at Southern New Hampshire University.

Deep learning model from Lockheed Martin tackles satellite image analysis


The model, Global Automated Target Recognition (GATR), runs in the cloud, using Maxar Technologies' Geospatial Big Data platform (GBDX) to access Maxar's 100-petabyte satellite imagery library and millions of curated data labels across dozens of categories that expedite the training of deep learning algorithms. Fast GPUs enable GATR to scan a large area very quickly, while deep learning methods automate object recognition and reduce the need for extensive algorithm training. The tool teaches itself what the identifying characteristics of an object or target are, for example, learning how to distinguish between a cargo plane and a military transport jet. The system then scales quickly to scan large areas, such as entire countries. GATR uses common deep learning techniques found in the commercial sector and can identify airplanes, ships, buildings, seaports, etc. "There's more commercial satellite data than ever available today, and up until now, identifying objects has been a largely manual process," says Maria Demaree, vice president and general manager of Lockheed Martin Space Mission Solutions.

Aerospace & Defense Industry to See Greatest Impact from Artificial Intelligence Compared to Other Key Emerging Technologies, Accenture Report Finds


Study underscores the need for reskilling in the sector for future competitiveness. NEW YORK; June 13, 2019 – The aerospace and defense (A&D) industry will be more affected by artificial intelligence (AI) than by any other major emerging technology over the next three years, according to Aerospace & Defense Technology Vision 2019, the annual report from Accenture (NYSE: ACN) that predicts key technology trends likely to redefine business. The study also underscores the growing importance of reskilling programs as a competitive lever. AI, comprising technologies that range from machine learning to natural language processing, enables machines to sense, comprehend, act and learn in order to extend human capabilities. One-third (33%) of A&D executives surveyed cited AI as the technology that will have the greatest impact on their organization over the next three years, more than quantum computing, distributed ledger or extended reality. In fact, two-thirds (67%) of A&D executives said they have either adopted AI within their business or are piloting the technology.

Artificial intelligence needs guardrails


With the recent launch of the "Artificial Intelligence for the American People" website, AI will clearly be an integral part of our future. While some may still wonder, "What can AI do for us?," many more may be asking, "What can AI do to us?" given some recent tragic events. The crashes of the Boeing 737 MAXs and the Uber and Tesla self-driving car fatalities point to AI's unintended consequences and highlight how technologists and users of AI alike have fallen short in building proper guardrails for deploying AI technology. People often think of AI as the panacea that will enable technology to solve our most pressing problems. In that way, AI brings to mind a seeming panacea of an earlier age: aspirin.

BAE Systems Partners with UiPath to Expedite Machine Learning Adoption across the U.S. Defense and Intelligence Communities


BAE Systems is now a technology partner with robotic process automation (RPA) leader UiPath, developing suites of software robots that its customers can use to automate high-volume, repetitive business processes and integrating machine learning capabilities into defense and intelligence community programs. "RPAs fuel machine learning tools by feeding them the high volumes of structured data necessary for them to begin learning and improving automatically, without being programmed," said Don DeSanto, director of strategic partnerships for the BAE Systems Intelligence & Security sector. "Human-machine teaming is the future of technology, and RPAs serve as workforce multipliers that can be designed to automate many common tasks performed in organizations every day."

Researchers develop 'neural lander' to land drones smoothly


The system was created by Caltech's Center for Autonomous Systems and Technologies (CAST) in a collaboration between artificial intelligence (AI) and control experts. The "neural lander" is a learning-based controller that tracks the position and speed of the drone and modifies its landing trajectory and rotor speed accordingly to achieve the smoothest possible landing. "This project has the potential to help drones fly more smoothly and safely, especially in the presence of unpredictable wind gusts, and eat up less battery power as drones can land more quickly," said Soon-Jo Chung, a professor of aerospace at the institute. For many experts developing unmanned aerial vehicles, landing multi-rotor drones smoothly remains a challenge. This is due to the complex turbulence created by the airflow from each rotor bouncing off the ground as the ground grows ever closer during a descent.

Facebook patents high-tech drone that uses kites to stay in the air for long periods of time

Daily Mail - Science & tech

Facebook has patented a high-tech drone that uses a unique apparatus to stay aloft. The filing, titled 'Dual-kite aerial vehicle,' describes an unmanned aerial vehicle that is tethered to two kites, each able to maintain flight at a different altitude. The kites allow the drone to remain in the air for an extended period of time 'while consuming little or no fuel,' according to the patent.

General Dynamic Neural Networks for explainable PID parameter tuning in control engineering: An extensive comparison

Automation, the ability to run processes without human supervision, is one of the most important drivers of increased scalability and productivity. Modern automation largely relies on forms of closed loop control, wherein a controller interacts with a controlled process via actions, based on observations. Despite an increase in the use of machine learning for process control, most deployed controllers are still linear Proportional-Integral-Derivative (PID) controllers. PID controllers perform well on linear and near-linear systems but are not robust enough for more complex processes. As a main contribution of this paper, we examine the utility of extending standard PID controllers with General Dynamic Neural Networks (GDNN); we show that GDNN (neural) PID controllers perform well on a range of control systems and highlight what is needed to make them a stable, scalable, and interpretable option for control. To do so, we provide a comprehensive study using four different benchmark processes. All control environments are evaluated with and without noise as well as with and without disturbances. The neural PID controller performs better than standard PID control in 15 of 16 tasks and better than model-based control in 13 of 16 tasks. As a second contribution of this work, we address the Achilles heel that has so far prevented neural networks from being used in real-world control processes: lack of interpretability. We use bounded-input bounded-output stability analysis to evaluate the parameters suggested by the neural network, thus making them understandable for human engineers. This combination of rigorous evaluation paired with better explainability is an important step towards the acceptance of neural-network-based control approaches for real-world systems. It is furthermore an important step towards explainable and safe applied artificial intelligence.
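The control law being extended in the abstract above is the textbook discrete PID loop. As a minimal sketch (not the paper's GDNN method; the first-order plant and the gain values are arbitrary choices for illustration), the loop below is written so that the gains (Kp, Ki, Kd) could be re-supplied at each step, e.g. by a neural network proposing parameters:

```python
# Minimal PID sketch: drive a hypothetical first-order plant
# dy/dt = (u - y)/tau toward a setpoint. The gains are fixed here,
# but a tuner (such as a neural network) could update them per step.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000, tau=0.5):
    """Return the plant output after running the closed loop."""
    y, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt                  # integral term accumulator
        derivative = (error - prev_error) / dt  # finite-difference derivative
        u = kp * error + ki * integral + kd * derivative  # PID control law
        y += (u - y) / tau * dt                 # first-order plant update
        prev_error = error
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.05)
print(final)  # converges close to the setpoint of 1.0
```

With these gains the closed loop is overdamped and the integral term removes steady-state error, which is why a tuner only has to propose stable gain triples rather than learn the control signal directly.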

SpaceX satellites pose new headache for astronomers

The Japan Times

WASHINGTON - It looked like a scene from a sci-fi blockbuster: An astronomer in the Netherlands captured footage of a train of brightly lit SpaceX satellites ascending through the night sky last week, stunning space enthusiasts across the globe. But the sight has also provoked an outcry among astronomers who say the constellation, which so far consists of 60 broadband-beaming satellites but could one day grow to as many as 12,000, may threaten our view of the cosmos and deal a blow to scientific discovery. The launch was tracked around the world and it soon became clear that the satellites were visible to the naked eye: a new headache for researchers who already have to find workarounds to deal with objects cluttering their images of deep space. "People were making extrapolations that if many of the satellites in these new mega-constellations had that kind of steady brightness, then in 20 years or less, for a good part of the night anywhere in the world, the human eye would see more satellites than stars," said Bill Keel, an astronomer at the University of Alabama. The satellites' brightness has since diminished as their orientation has stabilized and they have continued their ascent to their final orbit at an altitude of 550 kilometers (340 miles).

Deep Reinforcement Learning for Event-Driven Multi-Agent Decision Processes

The incorporation of macro-actions (temporally extended actions) into multi-agent decision problems has the potential to address the curse of dimensionality associated with such decision problems. Since macro-actions last for stochastic durations, multiple agents executing decentralized policies in cooperative environments must act asynchronously. We present an algorithm that modifies generalized advantage estimation for temporally extended actions, allowing a state-of-the-art policy optimization algorithm to optimize policies in Dec-POMDPs in which agents act asynchronously. We show that our algorithm is capable of learning optimal policies in two cooperative domains, one involving real-time bus holding control and one involving wildfire fighting with unmanned aircraft. Our algorithm works by framing problems as "event-driven decision processes," which are scenarios in which the sequence and timing of actions and events are random and governed by an underlying stochastic process. In addition to optimizing policies with continuous state and action spaces, our algorithm also facilitates the use of event-driven simulators, which do not require time to be discretized into time-steps. We demonstrate the benefit of using event-driven simulation in the context of multiple agents taking asynchronous actions. We show that fixed time-step simulation risks obfuscating the sequence in which closely separated events occur, adversely affecting the policies learned. In addition, we show that arbitrarily shrinking the time-step scales poorly with the number of agents.
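The abstract's contrast between event-driven and fixed time-step simulation can be sketched with a toy example (illustrative only; this is not the paper's algorithm or its bus-holding and wildfire domains): two agents finish macro-actions at random times; a priority queue preserves the exact order of completion events, while binning into fixed time-steps can merge closely separated events and lose their sequence.

```python
# Toy event-driven simulation: two agents each execute a chain of
# macro-actions with stochastic durations. Completion events go into
# a min-heap keyed by time, so they are processed in exact order.

import heapq
import random

random.seed(0)

events = []  # min-heap of (completion_time, agent_id)
for agent in range(2):
    t = 0.0
    for _ in range(5):
        t += random.expovariate(1.0)  # stochastic macro-action duration
        heapq.heappush(events, (t, agent))

# Event-driven processing: pop completions in exact chronological order.
event_order = []
while events:
    t, agent = heapq.heappop(events)
    event_order.append((round(t, 3), agent))

# Fixed time-step processing: events falling into the same bin are
# indistinguishable in time, so their true ordering is lost.
dt = 1.0
binned = sorted((int(t // dt), agent) for t, agent in event_order)

print(event_order)
print(binned)
```

Shrinking `dt` recovers the ordering but multiplies the number of simulation steps, which is the poor scaling with agent count that the abstract points out.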