Control of the air in envisioned large-scale battles against near-peer adversaries will require revolutionary new approaches to airborne mission management, in which decision authority and platform autonomy are dynamically delegated, and functional roles and combat capabilities are assigned, across multiple distributed tiers of platforms and human operators. System capabilities range from traditional airborne battle managers, to manned tactical aviators, to autonomous unmanned aerial systems. Due to the overwhelming complexity, human operators will require the assistance of advanced autonomous decision aids with new mechanisms for operator supervision and management of teams of manned and unmanned systems. In this paper we describe a conceptual distributed mission management approach that employs novel human-automation teaming constructs to address the complexity of envisioned operations in highly contested environments. We then discuss a cognitive engineering approach to designing role- and task-tailored human-machine interfaces between humans and the autonomous systems. We conclude with a discussion of multi-level evaluation approaches for experimentation.
Four articles, published across the March through May issues of Communications, highlight how people are the unique source of the adaptive capacity essential to incident response in modern Internet-facing software systems. While it is reasonable for the software engineering and operations communities to focus on the intricacies of technology, little attention is given to the intricacies of how people do their work. Ultimately, it is human performance that makes modern business-critical systems robust and resilient. As business-critical software systems become more successful, they necessarily increase in complexity. Ironically, this complexity makes these systems inherently messy, so that surprising incidents are part and parcel of the capability to provide services at larger scales and speeds [13].
The capacity to adapt can greatly influence the success of systems that need to compensate for damaged parts, learn how to achieve robust performance in new environments, or exploit novel opportunities that originate from new technological interfaces or emerging markets. Many of the conditions in which technology is required to adapt cannot be anticipated during its design stage, creating a significant challenge for the designer. Inspired by the study of a range of biological systems, we propose that degeneracy, the realization of multiple, functionally versatile components with contextually overlapping functional redundancy, will support adaptation in technologies because it effects pervasive flexibility, evolutionary innovation, and homeostatic robustness. We provide examples of degeneracy in a number of rudimentary living technologies, from military socio-technical systems to swarm robotics, and we present design principles, including protocols, loose regulatory coupling, and functional versatility, that allow degeneracy to arise in both biological and man-made systems.
First, through increased control automation, the human role has shifted from an emphasis on the perceptual-motor skills needed for manual control to the cognitive skills (e.g., monitoring, planning, fault management) needed for supervisory activities. Second, developments in computational technologies (i.e., heuristic programming techniques) have greatly increased the potential for automating decisions and have resulted in environments where humans interact with another, artificial, cognitive system. People are obviously cognitive systems. Developments in computational technology have focused on tool building: how to build better-performing machines. But tool use involves more.
This article explores the implications of one type of cognitive technology, techniques and concepts to develop joint human-machine cognitive systems, for the application of computational technology by examining the joint cognitive system implicit in a hypothetical computer consultant that outputs some form of problem solution. This analysis reveals some of the problems that can occur in cognitive system design, e.g., machine control of the interaction, the danger of a responsibility-authority double-bind, and the potentially difficult and unsupported task of filtering poor machine solutions. The result is a challenge for applied cognitive psychology to provide models, data, and techniques to help designers build an effective combination of the human and machine elements of a joint cognitive system.