Many animal species use vocal communication [1], but the chaotic acoustics of these vocalizations often complicate their cataloging into clearly divided types and contexts [2]. Understanding the information encapsulated in animal vocalizations is central to the study of sociality, communication, and language evolution. Yet, in research on nonhuman animals, the purpose and meaning of a vocal message often remain obscure. Researchers of animal communication, seeking homology to semantics, may relate behavioral observations to acoustic measurements and thus reveal some of the information content of vocal expressions. Indeed, several studies have described vocalizations as functionally referential, i.e., signals that are both specific to a certain context and elicit an appropriate response from a receiver [3,4,5].
Large bioacoustic archives of wild-animal recordings are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge is that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, making it extremely difficult to manually retrieve sufficient vocalizations for further analysis; this is particularly limiting for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository, the Orchive, comprising roughly 19,000 hours of killer whale underwater recordings. Automated segmentation of the entire Orchive (about 2.2 years of audio) took approximately 8 days.
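The segmentation step described above can be pictured as sliding a fixed-size window over each recording, scoring every window with a signal-versus-noise classifier, and merging positive windows into contiguous detections. The sketch below is only a minimal illustration of that sliding-window idea, not the ORCA-SPOT implementation: the function names and parameters are hypothetical, and a toy energy-based scorer stands in for the trained deep neural network.

```python
import numpy as np

def segment_recording(audio, sr, clf, win_s=2.0, hop_s=1.0, threshold=0.5):
    """Slide a window over `audio`, keep windows the classifier scores as
    'signal' rather than noise, and merge overlaps into (start, end) times
    in seconds. `clf` maps a 1-D sample array to a score in [0, 1]."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    detections = []
    for start in range(0, max(len(audio) - win + 1, 1), hop):
        if clf(audio[start:start + win]) >= threshold:
            detections.append((start / sr, (start + win) / sr))
    # Merge overlapping or touching detections into contiguous segments.
    merged = []
    for s, e in detections:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def toy_classifier(window):
    """Stand-in for a trained network: score 1 if RMS energy is high."""
    return float(np.sqrt(np.mean(window ** 2)) > 0.1)

# Synthetic 10-second recording with a tone between seconds 3 and 5.
sr = 1000
audio = np.zeros(10 * sr)
audio[3 * sr:5 * sr] = 0.5 * np.sin(2 * np.pi * 7.0 * np.arange(2 * sr) / sr)
print(segment_recording(audio, sr, toy_classifier))  # [(2.0, 6.0)]
```

Because adjacent windows overlap the call, the raw detections bracket the true event; in practice the window length, hop, and decision threshold would be tuned against labeled signal and noise segments like those used to train the network.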
Examples include creating systems that respond in an emotionally aware way to the user, systems that modify their behaviour in response to affective cues, and so on. Further examples include how emotion might be added to virtual environments, and the effect of affect on computer-mediated learning environments or collaborative computer games. Examples also include attempts to add emotions to AI systems, attempts to simulate emotions in computers, systems inspired by the physiological basis of emotion, studies that aim to understand the role affective processes play in reasoning, and computational slants on theories of emotion, in particular studies that use computer simulation to understand affective processes.
The use of information and communication technologies (ICTs) by individuals is a long-standing concern for researchers and practitioners. ICT use starts with the inclusion of people in the digital society and progresses toward the equalization of their capabilities and opportunities in technology-mediated information and communication processes. Approaches to inclusion and equality have become increasingly sophisticated through developments in human-centered computing and human-computer interaction that replace the old focus on people's mere access to ICTs. At the same time, a third, more empowering moment of ICT use is attracting scholars, professionals and, hopefully, public agents: the effectiveness with which people use the technology. In this article, I discuss inclusion, equality and effectiveness under the concept of one's digital effectiveness.