Everyday bat vocalizations contain information about emitter, addressee, context, and behavior

#artificialintelligence

Many animal species use vocal communication [1], but the chaotic acoustics of these vocalizations often complicate their cataloging into clearly divided types and contexts [2]. Understanding the information encapsulated in animal vocalizations is central to the study of sociality, communication, and language evolution. Yet, in research on nonhuman animals, the purpose and meaning of the vocal message often remain obscure. Researchers of animal communication, seeking homology to semantics, may relate behavioral observations to acoustic measurements and thus reveal some of the information content of vocal expressions. Indeed, several studies have described vocalizations as functionally referential, i.e., signals that are both specific to a certain context and elicit an appropriate response from a receiver [3,4,5].
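To make the kind of analysis described above concrete, the sketch below trains a classifier to predict behavioral context from per-call acoustic measurements. The feature names, context labels, and data are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: relating acoustic measurements to behavioral context.
# Features, labels, and data are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-call acoustic measurements:
# [duration_s, peak_freq_khz, bandwidth_khz, spectral_entropy]
n_calls = 500
X = rng.normal(size=(n_calls, 4))
# Synthetic context labels, e.g. 0=feeding, 1=mating, 2=aggression
y = rng.integers(0, 3, size=n_calls)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
# With random labels, accuracy hovers near chance (~0.33). On real data,
# above-chance accuracy is evidence that the calls carry contextual information.
```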


ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning

#artificialintelligence

Large bioacoustic archives of wild animals are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge remains that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis; this is particularly important for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository, the Orchive, comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive (about 2.2 years of audio) took approximately 8 days.
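The abstract does not describe the network itself, so the following sketch is only an assumed illustration of the signal-versus-noise setup it mentions: a small convolutional classifier over fixed-size spectrogram segments, not the actual ORCA-SPOT architecture.

```python
# Minimal sketch of the call-vs-noise classification idea described above.
# The architecture and input shapes are assumptions for illustration; they
# are not the actual ORCA-SPOT network.
import torch
import torch.nn as nn

class CallDetector(nn.Module):
    """Binary classifier: killer whale call vs. environmental noise."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # 2 classes: call / noise
        )

    def forward(self, x):  # x: (batch, 1, 128, 128) spectrogram segments
        return self.head(self.features(x))

model = CallDetector()
spectrograms = torch.randn(8, 1, 128, 128)  # stand-in batch of spectrograms
labels = torch.randint(0, 2, (8,))          # 1 = call, 0 = noise
loss = nn.CrossEntropyLoss()(model(spectrograms), labels)
loss.backward()
print(loss.item())
```

A detector like this is typically slid over long recordings in fixed-length windows, which is what makes automated segmentation of an archive the size of the Orchive tractable.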


AISB'01 Symposium - Emotion, cognition, and affective computing.

AITopics Original Links

Examples include creating systems which respond in an emotionally aware way to the user, systems which modify their behaviour in response to affective cues, et cetera. Examples include how emotion might be added to virtual environments, and the effect of affect on computer-mediated learning environments or collaborative computer games. Examples include attempts to add emotions to AI systems, attempts to simulate emotions in computers, systems that are inspired by the physiological basis of emotion, studies which aim to understand the role that affective processes play in reasoning, and computational slants on theories of emotion, in particular studies which use computer simulation to understand affective processes.



Wearable Tetherless Computer-Mediated Reality: WearCam as a wearable face-recognizer, and other applications for the disabled

AAAI Conferences

In this paper, WearCam is presented as a prosthetic device. In particular, two example applications are described: the 'personal visual assistant' and the 'visual memory prosthetic'. The 'personal visual assistant' embodies a spatial visual filter (Mann 1994a) that reconfigures the human visual system, providing a coordinate transformation (a remapping of spatial coordinates). Such coordinate transformations, it is hoped, might someday be of use to the partially sighted.
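As a hypothetical illustration of such a remapping (not Mann's actual filter), the sketch below resamples an image through an arbitrary coordinate mapping; the example mapping magnifies the center of the frame, the sort of transformation that might assist a partially sighted user.

```python
# Hypothetical illustration of a coordinate transformation (spatial remap)
# over an image. This is not the paper's actual filter, only the general
# idea of resampling an image through a coordinate mapping.
import numpy as np

def remap(image, fy, fx):
    """Output pixel (r, c) takes its value from input pixel (fy(r,c), fx(r,c))."""
    h, w = image.shape[:2]
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(fy(rows, cols).astype(int), 0, h - 1)
    src_c = np.clip(fx(rows, cols).astype(int), 0, w - 1)
    return image[src_r, src_c]

# Example mapping: 2x magnification about the image center, enlarging the
# central field of view within the same frame.
img = (np.arange(10000) % 256).astype(np.uint8).reshape(100, 100)
cy, cx = 50, 50
zoomed = remap(img,
               fy=lambda r, c: cy + (r - cy) / 2.0,
               fx=lambda r, c: cx + (c - cx) / 2.0)
print(zoomed.shape)  # (100, 100): same frame, center magnified 2x
```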