JAXenter: The term 'intelligence' is not easy to understand. What's the best way to explain it, and how can we apply it to machines?

Marisa Tschopp: Human intelligence has been a very controversial topic and has undergone dramatic changes since the beginnings of its formal study in the early 20th century. Intelligence gained importance especially in the educational context, as these "mental abilities" were the best predictors of success in school, and the tests aimed to place students into the right classes. There are various well-elaborated theories that define human intelligence.
People too often forget that IQ tests haven't been around that long. Indeed, such psychological measures are only about a century old. Early versions appeared in France with the work of Alfred Binet and Theodore Simon in 1905. However, these tests didn't become associated with genius until the measure moved from the Sorbonne in Paris to Stanford University in Northern California. There, Professor Lewis M. Terman had it translated from French into English and then standardized it on sufficient numbers of children to create what became known as the Stanford-Binet Intelligence Scale. The original motive behind these tests was diagnostic: to select children at the lower end of the intelligence scale who might need special education to keep up with the school curriculum. But then Terman had a brilliant idea: Why not study a large sample of children who score at the top end of the scale?
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different from humans. In this paper we approach this problem in the following way: We take a number of well-known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
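As a sketch of the kind of formalisation the abstract describes, Legg and Hutter's universal intelligence measure scores an agent by its expected performance across all computable environments, weighted towards simpler environments (the exact notation here is our reconstruction, not a quotation from the paper):

```latex
% Universal intelligence \Upsilon of an agent \pi: the agent's expected
% cumulative reward V_\mu^\pi in each computable environment \mu, summed
% over the set E of such environments and weighted by 2^{-K(\mu)}, where
% K(\mu) is the Kolmogorov complexity of \mu (simpler environments count more).
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Intuitively, an agent is deemed more intelligent the better it performs across a wide range of environments, with simple environments carrying the most weight.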
In this paper, we introduce BINet, a neural network architecture for real-time multi-perspective anomaly detection in business process event logs. BINet is designed to handle both the control flow and the data perspective of a business process. Additionally, we propose a set of heuristics for automatically setting the threshold of an anomaly detection algorithm. We demonstrate that BINet can be used to detect anomalies in event logs not only on the case level but also on the event attribute level. Finally, we demonstrate that a simple set of rules can be used to utilize the output of BINet for anomaly classification. We compare BINet to eight other state-of-the-art anomaly detection algorithms and evaluate their performance on an elaborate data corpus of 29 synthetic and 15 real-life event logs. BINet outperforms all other methods on both the synthetic and the real-life datasets.
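To illustrate the general idea of automatic thresholding over anomaly scores, here is a minimal sketch. It is not BINet's actual heuristic; it assumes some model has already assigned each event an anomaly score in [0, 1], and it picks a threshold from the score distribution itself (mean plus k standard deviations) rather than requiring a hand-tuned value:

```python
# Illustrative sketch only: a simple automatic thresholding heuristic for
# per-event anomaly scores. The scores and the mean + k*sigma rule are
# assumptions for the example, not BINet's method.

from statistics import mean, stdev

def auto_threshold(scores, k=2.0):
    """Set the threshold at the mean of the scores plus k standard deviations."""
    return mean(scores) + k * stdev(scores)

def flag_anomalies(scores, threshold):
    """Return the indices of events whose anomaly score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Toy score sequence: one event stands out clearly from the rest.
scores = [0.05, 0.07, 0.06, 0.04, 0.90, 0.08]
t = auto_threshold(scores)
print(flag_anomalies(scores, t))  # → [4]
```

The appeal of such heuristics is operational: the detector can be deployed on a new event log without a labelled validation set for threshold tuning, since the cut-off adapts to the observed score distribution.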
Transcranial direct current stimulation has been claimed to enhance learning. Is there a common element that binds diverse mental abilities, from language to mental arithmetic? Or do these skills compete for our brains' limited resources? In The Genius Within, Dav...