AI can predict autism through babies' brain scans

Oxford Winter Intelligence - Abstract: In this paper we will address an important issue of reward-function integrity in artificially intelligent systems. Throughout the paper, we will analyze historical examples of wireheading in man and machine and evaluate a number of approaches proposed for dealing with reward-function corruption. While simplistic optimizers driven to maximize a proxy measure for a particular goal will always be subject to corruption, sufficiently rational self-improving machines are believed by many to be safe from wireheading. The claim is often made that such machines will know that their true goals differ from the proxy measures used to represent progress toward those goals in their fitness functions, and will choose not to modify their reward functions in ways that do not improve their chances of achieving the true goal. Likewise, such advanced machines will supposedly choose to avoid corrupting other system components, such as input sensors, memory, internal and external communication channels, CPU architecture, and software modules.
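The vulnerability of simplistic proxy optimizers can be sketched with a toy simulation. All names and reward values below are hypothetical illustrations, not taken from the paper: a greedy agent chooses between a "work" action that advances the true goal and a "wirehead" action that tampers with its own reward signal, yielding high proxy reward but zero real progress.

```python
def run_episode(steps=10):
    """Greedy proxy-maximizing agent; a minimal wireheading sketch."""
    true_progress = 0   # progress toward the actual goal
    proxy_reward = 0    # the quantity the agent actually optimizes

    # Hypothetical proxy payoffs: 'wirehead' corrupts the reward sensor,
    # so it pays more proxy reward than honest work does.
    action_rewards = {"work": 1, "wirehead": 10}

    for _ in range(steps):
        # A simplistic optimizer picks whichever action maximizes proxy reward.
        action = max(action_rewards, key=action_rewards.get)
        proxy_reward += action_rewards[action]
        if action == "work":
            true_progress += 1  # only honest work advances the true goal
    return true_progress, proxy_reward

progress, reward = run_episode()
# The agent ends with high proxy reward and zero true progress.
```

Under these assumed payoffs the agent always wireheads, illustrating why a proxy measure alone cannot guarantee progress toward the goal it is meant to represent.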