
Collaborating Authors

 Michie


The Guardian view on bridging human and machine learning: it's all in the game

#artificialintelligence

Last week an artificial intelligence – called NooK – beat eight world champion players at bridge. That algorithms can outwit humans might not seem newsworthy. IBM's Deep Blue beat world chess champion Garry Kasparov in 1997. In 2016, Google's AlphaGo defeated a Go grandmaster. A year later the AI Libratus saw off four poker stars.


Report

AI Magazine

Subsequent tests of our data-induced flying model have broadly confirmed the reported results but have also identified a lack of robustness. We had underestimated the latter and now regard our report (Michie and Camacho 1994) as being, by omission, potentially misleading. Successes and shortcomings of behavioral cloning (learning by imitation) have been reviewed by Urbancic and Bratko (1994). They discuss the following problem domains: pole-and-cart balancing (Michie, Bain, and Michie 1990; Chambers and Michie 1969), flight-simulator control (Sammut et al. 1992), telephone-line scheduling (Kibira 1993), and crane-simulator control (Urbancic and Bratko 1994). Conclusions are as follows: First, successful clones have been induced using standard ML techniques in all four domains.
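The idea behind such clones can be sketched in a few lines. This is a minimal, hypothetical illustration (invented data, not the induction method of the report): a "clone" imitates a demonstrator by acting as it did in the most similar logged state, with a 1-nearest-neighbour rule standing in for the standard ML techniques the review discusses.

```python
import math

# Hypothetical logged traces of a human pole-and-cart controller:
# (cart position, pole angle) -> push direction.
traces = [
    ((-0.4,  0.10), "right"),
    (( 0.3, -0.08), "left"),
    (( 0.0,  0.02), "right"),
    (( 0.1, -0.12), "left"),
]

def clone_action(state):
    """Imitate the demonstrator: act as in the most similar logged state."""
    def dist(trace):
        return math.dist(trace[0], state)
    return min(traces, key=dist)[1]

print(clone_action((-0.35, 0.09)))  # acts like the nearest demonstration
```

A clone induced this way can reproduce the demonstrator's behaviour near the logged states, which is exactly where the lack of robustness noted above shows up: far from the traces, the imitation has nothing to generalize from.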


Menace: the Machine Educable Noughts And Crosses Engine - Chalkdust

#artificialintelligence

The use of machine learning to teach computers to play board games has had a lot of interest lately. Big companies such as Facebook and Google have both made recent breakthroughs in teaching AI the complex board game, Go. However, people have been using machine learning to teach computers board games since the mid-twentieth century. In the early 1960s Donald Michie, a British computer scientist who helped break the German Tunny code during the Second World War, came up with Menace (the Machine Educable Noughts And Crosses Engine). Menace uses 304 matchboxes all filled with coloured beads in order to learn to play noughts and crosses.
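Menace's mechanism translates almost directly into code. The sketch below is illustrative only (the bead counts and reward rules are simplified, not Michie's exact design): each "matchbox" is a board state holding beads, one per legal move; a move is drawn in proportion to its bead count, and after the game wins add beads while losses remove them.

```python
import random

# state (tuple of 9 cells) -> {move: bead count}, one "matchbox" per state
boxes = {}

def choose(state, legal_moves):
    """Draw a move with probability proportional to its bead count."""
    box = boxes.setdefault(state, {m: 3 for m in legal_moves})
    beads = [m for m, n in box.items() for _ in range(n)]
    return random.choice(beads) if beads else random.choice(legal_moves)

def reinforce(history, won):
    """After a game, reward or punish every (state, move) that was drawn."""
    for state, move in history:
        box = boxes[state]
        box[move] = max(1, box[move] + (3 if won else -1))
```

Over many games the beads for losing moves dwindle and those for winning moves accumulate, so the machine's play drifts toward good strategy without ever being told the rules of good play.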


Professor Donald Michie - Telegraph

AITopics Original Links

Donald Michie was born in Rangoon on November 11 1923, the son of James Michie and the former Marjorie Crain. From Rugby he won a classical scholarship to Balliol, becoming - according to wartime colleagues - "curator of the Balliol Book of Bawdy Verse". In 1942 he was recruited to Bletchley Park. He was put into Hut F, working to crack the Wehrmacht's "Tunny" machine, which encoded material more sensitive than that carried by the now celebrated "Enigma". The team's success gave the Allies access for the first time to German army situation reports in the run-up to D-Day, with invaluable insights into troop dispositions in France.


Donald Michie, 83, Theorist of Artificial Intelligence, Dies

AITopics Original Links

Donald Michie, a versatile British scientist and early theorist of artificial intelligence who helped develop a "smart" industrial robot and then applied the technology to diverse fields, died on July 7 in Britain. Dr. Michie (pronounced MICK-ee) died in a car accident near London along with his former wife, Anne McLaren, a biologist and pioneering researcher in the field of reproduction. In the early 1970s, in work that received international attention and helped make Britain a force in advancing artificial intelligence, Dr. Michie led a team that produced "Freddy," a computer-directed robotic arm that could choose and assemble parts from a jumbled and potentially confusing array. To demonstrate Freddy's capabilities, Dr. Michie programmed the machine to put together the parts of a toy truck. Nils J. Nilsson, an emeritus professor of engineering at Stanford University and a former chairman of the department of computer science there, said the machine was "ahead of its time" and impressed researchers at Stanford and elsewhere as "one of the first automatic assembly systems in the world."



8 A Theory of Advice (Donald Michie)

AI Classics

Machine intelligence problems are sometimes defined as those problems which (i) computers can't yet do, and (ii) humans can. We shall further consider how much "knowledge" about a finite mathematical function can, on certain assumptions, be credited to a computer program. Although our approach is quite general, we are really only interested in programs which evaluate "semi-hard" functions, believing that the evaluation of such functions constitutes the defining aspiration of machine intelligence work. If a function is less hard than "semi-hard," then we can evaluate it by pure algorithm (trading space for time) or by pure look-up (making the opposite trade), with no need to talk of knowledge, advice, machine intelligence, or any of those things. We call such problems "standard." If, however, the function is "semi-hard," then we will be driven to construct some form of artful compromise between the two representations: without such a compromise the function will not be evaluable within practical resource limits. If the function is harder than "semi-hard," i.e. is actually "hard," then no amount of compromise can ever make feasible its evaluation by any terrestrial device.

"Hard" problems

In a recent lecture Knuth (1976) called attention to the notion of a "hard" problem as one for which solutions are computable in the theoretical sense but infeasible in practice. For illustration he referred to the task, studied by Meyer and Stockmeyer, of determining the truth-values of statements about whole numbers expressed in a restricted logical symbolism, for example ∀x ∀y(y... But is the problem nevertheless in some important sense "hard?"
Meyer and Stockmeyer showed that if we allow input expressions to be as long as only 617 symbols then the answer is "yes," reckoning "hardness" as follows: find an evaluation algorithm expressed as an electrical network of gates and registers such as to minimise the number of components; if this number exceeds the number of elementary particles in the observable Universe (say, 10^125), then the problem is "hard."
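The algorithm/look-up trade and the "artful compromise" between them can be seen in miniature. This sketch is purely illustrative (a toy function, nowhere near "semi-hard"): pure algorithm recomputes on every call, pure look-up precomputes a full table, and the compromise caches only the values actually demanded.

```python
from functools import lru_cache

def fib_algorithm(n):
    """Pure algorithm: no storage at all, exponential recomputation."""
    return n if n < 2 else fib_algorithm(n - 1) + fib_algorithm(n - 2)

# Pure look-up: pay all the space up front, then answer instantly.
TABLE = {0: 0, 1: 1}
for i in range(2, 100):
    TABLE[i] = TABLE[i - 1] + TABLE[i - 2]

@lru_cache(maxsize=None)
def fib_memo(n):
    """The compromise: compute on demand, then remember the result."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_algorithm(20) == TABLE[20] == fib_memo(20) == 6765
```

For a "semi-hard" function, the chapter's point is that only some such compromise keeps evaluation within practical resource limits; for a "hard" one, no compromise suffices.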


7 Dynamic Probability, Computer Chess, and the Measurement of Knowledge* I. J. Good

AI Classics

Virginia Polytechnic Institute and State University, Blacksburg, Virginia

Philosophers and "pseudognosticians" (the artificial intelligentsia) are coming more and more to recognize that they share common ground and that each can learn from the other. This has been generally recognized for many years as far as symbolic logic is concerned, but less so in relation to the foundations of probability. In this essay I hope to convince the pseudognostician that the philosophy of probability is relevant to his work. One aspect that I could have discussed would have been probabilistic causality (Good, 1961/62), in view of Hans Berliner's forthcoming paper "Inferring causality in tactical analysis", but my topic here will be mainly dynamic probability. The close relationship between philosophy and pseudognostics is easily understood, for philosophers often try to express as clearly as they can how people make judgments. To parody Wittgenstein, what can be said at all can be said clearly, and it can be programmed.

A paradox might seem to arise. Formal systems, such as those used in mathematics, logic, and computer programming, can lead to deductions outside the system only when there is an input of assumptions. For example, no probability can be numerically inferred from the axioms of probability unless some probabilities are assumed without using the axioms: ex nihilo nihil fit. This leads to the main controversies in the foundations of statistics: the controversies of whether intuitive probability should be used in statistics and, if so, whether it should be logical probability (credibility) or subjective (personal).
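Good's point that the axioms alone yield nothing numerical can be shown with a one-step calculation. The numbers here are invented for illustration: only after a prior P(H) and the likelihoods are assumed does Bayes' rule produce a posterior.

```python
# Assumed inputs -- none of these follow from the axioms of probability.
prior_h = 0.3           # assumed prior probability of hypothesis H
p_e_given_h = 0.8       # likelihood of the evidence if H holds
p_e_given_not_h = 0.2   # likelihood of the evidence if H does not hold

# Only now do the axioms (total probability, Bayes' rule) give a number.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(round(posterior_h, 3))  # the posterior, given the assumed inputs
```

Change the assumed prior and the posterior changes with it, which is precisely the ex nihilo nihil fit point: the machinery transforms probabilities, it does not create them.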


NEW DEVELOPMENTS OF THE GRAPH TRAVERSER

AI Classics

INTRODUCTION This paper describes some recent experiments with a computer program which is capable of useful, or at least interesting, application to a number of different problems. The program, the Graph Traverser, has been described in detail in a previous paper (Doran & Michie 1966). However, we shall here need to view the basic algorithm from a rather more general standpoint, corresponding to an actual extension in the flexibility of the program, so that a restatement of what the program can do is desirable. The Graph Traverser, which is written in Elliott 4100 Algol, is potentially applicable to problem situations which can be idealised in the following way (see for comparison Newell and Ernst 1965). There is given a set of 'states', which are connected by a set of 'transformations', or, as I shall call them, 'operators'. An operator will be applicable to some, but not necessarily all, of the states, and two distinct operators applied to either the same or distinct states may each give the same state as end-product. Most of the concepts to be used here which are related to the use of operators were discussed in a paper by Michie (1967). This type of problem situation is represented in Figure 1 by a graph (in the mathematical sense) to which have been added various labels. In this representation states correspond to nodes of the graph, and operators to labelled arcs (a, b, c in this quite arbitrary case). Notice that associated with each node (or state) is a triad of integers.
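The idealisation just described can be sketched as a best-first search in the spirit of the Graph Traverser. The tiny graph and the evaluation values below are invented for illustration (this is not the Elliott 4100 Algol program): states are nodes, operators generate successors, and an evaluation function steers the search toward promising states.

```python
import heapq

# state -> list of (operator label, successor state); invented example graph
graph = {
    "S": [("a", "B"), ("b", "C")],
    "B": [("c", "G")],
    "C": [("a", "G")],
    "G": [],
}
# evaluation function over states: lower values look more promising
evaluation = {"S": 3, "B": 1, "C": 2, "G": 0}

def traverse(start, goal):
    """Best-first search: always develop the most promising known state."""
    frontier = [(evaluation[start], start, [])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                      # sequence of operator labels
        if state in seen:
            continue
        seen.add(state)
        for op, nxt in graph[state]:
            heapq.heappush(frontier, (evaluation[nxt], nxt, path + [op]))
    return None

print(traverse("S", "G"))  # → ['a', 'c']
```

Note that, as in the paper's idealisation, two distinct operators (b then a, or a then c) can reach the same state G; the evaluation function decides which route is developed first.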


MACHINE INTELLIGENCE 13

AI Classics

OXFORD 1994. Oxford University Press, Walton Street, Oxford OX2 6DP. Oxford New York Athens Auckland Bangkok Bombay Calcutta Cape Town Dar es Salaam Delhi Florence Hong Kong Istanbul Karachi Kuala Lumpur Madras Madrid Melbourne Mexico City Nairobi Paris Singapore Taipei Tokyo Toronto, and associated companies in Berlin and Ibadan. Published in the United States by Oxford University Press Inc., New York. © K. Furukawa, D. Michie, and S. Muggleton, 1994. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press. Enquiries concerning reproduction outside those terms and in other countries should be sent to the Rights Department, Oxford University Press, at the address above. This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, re-sold, hired out, or otherwise circulated without the publisher's prior consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

The founder of modern computational logic, J.A. Robinson, opens this volume with a chapter on the field's great forefathers John von Neumann and Alan Turing.