For the past several years, research on robot problem-solving methods has centered on what may one day be called 'simple' plans: linear sequences of actions to be performed by single robots to achieve single goals in static environments. This process of forming new subgoals and new states continues until a state is produced in which the original goal is provable; the sequence of operators producing that state is the desired solution. In the case of a single goal wff, the objective is quite simple: achieve the goal (possibly while minimizing some combination of planning and execution cost). The objective of the system is to achieve the single positive goal (perhaps while minimizing search and execution costs) while absolutely avoiding any state satisfying the negative goal.
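The operator-sequence search described above can be sketched as follows. This is a minimal, hypothetical STRIPS-like toy, not the system's actual representation: states are sets of facts, each operator has a precondition set, an add-list, and a delete-list, and all names ("pickup", "at-robot-A", etc.) are invented for illustration.

```python
# A minimal sketch of linear-plan search over a toy STRIPS-like world.
# All operator and fact names here are invented for illustration.
from collections import deque

def plan(start, goal, operators):
    """Breadth-first search for an operator sequence that achieves goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal fact is provable
            return steps                       # the desired solution
        for name, (pre, add, dele) in operators.items():
            if pre <= state:
                nxt = (state - dele) | add     # apply the operator
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan exists

# Toy domain: a robot picks up a box and then moves to room B.
ops = {
    "pickup": (frozenset({"at-box-A", "at-robot-A"}),
               frozenset({"holding-box"}), frozenset({"at-box-A"})),
    "go-B":   (frozenset({"at-robot-A"}),
               frozenset({"at-robot-B"}), frozenset({"at-robot-A"})),
}
print(plan({"at-robot-A", "at-box-A"},
           frozenset({"holding-box", "at-robot-B"}), ops))  # ['pickup', 'go-B']
```

Breadth-first order guarantees the shortest linear sequence; a real planner would also weigh execution cost, as the abstract notes.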
The Inductive Net is made up of a set of horizontal lines (input lines) crossing at right angles a set of vertical lines (output lines), with binary switches placed at the intersections so formed. Each possible feature value has one horizontal line and one vertical line identified with it. A pattern is stored in the Net by exciting the horizontal lines and the vertical lines identified with its feature values, and turning on each switch which receives excitation along both of the lines on which it is placed. We do this by giving the Inductive Net additional horizontal lines, each of which has a mask placed on the front of it which causes the line to fire when a particular combination of feature values occurs together in the input pattern.
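The switch-grid storage scheme can be sketched concretely. This is my own rendering under simplifying assumptions: feature values are indexed 0..n-1, and the line crossings become an n-by-n boolean matrix.

```python
# A minimal sketch of the Inductive Net's switch grid, assuming feature
# values are small integers; the matrix rendering is an interpretation.
class InductiveNet:
    def __init__(self, n):
        # switch[h][v] models the binary switch where horizontal line h
        # crosses vertical line v.
        self.switch = [[False] * n for _ in range(n)]

    def store(self, feature_values):
        # Excite the horizontal and vertical line of each feature value;
        # a switch turns on when excited along both of its lines.
        for h in feature_values:
            for v in feature_values:
                self.switch[h][v] = True

    def recalls(self, feature_values):
        # The net accepts a pattern only if every implied switch is on.
        return all(self.switch[h][v]
                   for h in feature_values for v in feature_values)

net = InductiveNet(4)
net.store([0, 2])
print(net.recalls([0, 2]))   # True: this pattern was stored
print(net.recalls([1, 3]))   # False: no switches on for these lines
```

The extra masked horizontal lines mentioned in the abstract would correspond to adding rows that fire on conjunctions of feature values rather than single values.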
Abstract: PLANNER is a formalism for proving theorems and manipulating models in a robot. The formalism is built out of a number of problem-solving primitives together with a hierarchical multiprocess backtrack control structure. Under backtrack control, the hierarchy of activations of previously executed functions is maintained so that it is possible to revert to any previous state. In addition, PLANNER uses multiprocessing so that there can be multiple loci of control over the problem-solving.
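The backtrack control structure can be illustrated with a toy model. This is not PLANNER itself, only a minimal sketch of the idea: each activation frame records its state together with the alternatives not yet tried, so failure reverts control to the most recent frame that still has options.

```python
# A minimal sketch of failure-driven backtracking (a toy model, not
# PLANNER's actual primitives): the stack is the activation hierarchy,
# and popping a frame reverts to a previous state.
def backtrack_search(state, alternatives, success, expand):
    stack = [(state, list(alternatives(state)))]   # activation hierarchy
    while stack:
        state, pending = stack[-1]
        if success(state):
            return state
        if not pending:
            stack.pop()                            # fail: revert to caller
            continue
        choice = pending.pop(0)                    # try the next alternative
        nxt = expand(state, choice)
        stack.append((nxt, list(alternatives(nxt))))
    return None

# Toy use: assemble the string "ab" one letter at a time, backtracking
# out of dead ends such as "aa".
result = backtrack_search(
    "",
    alternatives=lambda s: "ab" if len(s) < 2 else "",
    success=lambda s: s == "ab",
    expand=lambda s, c: s + c,
)
print(result)  # 'ab'
```

PLANNER's multiprocessing goes further, allowing several such loci of control to run at once; this single-stack sketch shows only the sequential case.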
We describe the analysis of visual scenes consisting of black-on-white drawings formed with curved lines, depicting familiar objects and forms: houses, trees, persons, and so on; for instance, drawings found in coloring books. The analysis of these line drawings is an instance of 'the context problem', which can be stated as: given that a set (a scene) is formed by components that locally (by their shape) are ambiguous, because each shape allows a component to have one of several possible values (a circle can be sun, ball, eye, hole) or meanings, can we make use of context information stated in the form of models, in order to single out for each component a value in such a manner that the whole set (scene) is consistent or makes global sense? This paper proposes a way to solve the context problem in the paradigm of coloring-book drawings. By analyzing each component, we come to several possible interpretations of that component, and further disambiguation is possible only by using global information (information derived from several components, or from the interconnection or interrelation between two or more components), under the assumption that the scene as a whole 'makes global sense' or is 'consistent'.
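The disambiguation step can be sketched as a small constraint-satisfaction search. The components, readings, and model below are invented for illustration; the paper's actual models are richer, but the shape of the computation is the same: enumerate local readings, keep only globally consistent assignments.

```python
# A minimal sketch of context disambiguation: each component has a set
# of local readings, and a model accepts only globally sensible
# combinations. Components and readings here are invented.
from itertools import product

def consistent_labelings(readings, consistent):
    """Return every global assignment the model accepts."""
    names = list(readings)
    return [dict(zip(names, combo))
            for combo in product(*(readings[n] for n in names))
            if consistent(dict(zip(names, combo)))]

# A circle is locally ambiguous: sun, ball, eye, hole ...
readings = {"circle": ["sun", "ball", "eye", "hole"],
            "zigzag": ["rays", "water"]}

# Toy model: rays make sense only around a sun; water never surrounds one.
def model(a):
    return (a["zigzag"] == "rays") == (a["circle"] == "sun")

labelings = consistent_labelings(readings, model)
print(labelings[0])  # {'circle': 'sun', 'zigzag': 'rays'}
```

The locally four-way ambiguous circle is pinned down to "sun" the moment the neighbouring zigzag is read as "rays"; that is exactly the global-sense constraint at work.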
The outline is drawn of a hypothetical machine to recognise speech, comprising a basic recogniser working on short segments of acoustic waveform only, onto which may be added further structures that use knowledge of speaker characteristics, speech statistics, syntax rules, and semantics, in order to improve the recognition performance. Suppose one tried to implement a recogniser by telling the machine to store every new pattern it encountered together with a label telling it what word or words the pattern represented, with the intention of recognising an arbitrarily large vocabulary for an arbitrarily large proportion of the total population of speakers. The fifth section will describe briefly some work being carried out at Standard Telecommunication Laboratories towards implementing a real machine, and the final section will contain conclusions. If, as is highly probable for ASR, the speech is transmitted through a telephone link, the problems of noise and distortion can be quite severe and include noises due to handling the handset, clicks and hisses in the speech band, limitation of the bandwidth to the range between 300 and 3400 cps, and pre-emphasis of the signal.
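The naive store-every-pattern scheme in the thought experiment above can be sketched in a few lines; the feature tuples below are stand-ins for acoustic patterns, invented for illustration. Writing it out makes the scheme's weakness visible: only an exact repeat of a stored pattern is ever recognised.

```python
# A minimal sketch of the "store every pattern with its label"
# recogniser; patterns here are stand-in tuples, not real waveforms.
class TableRecogniser:
    def __init__(self):
        self.table = {}

    def learn(self, pattern, word):
        self.table.setdefault(pattern, set()).add(word)

    def recognise(self, pattern):
        # Only an exact repeat of a stored pattern is recognised --
        # which is why the scheme cannot scale to arbitrary speakers.
        return self.table.get(pattern, set())

r = TableRecogniser()
r.learn((1, 4, 2), "yes")
r.learn((3, 0, 5), "no")
print(r.recognise((1, 4, 2)))   # {'yes'}
print(r.recognise((1, 4, 3)))   # set(): unseen pattern fails
```

Since no two utterances of a word yield identical waveforms, the table would have to grow without bound; hence the layered knowledge sources (speaker, statistics, syntax, semantics) proposed instead.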
The machines (usually digital computers) will classify simple shapes and printed letters and digits represented (by means of a suitable television scanner) as a matrix of 1s and 0s. Psychologists concerned with analysing complex behaviour often have to select an appropriate level of description. Let us confine our attention to three levels: words, phrases, sentences. Figure 1 shows a set of rules subdivided into numbered groups (1, 2, 3, ..., 6).
The program, which is written in the LISP language, uses heuristic methods to calculate, from relatively primitive representations of the input figures, descriptions of these figures in terms of subfigures and relations among them. It then utilizes these descriptions to find an appropriate rule and to apply it, modified as necessary, to arrive at an answer. The program solved a large number of such problems, including many taken directly from college-level intelligence tests. The novel organization of the program in terms of figure descriptions, which are analyzed to find transformation rules, and rule descriptions, which are analyzed to find 'common generalizations' of pairs of transformation rules, has implications for the design of problem-solving programs and for machine learning.
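The description-and-rule organization can be sketched at toy scale. This drastically simplifies the program: figures here are bare sets of relation labels, a rule is just the relations removed and added between figure A and figure B, and the labels and candidates are invented. Real ANALOGY-style matching and rule generalization are far richer.

```python
# A minimal sketch of the figure-description / transformation-rule idea,
# with figures reduced to sets of relation labels (all invented).
def rule(a, b):
    """Describe the A -> B transformation as (relations removed, added)."""
    return (a - b, b - a)

def apply_rule(c, removed, added):
    """Apply a transformation rule to a new figure description."""
    return (c - removed) | added

A = {"inside", "small"}          # A: small figure inside another
B = {"above", "small"}           # B: the same figure moved above
C = {"inside", "shaded"}         # C: a shaded figure inside another

removed, added = rule(A, B)      # ({'inside'}, {'above'})
answer = apply_rule(C, removed, added)

candidates = {"1": {"inside", "shaded"}, "2": {"above", "shaded"}}
print([k for k, d in candidates.items() if d == answer])  # ['2']
```

The answer figure is the candidate whose description matches the rule's prediction; the 'common generalization' step in the real program handles the case where several competing rules must be reconciled.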
What is needed is a discipline which will study semantic message-connection in a way analogous to that in which metamathematics studies mathematical connection, and to that in which mathematical linguistics now studies syntactic connection. Research used as data for the construction of T: (a) Conceptual Dictionary for English. The uses of the main words and phrases of English are mapped onto a classificatory system of about 750 descriptors, or heads, these heads being streamlined from Roget's Thesaurus. For instance, a single card covers Disappoint, Disappointed, Disappointing, Disappointment. The two connectives, / ('slash') and : ('colon'), and a word-order rule are used as in T to replace R. H. Richens' three subscripts, and every two pairs of elements are bracketed together, two bracketed pairs of elements counting as a single pair for the purpose of forming 2nd-order brackets.
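The bracketing convention can be sketched as follows, on my reading of it: elements pair up, and two bracketed pairs count as a single pair for the next order of bracketing, recursively. The element words are invented; this is an interpretation of the convention, not the notation T itself.

```python
# A minimal sketch of pairwise bracketing, where two bracketed pairs
# count as one pair when forming second-order brackets (interpretation).
def bracket(items):
    """Group a flat sequence pairwise, then pair the pairs, recursively."""
    while len(items) > 1:
        items = [tuple(items[i:i + 2]) for i in range(0, len(items), 2)]
    return items[0]

print(bracket(["man", "kind", "be", "good"]))
# (('man', 'kind'), ('be', 'good')): two 1st-order pairs, one 2nd-order pair
```

Four elements thus yield two first-order brackets and one second-order bracket spanning them both.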
... The literature does not include any general discussion of the outstanding problems of this field. In this article, an attempt will be made to separate out, analyze, and find the relations between some of these problems. Analysis will be supported with enough examples from the literature to serve the introductory function of a review article, but there remains much relevant work not described here. [Proc. Institute of Radio Engineers 49, pp. 8-30]
Their role in mathematics presents an interesting counterpart to certain functional aspects of organization in nature. In comparing living organisms, and, in particular, that most complicated organism, the human central nervous system, with artificial automata, the following limitation should be kept in mind. The simplest way to estimate this degree of complexity is, instead of counting decimal places, to count the number of places that would be required for the same precision in the binary system of notation (base 2 instead of base 10). Such cumulative, repetitive procedures may, for instance, increase the size of the result, that is (and this is the important consideration), increase the significant result, the "signal," ...
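The decimal-to-binary precision estimate above can be made concrete: each decimal place carries log2(10) ≈ 3.32 bits of information, so d decimal places require about 3.32 d binary places for the same precision. A one-line check:

```python
# Binary places needed for the same precision as d decimal places:
# each decimal digit carries log2(10) ~ 3.32 bits.
import math

def binary_places(decimal_places):
    return math.ceil(decimal_places * math.log2(10))

print(binary_places(12))  # 40 binary places match 12 decimal places
```

So counting binary rather than decimal places multiplies the count by roughly 3.3 but measures the same precision.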