Representing Knowledge as Predictions (and State as Knowledge)
Ring, Mark
This paper shows how a single mechanism allows knowledge to be constructed layer by layer directly from an agent's raw sensorimotor stream. This mechanism, the General Value Function (GVF) or "forecast," captures high-level, abstract knowledge as a set of predictions about existing features and knowledge, based exclusively on the agent's low-level senses and actions. Thus, forecasts provide a representation for organizing raw sensorimotor data into useful abstractions over an unlimited number of layers, a long-sought goal of AI and cognitive science. The heart of this paper is a detailed thought experiment providing a concrete, step-by-step formal illustration of how an artificial agent can build true, useful, abstract knowledge from its raw sensorimotor experience alone. The knowledge is represented as a set of layered predictions (forecasts) about the observed consequences of the agent's actions. The illustration spans twelve separate layers: the lowest consists of raw pixels, touch and force sensors, and a small number of actions; the higher layers increase in abstraction, eventually yielding rich knowledge about the agent's world, corresponding roughly to doorways, walls, rooms, and floor plans. I then argue that this general mechanism may allow the representation of a broad spectrum of everyday human knowledge.
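To make the forecast idea concrete, here is a minimal sketch, under my own simplifying assumptions, of a general value function learned online with linear TD(0). It is not the paper's implementation; the cumulant, discount, and feature names used in the usage comment are illustrative.

```python
import numpy as np

# Minimal sketch of a General Value Function ("forecast"): a prediction of the
# discounted sum of a cumulant signal, learned online with linear TD(0).
# The cumulant/termination choices are illustrative assumptions, not the
# paper's specification.

class Forecast:
    def __init__(self, num_features, step_size=0.1):
        self.w = np.zeros(num_features)   # linear prediction weights
        self.alpha = step_size

    def predict(self, x):
        # Prediction = expected discounted sum of future cumulant values.
        return self.w @ x

    def update(self, x, cumulant, gamma_next, x_next):
        # One-step TD error: c_{t+1} + gamma_{t+1} * v(x_{t+1}) - v(x_t)
        delta = cumulant + gamma_next * (self.w @ x_next) - (self.w @ x)
        self.w += self.alpha * delta * x
        return delta

# Hypothetical usage: predict imminent "touch" from low-level pixel features.
# forecast = Forecast(num_features=64)
# forecast.update(x_t, cumulant=touch_sensor, gamma_next=0.9, x_next=x_t1)
```

Higher layers would then treat the outputs of such forecasts as input features for further forecasts, which is the layering the abstract describes.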
Recurrent Transition Hierarchies for Continual Learning: A General Overview
Ring, Mark (IDSIA / SUPSI / University of Lugano)
Continual learning is the unending process of learning new things on top of what has already been learned (Ring, 1994). Temporal Transition Hierarchies (TTHs) were developed to allow prediction of Markov-k sequences in a way that was consistent with the needs of a continual-learning agent (Ring, 1993). However, the algorithm could not learn arbitrary temporal contingencies. This paper describes Recurrent Transition Hierarchies (RTH), a learning method that combines several properties desirable for agents that must learn as they go. In particular, it learns online and incrementally, autonomously discovering new features as learning progresses. It requires no reset or episodes. It has a simple learning rule with update complexity linear in the number of parameters.
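As a loose illustration of the continual-learning setting described above (not the RTH algorithm itself), the sketch below runs on an unending symbol stream with no resets or episodes, creates a feature the first time it is needed, and performs a constant-time update per step; the toy stream and all names are my own assumptions.

```python
from collections import defaultdict
import random

# Loose illustration of the properties listed above: online, incremental,
# no episodes or resets, features discovered as learning progresses.
# This is NOT the RTH algorithm; it only estimates next-symbol probabilities
# on a toy stream, with one weight per (context, prediction) pair created
# the first time that pair is encountered.

def stream():
    # Hypothetical unending sensorimotor stream (here: a two-symbol toy source).
    state = "a"
    while True:
        state = random.choice("ab") if state == "a" else "a"
        yield state

weights = defaultdict(float)   # feature weights, created on first use
alpha = 0.1
prev = None
for t, symbol in zip(range(10_000), stream()):
    if prev is not None:
        for s in "ab":                                    # update P(s | prev)
            target = 1.0 if s == symbol else 0.0
            weights[(prev, s)] += alpha * (target - weights[(prev, s)])
    prev = symbol

# After the run, weights[("a", s)] approximates P(next == s | current == "a").
```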
RCC Cannot Compute Certain FSA, Even with Arbitrary Transfer Functions
Ring, Mark
The proof given here shows that for any finite, discrete transfer function used by the units of an RCC network, there are finite-state automata (FSA) that the network cannot model, no matter how many units are used. The proof also applies to continuous transfer functions with a finite number of fixed-points, such as sigmoid and radial-basis functions.
Learning Sequential Tasks by Incrementally Adding Higher Orders
Ring, Mark
An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks.
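To make the weight-modulation mechanism concrete, here is a minimal numeric sketch, under my own simplifying assumptions, in which a higher-order unit's activation from the previous time step adjusts a set of connection weights at the current step. It is not the paper's network; sizes, names, and the specific modulation rule are assumptions.

```python
import numpy as np

# Minimal sketch of the higher-order mechanism described above: a unit's
# activation from the PREVIOUS time step modulates connection weights at the
# CURRENT step, so temporal context is injected without recurrent feedback.

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))     # ordinary first-order weights
gain = rng.normal(scale=0.1, size=(n_out, n_in))  # per-connection gains of one
                                                  # hypothetical higher-order unit

def step(x_t, h_prev):
    # Effective weights = base weights + previous-step activation * gains.
    W_eff = W + h_prev * gain
    return W_eff @ x_t

# The same input yields different outputs depending on the higher-order
# unit's activation at the previous step.
x = np.array([1.0, 0.0, 0.0, 0.0])
print(step(x, h_prev=0.0))
print(step(x, h_prev=1.0))
```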