Design considerations for a hierarchical semantic compositional framework for medical natural language understanding
Taira, Ricky K., Garlid, Anders O., Speier, William
Medical natural language processing (NLP) systems are a key enabling technology for transforming Big Data from clinical report repositories into information used to support disease models and validate intervention methods. However, current medical NLP systems fall considerably short when faced with the task of logically interpreting clinical text. In this paper, we describe a framework inspired by mechanisms of human cognition in an attempt to jump the NLP performance curve. The design centers on a hierarchical semantic compositional model (HSCM), which provides an internal substrate for guiding the interpretation process. The paper describes insights from four key cognitive aspects: semantic memory, semantic composition, semantic activation, and hierarchical predictive coding. We discuss the design of a generative semantic model and an associated semantic parser used to transform a free-text sentence into a logical representation of its meaning.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- (26 more...)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Health Care Technology > Medical Record (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- (3 more...)
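The abstract above describes composing word-level meanings hierarchically into a logical representation of a sentence. As a toy sketch of that idea (not the authors' HSCM — the lexicon, the single "modifier attaches to head" composition rule, and the `located_in` relation are all invented here for illustration):

```python
# Toy lexicon mapping tokens to semantic frames (illustrative only,
# not a real clinical vocabulary).
LEXICON = {
    "mass":  {"type": "Finding", "head": "mass"},
    "lung":  {"type": "Anatomy", "head": "lung"},
    "left":  {"type": "Laterality", "value": "left"},
    "upper": {"type": "Region", "value": "upper"},
}

def phrase_frame(tokens):
    """Compose a noun phrase bottom-up: the last token is the head,
    preceding tokens attach as modifier frames."""
    frame = dict(LEXICON[tokens[-1]])
    frame["modifiers"] = [LEXICON[t] for t in tokens[:-1]]
    return frame

def parse(sentence):
    """Split on the preposition 'in' and link the two phrase frames
    hierarchically via a located_in relation."""
    finding, _, location = sentence.partition(" in ")
    frame = phrase_frame(finding.split())
    if location:
        frame["located_in"] = phrase_frame(location.split())
    return frame

result = parse("mass in left upper lung")
```

Here `parse("mass in left upper lung")` yields a nested frame: a `Finding` headed by "mass" whose `located_in` slot holds an `Anatomy` frame headed by "lung" with laterality and region modifiers — a miniature example of meaning built by hierarchical composition rather than flat keyword extraction.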
Feudal Reinforcement Learning
Dayan, Peter, Hinton, Geoffrey E.
One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map. 1 INTRODUCTION Straightforward reinforcement learning has been quite successful at some relatively complex tasks like playing backgammon (Tesauro, 1992).
- North America > Canada > Ontario > Toronto (0.15)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > California > San Mateo County > San Mateo (0.05)
- (4 more...)
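The abstract describes the core feudal mechanism: a manager learns which commands to issue, while a sub-manager is rewarded only for obeying the current command, not for the external task reward. A minimal sketch of that two-level scheme on a 1-D corridor (the corridor task, reward values, and coarse-state split are invented here for illustration — the paper uses a 2-D maze):

```python
import random

random.seed(0)

N_STATES, GOAL = 8, 7
COMMANDS = [-1, +1]   # manager's vocabulary: "go left" / "go right"
ACTIONS = [-1, +1]    # sub-manager's primitive moves

# Manager Q-table over coarse states (left/right half) x commands.
q_mgr = [[0.0, 0.0] for _ in range(2)]
# Sub-manager Q-table over (command, fine state) x actions.
q_sub = [[[0.0, 0.0] for _ in range(N_STATES)] for _ in range(2)]

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def greedy(qrow):
    return 0 if qrow[0] >= qrow[1] else 1

def choose(qrow):
    return random.randrange(2) if random.random() < EPS else greedy(qrow)

for episode in range(500):
    s = 0
    for _ in range(50):
        coarse = s // 4
        c = choose(q_mgr[coarse])        # manager issues a command
        a = choose(q_sub[c][s])          # sub-manager acts under that command
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        # Sub-manager is paid only for obeying the command...
        r_sub = 1.0 if ACTIONS[a] == COMMANDS[c] else -1.0
        # ...while the manager alone sees the task reward.
        r_mgr = 1.0 if s2 == GOAL else -0.01
        q_sub[c][s][a] += ALPHA * (r_sub + GAMMA * max(q_sub[c][s2]) - q_sub[c][s][a])
        q_mgr[coarse][c] += ALPHA * (r_mgr + GAMMA * max(q_mgr[s2 // 4]) - q_mgr[coarse][c])
        s = s2
        if s == GOAL:
            break

# Greedy rollout after training: the manager should command "right"
# and the obedient sub-manager should walk the corridor to the goal.
s, steps = 0, 0
while s != GOAL and steps < 20:
    c = greedy(q_mgr[s // 4])
    a = greedy(q_sub[c][s])
    s = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    steps += 1
```

Note the key design point from the abstract: `r_sub` never references `GOAL`, so the sub-manager has no notion of the task — it learns only to satisfy whatever command is in force, while the manager learns which commands lead to reward.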