H∞ Optimality Criteria for LMS and Backpropagation
Hassibi, Babak, Sayed, Ali H., Kailath, Thomas
This fact provides a theoretical justification of the widely observed excellent robustness properties of the LMS and backpropagation algorithms. We further discuss some implications of these results. 1 Introduction The LMS algorithm was originally conceived as an approximate recursive procedure that solves the following problem (Widrow and Hoff, 1960): given a sequence of n × 1 input column vectors {h_i}, and a corresponding sequence of desired scalar responses {d_i}
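A minimal sketch of the LMS recursion the abstract describes (names and the step size μ are illustrative choices, not from the paper; μ is assumed small enough for stability):

```python
import numpy as np

def lms(H, d, mu=0.01):
    """Least-mean-squares: recursively adjust weights w so that h_i . w
    tracks the desired response d_i.

    H  : (T, n) array whose rows are the input vectors h_i
    d  : (T,)   array of desired scalar responses d_i
    mu : step size (illustrative value; must be small for stability)
    """
    w = np.zeros(H.shape[1])
    for h_i, d_i in zip(H, d):
        e = d_i - h_i @ w        # instantaneous prediction error
        w = w + mu * e * h_i     # stochastic-gradient step on the squared error
    return w

# Recover a known weight vector from noiseless synthetic data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
H = rng.standard_normal((2000, 3))
d = H @ w_true
w_hat = lms(H, d, mu=0.05)
```

With noiseless data and a small step size the estimate converges to the true weights; the paper's H∞ result concerns how gracefully this recursion degrades under disturbances.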
Optimal Brain Surgeon: Extensions and performance comparisons
Hassibi, Babak, Stork, David G., Wolff, Gregory
We extend Optimal Brain Surgeon (OBS) - a second-order method for pruning networks - to allow for general error measures, and explore a reduced computational and storage implementation via a dominant eigenspace decomposition. Simulations on nonlinear, noisy pattern classification problems reveal that OBS does lead to improved generalization, and performs favorably in comparison with Optimal Brain Damage (OBD). We find that the required retraining steps in OBD may lead to inferior generalization, a result that can be interpreted as due to injecting noise back into the system. A common technique is to stop training of a large network at the minimum validation error. We found that the test error could be reduced even further by means of OBS (but not OBD) pruning.
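One OBS pruning step can be sketched as follows. This is a schematic of the published second-order rule (saliency w_q²/(2[H⁻¹]_qq), followed by the compensating adjustment of the remaining weights), with illustrative names, assuming the network is already at an error minimum and the inverse Hessian is available:

```python
import numpy as np

def obs_prune_step(w, H_inv):
    """One Optimal Brain Surgeon step (sketch).

    w     : current weight vector (assumed at a local error minimum)
    H_inv : inverse Hessian of the error with respect to w
    Returns the adjusted weight vector and the index of the pruned weight.
    """
    diag = np.diag(H_inv)
    saliency = w**2 / (2.0 * diag)   # predicted error increase for deleting weight q
    q = int(np.argmin(saliency))     # cheapest weight to delete
    # Second-order compensation: adjust all weights, not just the deleted one.
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w_new[q] = 0.0                   # enforce the deletion exactly
    return w_new, q
```

The compensating update is what distinguishes OBS from OBD, which uses only the diagonal of the Hessian and relies on retraining instead.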
Monte Carlo Matrix Inversion and Reinforcement Learning
We describe the relationship between certain reinforcement learning (RL) methods based on dynamic programming (DP) and a class of unorthodox Monte Carlo methods for solving systems of linear equations proposed in the 1950s. These methods recast the solution of the linear system as the expected value of a statistic suitably defined over sample paths of a Markov chain. The significance of our observations lies in arguments (Curtiss, 1954) that these Monte Carlo methods scale better with respect to state-space size than do standard, iterative techniques for solving systems of linear equations. This analysis also establishes convergence rate estimates. Because methods used in RL systems for approximating the evaluation function of a fixed control policy also approximate solutions to systems of linear equations, the connection to these Monte Carlo methods establishes that algorithms very similar to TD algorithms (Sutton, 1988) are asymptotically more efficient in a precise sense than other methods for evaluating policies. Further, all DP-based RL methods have some of the properties of these Monte Carlo algorithms, which suggests that although RL is often perceived to be slow, for sufficiently large problems it may in fact be more efficient than other known classes of methods capable of producing the same results.
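The recasting the abstract refers to can be illustrated on policy evaluation: the linear system v = r + γPv (i.e. v = (I − γP)⁻¹r) is solved by averaging discounted returns along sample paths of the chain P. A small sketch with illustrative names and arbitrary truncation/sample sizes:

```python
import numpy as np

def mc_solve(P, r, gamma, n_paths=1000, horizon=100, seed=0):
    """Estimate v = (I - gamma * P)^{-1} r by Monte Carlo: for each start
    state, average the truncated discounted return over sample paths of
    the Markov chain with transition matrix P."""
    rng = np.random.default_rng(seed)
    n = len(r)
    v = np.zeros(n)
    for s0 in range(n):
        total = 0.0
        for _ in range(n_paths):
            s, g, ret = s0, 1.0, 0.0
            for _ in range(horizon):     # truncate: gamma**horizon is negligible
                ret += g * r[s]
                g *= gamma
                s = rng.choice(n, p=P[s])
            total += ret
        v[s0] = total / n_paths
    return v
```

The per-state cost depends on path length and sample count rather than on the full state-space size, which is the scaling argument (Curtiss, 1954) the abstract invokes.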
Analyzing Cross-Connected Networks
Shultz, Thomas R., Elman, Jeffrey L.
The nonlinear complexities of neural networks make network solutions difficult to understand. Sanger's contribution analysis is here extended to the analysis of networks automatically generated by the cascade-correlation learning algorithm. Because such networks have cross connections that supersede hidden layers, standard analyses of hidden unit activation patterns are insufficient. A contribution is defined as the product of an output weight and the associated activation on the sending unit, whether that sending unit is an input or a hidden unit, multiplied by the sign of the output target for the current input pattern.
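The definition in the last sentence is a direct product, which can be written down in a few lines (names are illustrative; this computes contributions for one output unit on one input pattern):

```python
import numpy as np

def contributions(out_weights, sending_acts, target):
    """Contribution of each sending unit (input or hidden) to one output
    unit on one pattern: output weight * sending activation * sign(target)."""
    return np.sign(target) * out_weights * sending_acts
```

A positive contribution then means the connection pushes the output toward its target on that pattern, regardless of the target's sign.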
Estimating analogical similarity by dot-products of Holographic Reduced Representations
Gentner and Markman (1992) suggested that the ability to deal with analogy will be a "Watershed or Waterloo" for connectionist models. They identified "structural alignment" as the central aspect of analogy making. They noted the apparent ease with which people can perform structural alignment in a wide variety of tasks and were pessimistic about the prospects for the development of a distributed connectionist model that could be useful in performing structural alignment. In this paper I describe how Holographic Reduced Representations (HRRs) (Plate, 1991; Plate, 1994), a fixed-width distributed representation for nested structures, can be used to obtain fast estimates of analogical similarity.
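A small illustrative sketch of the idea: HRRs bind roles to fillers with circular convolution, sum the bindings into a fixed-width vector, and estimate structural similarity with a dot product. The dimensionality, vocabulary, and role names below are arbitrary choices, and the relation itself is left implicit for brevity:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: the HRR binding operation."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def normalize(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
n = 512
# Random base vectors with elements ~ N(0, 1/n), the usual HRR convention.
role_agent, role_object = rng.standard_normal((2, n)) / np.sqrt(n)
dog, cat, mouse = rng.standard_normal((3, n)) / np.sqrt(n)

# Encode simple role-filler structures as single fixed-width vectors.
s1 = normalize(cconv(role_agent, dog) + cconv(role_object, cat))    # dog acts on cat
s2 = normalize(cconv(role_agent, dog) + cconv(role_object, mouse))  # dog acts on mouse
s3 = normalize(cconv(role_agent, cat) + cconv(role_object, mouse))  # cat acts on mouse

sim_12 = s1 @ s2  # shared agent filler in the same role: relatively high
sim_13 = s1 @ s3  # no filler shared in the same role: near zero
```

The dot product is cheap to compute, which is what makes these estimates of analogical similarity fast relative to explicit structural alignment.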
The 1994 Florida AI Research Symposium
The 1994 Florida AI Research Symposium was held 5-7 May at Pensacola Beach, Florida. This symposium brought together researchers and practitioners in AI, cognitive science, and allied disciplines to discuss timely topics, cutting-edge research, and system development efforts in areas spanning the entire AI field. Symposium highlights included Pat Hayes's comparison of the history of AI to the history of powered flight and Clark Glymour's discussion of the prehistory of AI.