Dealing with Infinite Loops, Underestimation, and Overestimation of Depth-First Proof-Number Search
Kishimoto, Akihiro (Tokyo Institute of Technology and JST PRESTO)
Depth-first proof-number search (df-pn) is a powerful AND/OR tree search algorithm for solving positions in games. However, df-pn suffers from a notorious infinite-loop problem when applied to domains with repetitions. Df-pn(r) cures it by ignoring proof and disproof numbers that may lead to infinite loops. This paper points out that df-pn(r) has a serious issue of underestimating proof and disproof numbers, and that it also suffers from the overestimation problem that occurs in directed acyclic graphs. It then presents two practical solutions to these problems. While bypassing infinite loops, the threshold controlling algorithm (TCA) solves the underestimation problem by increasing the thresholds of df-pn. The source node detection algorithm (SNDA) detects the cause of overestimation and modifies the computation of proof and disproof numbers. Both TCA and SNDA are implemented on top of df-pn to solve tsume-shogi (checkmating problems in Japanese chess). Results show that df-pn with TCA and SNDA is far superior to df-pn(r). Our tsume-shogi solver is able to solve several difficult positions previously unsolved by any other solver.
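To make the mechanism concrete, here is a compact, self-contained sketch of the df-pn recursion with a greatly simplified, TCA-style threshold increase when children repeat positions on the current path. The Node class, the repetition handling, and the all-moves-repeat rule are assumptions of this sketch, not the paper's implementation, which controls thresholds and detects loops with considerably more care.

    # Sketch of df-pn with a TCA-flavored threshold increase (illustrative only;
    # Node, dfpn, and the loop handling are hypothetical simplifications).
    from dataclasses import dataclass, field
    from typing import List, Optional

    INF = float("inf")

    @dataclass
    class Node:
        key: str                       # position id, used for repetition checks
        is_or: bool = True             # OR node: the side trying to prove a win
        proved: Optional[bool] = None  # terminal result; None for internal nodes
        kids: List["Node"] = field(default_factory=list)  # assumed non-empty if internal
        pn: float = 1.0                # proof-number estimate
        dn: float = 1.0                # disproof-number estimate

    def dfpn(node: Node, pt: float, dt: float, path: frozenset) -> None:
        """Search node until its proof number >= pt or disproof number >= dt."""
        if node.proved is not None:                      # terminal position
            node.pn, node.dn = (0, INF) if node.proved else (INF, 0)
            return
        path = path | {node.key}
        while True:
            if node.is_or:                               # OR: prove any one child,
                node.pn = min(k.pn for k in node.kids)   # disprove all of them
                node.dn = sum(k.dn for k in node.kids)
            else:                                        # AND: prove all children,
                node.pn = sum(k.pn for k in node.kids)   # disprove any one
                node.dn = min(k.dn for k in node.kids)
            if node.pn >= pt or node.dn >= dt:
                return
            rank = (lambda k: k.pn) if node.is_or else (lambda k: k.dn)
            live = [k for k in node.kids if k.key not in path]
            if not live:
                # Every move repeats a position on the path: this sketch treats
                # that as a disproof, loosely matching tsume-shogi's repetition rule.
                node.pn, node.dn = INF, 0
                return
            if len(live) < len(node.kids):
                # TCA idea, simplified: some children repeat positions on the
                # path. Ignoring their numbers (as df-pn(r) does) can
                # underestimate pn/dn, so raise the thresholds and keep
                # searching below this node instead of returning early.
                pt, dt = pt + 1, dt + 1
            best = min(live, key=rank)
            second = min((rank(k) for k in live if k is not best), default=INF)
            if node.is_or:
                dfpn(best, min(pt, second + 1), dt - node.dn + best.dn, path)
            else:
                dfpn(best, pt - node.pn + best.pn, min(dt, second + 1), path)

A root call such as dfpn(root, INF, INF, frozenset()) searches until root.pn == 0 (the position is proved) or root.dn == 0 (disproved).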
Scalable, Parallel Best-First Search for Optimal Sequential Planning
Kishimoto, Akihiro (Tokyo Institute of Technology and JST PRESTO) | Fukunaga, Alex (Tokyo Institute of Technology) | Botea, Adi (NICTA and The Australian National University)
Large-scale parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. In particular, we focus on an approach that distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters of up to 128 processors, a significant increase over previous work on parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require hundreds of gigabytes of RAM. We also show that the approach scales well on a single, shared-memory multicore machine.
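The central mechanism, routing every generated state to the processor chosen by a hash of the state, is easy to illustrate. The sketch below simulates it in a single process: each worker owns a priority queue, send() routes states by hash, and round-robin expansion stands in for truly concurrent workers. The grid domain, the owner() function, and the termination test are assumptions of this sketch, not the planner's code; a real distributed implementation also needs a termination-detection protocol before declaring the first popped goal optimal.

    # Single-process sketch of hash-based work distribution for A*
    # (illustrative only; owner, send, and the toy domain are hypothetical).
    import heapq
    from itertools import count

    N_WORKERS = 4
    GOAL = (4, 4)

    def h(state):                        # Manhattan distance: admissible on a grid
        return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

    def owner(state):                    # the hash function decides which worker
        return hash(state) % N_WORKERS   # stores and expands this state

    def neighbors(state):
        x, y = state
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx <= 4 and 0 <= ny <= 4:
                yield (nx, ny)

    open_lists = [[] for _ in range(N_WORKERS)]  # one open list per worker
    g_values = {}                                # duplicate-detection table
    tie = count()                                # tie-breaker for heap entries

    def send(state, g):
        """Route a state to its owning worker; drop it if a cheaper copy exists."""
        if g >= g_values.get(state, float("inf")):
            return
        g_values[state] = g
        heapq.heappush(open_lists[owner(state)], (g + h(state), next(tie), g, state))

    send((0, 0), 0)
    done = False
    while not done and any(open_lists):
        for w in range(N_WORKERS):               # round-robin over the workers
            if not open_lists[w]:
                continue
            f, _, g, state = heapq.heappop(open_lists[w])
            if state == GOAL:
                print(f"worker {w} reached the goal with cost {g}")
                done = True
                break
            for nxt in neighbors(state):
                send(nxt, g + 1)                 # unit edge costs in this toy domain

Because owner() is deterministic, duplicate detection for any given state always happens at the same worker, which is what allows per-worker closed lists to stay consistent without global locking in the distributed setting.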