Never-Ending Learning
Mitchell, Tom M. (Carnegie Mellon University) | Cohen, William (Carnegie Mellon University) | Hruschka, Estevam (University of Sao Carlos) | Talukdar, Partha (Indian Institute of Science) | Betteridge, Justin (Carnegie Mellon University) | Carlson, Andrew (Google) | Mishra, Bhavana Dalvi (Carnegie Mellon University) | Gardner, Matthew (Carnegie Mellon University) | Kisiel, Bryan (Carnegie Mellon University) | Krishnamurthy, Jayant (Carnegie Mellon University) | Lao, Ni (Google) | Mazaitis, Kathryn (Carnegie Mellon University) | Mohamed, Thahir (Carnegie Mellon University) | Nakashole, Ndapa (Carnegie Mellon University) | Platanios, Emmanouil Antonios (Ohio State University) | Ritter, Alan (Carnegie Mellon University) | Samadi, Mehdi (Duolingo) | Settles, Burr (Carnegie Mellon University) | Wang, Richard (Carnegie Mellon University) | Wijaya, Derry (Carnegie Mellon University) | Gupta, Abhinav (Carnegie Mellon University) | Chen, Xinlei (Alpine Data Lab) | Saparov, Abulhair (Pittsburgh Supercomputer Center) | Greaves, Malcolm | Welling, Joel
Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
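To make the notions of "confidence-weighted beliefs" and "reasoning over beliefs to infer new beliefs" concrete, the sketch below shows a minimal, hypothetical belief store. The relation names, the two-antecedent rule format, and the product combination of confidences are illustrative assumptions only; they are not NELL's actual knowledge representation or inference procedure.

```python
from typing import Dict, List, Tuple

# A belief is a relational triple plus a confidence in [0, 1],
# e.g. ("tea", "servedWith", "biscuits") with confidence 0.9.
Belief = Tuple[str, str, str]


class BeliefStore:
    """Toy store of confidence-weighted beliefs (illustration only)."""

    def __init__(self) -> None:
        self.beliefs: Dict[Belief, float] = {}

    def add(self, subj: str, rel: str, obj: str, conf: float) -> None:
        # Keep the highest confidence seen for a repeated belief.
        key = (subj, rel, obj)
        self.beliefs[key] = max(conf, self.beliefs.get(key, 0.0))

    def infer(self, rule: Tuple[str, str, str]) -> List[Tuple[Belief, float]]:
        """Apply a simple two-antecedent rule:
        (a, r1, b) & (b, r2, c) => (a, r3, c),
        combining confidences by product (an assumption made here)."""
        r1, r2, r3 = rule
        new_beliefs = []
        for (a, rel1, b), c1 in self.beliefs.items():
            if rel1 != r1:
                continue
            for (b2, rel2, c), c2 in self.beliefs.items():
                if rel2 == r2 and b2 == b:
                    new_beliefs.append(((a, r3, c), c1 * c2))
        return new_beliefs


kb = BeliefStore()
kb.add("tea", "servedWith", "biscuits", 0.9)
kb.add("biscuits", "madeOf", "flour", 0.8)

# Hypothetical rule: servedWith(x, y) & madeOf(y, z) => servedNear(x, z)
for belief, conf in kb.infer(("servedWith", "madeOf", "servedNear")):
    print(belief, round(conf, 2))  # ('tea', 'servedNear', 'flour') 0.72
```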