Well File:
- Well Planning
- Shallow Hazard Analysis
- Well Plat
- Wellbore Schematic
- Directional Survey
- Fluid Sample
- Log
- Density
- Gamma Ray
- Mud
- Resistivity
- Report
- Daily Report
- End of Well Report
- Well Completion Report
- Rock Sample
Generative Forests
We focus on generative AI for a type of data that still represents one of the most prevalent forms of data: tabular data. Our paper introduces two key contributions: a new, powerful class of forest-based models suited to such tasks, and a simple training algorithm with strong convergence guarantees in a boosting framework that parallels the original weak / strong supervised learning setting. This algorithm can be implemented with a few tweaks to the most popular induction scheme for decision tree induction (i.e.
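The "most popular induction scheme" referenced here is standard greedy, top-down decision tree induction. As a point of reference for what gets tweaked, the following is a minimal sketch of that baseline split-search step (illustrative only; it is the classical supervised criterion, not the paper's generative variant, and all names are ours):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Greedy search over (feature, threshold) pairs, minimizing the
    weighted Gini impurity of the two children -- the core step of
    top-down tree induction (CART-style sketch, not the paper's algorithm)."""
    best, best_score = None, float("inf")
    n = len(rows)
    for f in range(len(rows[0])):
        for t in sorted(set(r[f] for r in rows)):
            left = [labels[i] for i, r in enumerate(rows) if r[f] <= t]
            right = [labels[i] for i, r in enumerate(rows) if r[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_score, best = score, (f, t)
    return best, best_score
```

In the generative setting described by the abstract, the tweak would replace this supervised impurity criterion with one driven by the boosting objective, while the greedy top-down loop stays the same.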
FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding
Dong Jing
Contrastive Language-Image Pre-training (CLIP) achieves impressive performance on tasks like image classification and image-text retrieval by learning on large-scale image-text datasets. However, CLIP struggles with dense prediction tasks due to its poor grasp of fine-grained details. Although existing works address this issue, they achieve limited improvements and usually sacrifice the important visual-semantic consistency. To overcome these limitations, we propose FineCLIP, which keeps the global contrastive learning to preserve the visual-semantic consistency and further enhances the fine-grained understanding through two innovations: 1) A real-time self-distillation scheme that facilitates the transfer of representation capability from global to local features.
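One common way to realize a global-to-local self-distillation of this kind is to treat the global embedding as a (stop-gradient) teacher and pull each region embedding toward it in cosine-similarity space. The sketch below shows that loss shape only, under our own assumptions; it is not FineCLIP's actual objective, and the function names are illustrative:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project features onto the unit sphere, as in CLIP-style embeddings."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def self_distill_loss(global_feat, region_feats):
    """Mean (1 - cosine similarity) between the global feature (treated as
    a fixed teacher, i.e. no gradient flows through it) and each region
    feature. Zero when every region feature is aligned with the global one."""
    teacher = l2_normalize(global_feat)      # (d,)  -- stop-gradient teacher
    students = l2_normalize(region_feats)    # (n_regions, d)
    cos = students @ teacher                 # cosine similarity per region
    return float(np.mean(1.0 - cos))
```

Perfectly aligned region features give a loss of 0, orthogonal ones a loss of 1, so minimizing it transfers the global representation to the local features.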
Emmanuel Abbe
Can Transformers predict new syllogisms by composing established ones? More generally, what type of targets can be learned by such models from scratch? Recent works show that Transformers can be Turing-complete in terms of expressivity, but this does not address the learnability objective. This paper puts forward the notion of globality degree of a target distribution to capture when weak learning is efficiently achievable by regular Transformers.
Entropic Desired Dynamics for Intrinsic Control: Supplemental Material
While the focus of this work has been on unsupervised evaluation, here we provide a proof-of-concept that EDDICT can aid in the achievement of task rewards via structured exploration. To do so, we train EDDICT alongside a standard policy maximizing task rewards, which we refer to as the task policy. While the EDDICT training procedure remains unchanged, experience for the latter is generated by a behavior policy which randomly switches between EDDICT's policy and the task policy at regular intervals (every 20 steps). The motivation is similar to recent work [8] on temporally extended ɛ-greedy: temporally coherent exploration can more rapidly cover the state space. Two separate networks were used to instantiate EDDICT and the task policy, so any potential benefits must arise from improved exploration rather than e.g.
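The behavior policy described above, switching uniformly at random between the two policies every 20 steps, can be sketched as follows (a minimal illustration of the switching scheme as stated; the class and method names are ours, not from the paper's code):

```python
import random

class SwitchingBehaviorPolicy:
    """Generates experience by committing to one of two policies for
    `switch_every` consecutive steps, then re-drawing uniformly at random,
    giving temporally coherent exploration (cf. temporally extended eps-greedy)."""

    def __init__(self, eddict_policy, task_policy, switch_every=20, seed=None):
        self.policies = [eddict_policy, task_policy]
        self.switch_every = switch_every
        self.rng = random.Random(seed)
        self.active = self.rng.choice(self.policies)  # policy for first segment
        self.step = 0

    def act(self, state):
        # Re-draw the active policy at every segment boundary.
        if self.step > 0 and self.step % self.switch_every == 0:
            self.active = self.rng.choice(self.policies)
        self.step += 1
        return self.active(state)
```

Within each 20-step segment all actions come from a single policy, so the behavior policy covers the state space in temporally coherent bursts rather than flickering between the two controllers step by step.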