Well File:
- Well Planning
- Shallow Hazard Analysis
- Well Plat
- Wellbore Schematic
- Directional Survey
- Fluid Sample
- Log
  - Density
  - Gamma Ray
  - Mud
  - Resistivity
- Report
  - Daily Report
  - End of Well Report
  - Well Completion Report
- Rock Sample
A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference
Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models. In this paper, we show how complex inference scenarios for these models that commonly arise in machine learning--from computing the expectations of decision tree ensembles to information-theoretic divergences of sum-product networks--can be represented in terms of tractable modular operations over circuits. Specifically, we characterize the tractability of simple transformations--sums, products, quotients, powers, logarithms, and exponentials--in terms of sufficient structural constraints of the circuits they operate on, and present novel hardness results for the cases in which these properties are not satisfied. Building on these operations, we derive a unified framework for reasoning about tractable models that generalizes several results in the literature and opens up novel tractable inference scenarios.
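To make the abstract's notion of "tractable modular operations over circuits" concrete, here is a minimal Python sketch (not the paper's implementation; all class and variable names are hypothetical) of a smooth, decomposable sum-product circuit over two binary variables, in which both a full-evidence query and marginalization reduce to a single feed-forward evaluation:

```python
# Minimal sketch of a sum-product circuit over binary variables.
# Node types: indicator leaves, weighted sums, and products.

class Leaf:
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, assignment):
        # A marginalized variable (mapped to None) contributes 1,
        # which sums over both of its states thanks to smoothness.
        if assignment.get(self.var) is None:
            return 1.0
        return 1.0 if assignment[self.var] == self.value else 0.0

class Sum:
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def eval(self, assignment):
        return sum(w * c.eval(assignment)
                   for w, c in zip(self.weights, self.children))

class Product:
    def __init__(self, children):
        self.children = children
    def eval(self, assignment):
        out = 1.0
        for c in self.children:
            out *= c.eval(assignment)
        return out

# p(X1, X2) as a smooth, decomposable circuit:
# a mixture of two fully factorized distributions.
p = Sum([0.4, 0.6], [
    Product([Sum([0.8, 0.2], [Leaf("X1", 1), Leaf("X1", 0)]),
             Sum([0.3, 0.7], [Leaf("X2", 1), Leaf("X2", 0)])]),
    Product([Sum([0.1, 0.9], [Leaf("X1", 1), Leaf("X1", 0)]),
             Sum([0.5, 0.5], [Leaf("X2", 1), Leaf("X2", 0)])]),
])

joint = p.eval({"X1": 1, "X2": 1})        # full evidence: 0.126
marginal = p.eval({"X1": 1, "X2": None})  # sum out X2 in one pass: 0.38
```

The structural properties the paper characterizes (smoothness, decomposability, compatibility) are exactly what makes such single-pass queries, and compositions of them, tractable.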
Is Plug-in Solver Sample Efficient for Feature-based Reinforcement Learning? Thus, we will use Φ and φ to represent Λ and λ in Appendix C and Appendix D. We use P(s, a) to denote the row vector of P that corresponds to (s, a). A detailed description is provided in [1]. We use 1 to denote a column vector with all components equal to 1. We use [H] to denote {0, 1, …, H − 1}. Finite-Horizon Markov Decision Process. A finite-horizon Markov decision process (FHMDP) is described by the tuple M = (S, A, P, r, H), which differs from a DMDP only in that the discount factor γ is replaced by the horizon H. It is a generalized version of the DMDP which includes two players competing with each other.
It is believed that a model-based approach for reinforcement learning (RL) is the key to reducing sample complexity. However, the understanding of the sample optimality of model-based RL is still largely missing, even for the linear case. This work considers the sample complexity of finding an ɛ-optimal policy in a Markov decision process (MDP) that admits a linear additive feature representation, given only access to a generative model. We solve this problem via a plug-in solver approach, which builds an empirical model and plans in this empirical model via an arbitrary plug-in solver.
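The plug-in approach described above can be sketched in miniature for the tabular case: query the generative model to build an empirical transition kernel, then plan in that empirical model. This is an illustrative toy (the instance sizes, sample count, and function names are hypothetical, and the paper's setting is the linear feature-based one, not tabular):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-horizon MDP: S states, A actions, horizon H.
S, A, H = 3, 2, 5
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # true kernel P(s'|s,a)
r = rng.random((S, A))                           # known rewards in [0, 1]

# Step 1: query the generative model N times per (s, a) to build P_hat.
N = 2000
P_hat = np.zeros_like(P_true)
for s in range(S):
    for a in range(A):
        samples = rng.choice(S, size=N, p=P_true[s, a])
        P_hat[s, a] = np.bincount(samples, minlength=S) / N

# Step 2: plan in the empirical model with finite-horizon value iteration
# (standing in for an arbitrary plug-in solver).
def value_iteration(P, r, H):
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r + P @ V          # Q[s, a] = r(s, a) + sum_s' P(s'|s, a) V(s')
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return V, policy

V_hat, pi_hat = value_iteration(P_hat, r, H)
```

The question the paper studies is how many such generative-model queries suffice for the policy planned in the empirical model to be ɛ-optimal in the true one.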
Exploiting Opponents under Utility Constraints in Sequential Games
Recently, game-playing agents based on AI techniques have demonstrated superhuman performance in several sequential games, such as chess, Go, and poker. Surprisingly, the multi-agent learning techniques that enabled these achievements do not take into account the actual behavior of the human player, potentially leading to a significant performance gap. In this paper, we address the problem of designing artificial agents that learn how to effectively exploit unknown human opponents while playing repeatedly against them in an online fashion. We study the case in which the agent's strategy during each repetition of the game is subject to constraints ensuring that the human's expected utility is within some lower and upper thresholds. Our framework encompasses several real-world problems, such as human engagement in repeated game playing and human education by means of serious games. As a first result, we formalize a set of linear inequalities encoding the conditions that the agent's strategy must satisfy at each iteration so as not to violate the given bounds on the human's expected utility. Then, we use this formulation in an upper confidence bound algorithm, and we prove that the resulting procedure attains sublinear regret and guarantees that the constraints are satisfied with high probability at each iteration. Finally, we empirically evaluate the convergence of our algorithm on standard testbeds of sequential games.
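The core ingredient, linear inequalities bounding the human's expected utility, can be sketched for a one-shot matrix game: maximize the agent's expected payoff over its mixed strategy subject to the utility-bound constraints. This is a hedged illustration, not the paper's algorithm (the matrices, bounds, and the fixed opponent-strategy estimate `y_hat` are all hypothetical, and the full method wraps such constraints in a UCB procedure):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-action matrix game: R is the agent's payoff matrix,
# U is the human's; y_hat is the current estimate of the human's strategy.
R = np.array([[3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0]])
U = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])
y_hat = np.array([0.5, 0.3, 0.2])
lower, upper = 1.15, 1.5   # bounds on the human's expected utility

# Expected payoff of each agent action against y_hat.
r = R @ y_hat              # agent's utility per action
u = U @ y_hat              # human's utility per action

# LP: maximize r @ x  s.t.  lower <= u @ x <= upper,  x in the simplex.
res = linprog(
    c=-r,                               # linprog minimizes, so negate
    A_ub=np.vstack([u, -u]),            # u @ x <= upper, -u @ x <= -lower
    b_ub=np.array([upper, -lower]),
    A_eq=np.ones((1, 3)), b_eq=[1.0],   # x sums to 1
    bounds=[(0, 1)] * 3,
)
x_star = res.x
```

With these numbers the lower bound binds: the agent's unconstrained best response would leave the human below the threshold, so the optimal constrained strategy mixes two actions to keep the human's expected utility at exactly the lower bound.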
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding
Recent advances in Video Large Language Models (Video-LLMs) have demonstrated their great potential in general-purpose video understanding. To verify the significance of these models, a number of benchmarks have been proposed to diagnose their capabilities in different scenarios. However, existing benchmarks merely evaluate models through video-level question-answering, lacking fine-grained event-level assessment and task diversity. To fill this gap, we introduce E.T. Bench (Event-Level & Time-Sensitive Video Understanding Benchmark), a large-scale and high-quality benchmark for open-ended event-level video understanding. Categorized within a 3-level task taxonomy, E.T. Bench encompasses 7.3K samples under 12 tasks with 7K videos (251.4h).
China launches landmark mission to retrieve pristine asteroid samples
China has successfully launched a spacecraft as part of its first-ever mission to retrieve pristine asteroid samples, in what researchers have described as a "significant step" in Beijing's ambitions for interplanetary exploration. China's Long March 3B rocket lifted off at about 1:31am local time (18:30 GMT) on Thursday from the Xichang Satellite Launch Centre in southwest China's Sichuan province. It was carrying the Tianwen-2 spacecraft, a robotic probe that could make China the third nation to fetch pristine asteroid rocks. Announcing the launch, Chinese state-run news outlets said the "spacecraft unfolded its solar panels smoothly", and that the China National Space Administration (CNSA) had "declared the launch a success". Over the next year, Tianwen-2 will approach a small near-Earth asteroid some 10 million miles (16 million km) away, named "469219 Kamoʻoalewa", also known as 2016 HO3.