Well File:
- Well Planning
- Shallow Hazard Analysis
- Well Plat
- Wellbore Schematic
- Directional Survey
- Fluid Sample
- Log
- Density
- Gamma Ray
- Mud
- Resistivity
- Report
- Daily Report
- End of Well Report
- Well Completion Report
- Rock Sample
Neutralizing Self-Selection Bias in Sampling for Sortition
Paul Gölz and Anupam Gupta
Computer Science Department, Carnegie Mellon University
Sortition is a political system in which decisions are made by panels of randomly selected citizens. The process for selecting a sortition panel is traditionally thought of as uniform sampling without replacement, which has strong fairness properties. In practice, however, sampling without replacement is not possible since only a fraction of agents is willing to participate in a panel when invited, and different demographic groups participate at different rates. In order to still produce panels whose composition resembles that of the population, we develop a sampling algorithm that restores close-to-equal representation probabilities for all agents while satisfying meaningful demographic quotas. As part of its input, our algorithm requires probabilities indicating how likely each volunteer in the pool was to participate. Since these participation probabilities are not directly observable, we show how to learn them, and demonstrate our approach using data on a real sortition panel combined with information on the general population in the form of publicly available survey data.
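To make the core idea concrete, here is a minimal sketch, not the algorithm developed in the paper (which additionally enforces demographic quotas and equalizes end-to-end selection probabilities more carefully): pool members are drawn with weights inversely proportional to their estimated participation probabilities, so that agents from groups that rarely volunteer are compensated at the selection stage. All names below are illustrative placeholders.

```python
import random

def sample_panel(pool, participation_prob, k, seed=None):
    """Draw a k-person panel from the volunteer pool, weighting each
    volunteer by the inverse of their estimated participation probability,
    so groups that rarely volunteer are not under-represented on the panel.
    Illustrative sketch only, not the paper's quota-constrained algorithm.
    """
    rng = random.Random(seed)
    panel, remaining = [], list(pool)
    while len(panel) < k and remaining:
        weights = [1.0 / participation_prob[a] for a in remaining]
        chosen = rng.choices(remaining, weights=weights, k=1)[0]
        panel.append(chosen)
        remaining.remove(chosen)  # sample without replacement
    return panel

# Toy usage: agents 0-4 volunteer readily when invited, agents 5-9 rarely do.
pool = list(range(10))
q = {agent: (0.8 if agent < 5 else 0.2) for agent in pool}
print(sample_panel(pool, q, k=4, seed=0))
```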
Supplementary Material for the Paper entitled "ABC: Auxiliary Balanced Classifier for Class-Imbalanced Semi-Supervised Learning"
A Overall procedure of consistency regularization for ABC
Figure 1 illustrates the overall procedure of consistency regularization for the ABC. The detailed procedure is described in Section 3.4 of the main paper. The pseudocode of the proposed algorithm is presented in Algorithm 1. The for loop (lines 2–14) can be run in parallel. The two types of class imbalance for the considered datasets are illustrated in Figure 2. In Figure 2 (b), we can see that each minority class has a very small amount of data. Existing SSL algorithms can be significantly biased toward majority classes under such step-imbalanced settings.
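As a rough illustration of the consistency-regularization step described above, here is a sketch with placeholder names (abc_head, feat_weak, feat_strong), not the authors' implementation: the auxiliary classifier is trained so that its prediction on a strongly augmented unlabeled example matches the confident pseudo-label obtained from the weakly augmented version.

```python
import torch
import torch.nn.functional as F

def consistency_loss(abc_head, feat_weak, feat_strong, threshold=0.95):
    """Sketch of consistency regularization for an auxiliary classifier head:
    pseudo-labels computed on weakly augmented inputs are enforced on the
    corresponding strongly augmented inputs, keeping only confident
    predictions. All names are placeholders for illustration.
    """
    with torch.no_grad():
        probs_weak = torch.softmax(abc_head(feat_weak), dim=1)
        max_prob, pseudo_label = probs_weak.max(dim=1)
        mask = (max_prob >= threshold).float()  # keep only confident pseudo-labels

    logits_strong = abc_head(feat_strong)
    per_sample = F.cross_entropy(logits_strong, pseudo_label, reduction="none")
    return (mask * per_sample).mean()
```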
Appendix
In this section, we provide and expand upon a toy example. Example 5. Suppose that the regulatory guideline requires that users in the same geographical location receive similar weather forecasts. This can be written as "the weather forecasts that are selected by F should be similar for all users in the same geographical location", where S could be a randomly generated set of user pairs, each pair corresponding to two (hypothetical) users in the same geographical location; S could contain pairs across many locations. The question of how to quantify "similarity" is addressed in Section 2.1. The toy example in Example 5 is illustrated in the right-most panel.
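A minimal sketch of how such a set S could be generated and checked is given below; the function and variable names are illustrative placeholders, and the absolute difference used here merely stands in for the notions of similarity treated formally in Section 2.1.

```python
import random

def build_pair_set(users_by_location, pairs_per_location, seed=0):
    """Construct the set S from the toy example: for each geographical
    location, sample random pairs of (hypothetical) users, so that S
    contains pairs across many locations."""
    rng = random.Random(seed)
    S = []
    for users in users_by_location.values():
        for _ in range(pairs_per_location):
            S.append(tuple(rng.sample(users, 2)))
    return S

def count_violations(forecast, S, tolerance):
    """Count pairs whose forecasts differ by more than `tolerance`."""
    return sum(1 for u, v in S if abs(forecast[u] - forecast[v]) > tolerance)
```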
RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer
Jian Wang and Qiman Wu
Recently, transformer-based networks have shown impressive results in semantic segmentation. Yet for real-time semantic segmentation, pure CNN-based approaches still dominate the field, due to the time-consuming computation mechanism of transformers. We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmentation, which achieves a better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. In addition, we find that cross-resolution attention is more efficient at gathering global context information for the high-resolution branch by spreading the high-level knowledge learned from the low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of our proposed RTFormer: it achieves state-of-the-art results on Cityscapes, CamVid and COCOStuff, and shows promising results on ADE20K.
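To illustrate what attention with linear complexity and no multi-head split can look like, here is a rough external-attention-style sketch in PyTorch. It is an illustration of the general idea only, not the actual GPU-Friendly Attention or cross-resolution attention modules of RTFormer; all module and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class LinearSingleHeadAttention(nn.Module):
    """External-attention-style block: complexity is linear in the number of
    pixels and there is no multi-head split. Illustrative sketch only."""

    def __init__(self, dim, num_external_tokens=128):
        super().__init__()
        self.to_keys = nn.Linear(dim, num_external_tokens, bias=False)    # learnable external keys
        self.to_values = nn.Linear(num_external_tokens, dim, bias=False)  # learnable external values

    def forward(self, x):                        # x: (batch, n_pixels, dim)
        attn = self.to_keys(x)                   # (batch, n_pixels, num_external_tokens)
        attn = torch.softmax(attn, dim=1)        # normalize over pixels
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)  # second normalization over keys
        return self.to_values(attn)              # (batch, n_pixels, dim)
```

Because the attention map is computed against a fixed number of learnable external tokens rather than against all other pixels, the cost grows linearly with the number of pixels instead of quadratically.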
Supplemental Material for Wavelet Flow: Fast Training of High Resolution Normalizing Flows
Jason J. Yu and Marcus A. Brubaker
In this section, we describe the annealed sampling process. First, we describe how MCMC can be used to draw samples from any distribution constructed as a normalizing flow. Next, we describe how the Wavelet Flow structure in particular is used to enable faster sampling.
B.1 MCMC on an Annealed Flow
The target distribution for MCMC is the annealed normalizing flow and can be written as $\pi_\gamma(x) \propto p(x)^\gamma$. Since we know the form of the density is closely related to a known normalizing flow, we can use the inverse of this flow, $g$, to reparameterize the density such that it becomes exactly Gaussian (and hence easier to sample) when $\gamma = 1$. For $\gamma \neq 1$ the geometry should still be close to Gaussian and hence easier to sample from, particularly for values of $\gamma$ close to 1. Reparameterizing the annealed distribution in terms of $z = g(x)$ gives $\pi_\gamma(z) \propto \mathcal{N}(z; 0, I)^{\gamma} \left|\det \tfrac{\partial g}{\partial x}\right|^{\gamma - 1}$, with the Jacobian evaluated at $x = g^{-1}(z)$. In practice, we found that sampling in terms of $z$ using the NUTS algorithm [3] is more efficient and can be done with a larger step size and fewer divergences, compared to sampling in terms of $x$.
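A small sketch of the reparameterized log-density that a gradient-based sampler such as NUTS would target is given below. The flow interface used here (`inverse_and_logdet`) is a placeholder assumption, not an API from the paper's code; when gamma equals 1 the expression reduces to a standard Gaussian log-density in z, matching the discussion above.

```python
import torch

def annealed_log_density_z(z, flow, gamma):
    """Reparameterized annealed log-density, up to an additive constant.
    Assumes `flow` exposes the inverse map g^{-1} together with its
    log-determinant: x, logdet_dx_dz = flow.inverse_and_logdet(z).
    """
    x, logdet_dx_dz = flow.inverse_and_logdet(z)   # log |det d g^{-1}(z) / dz|
    log_gauss = -0.5 * (z ** 2).sum()              # log N(z; 0, I) up to a constant
    # log pi_gamma(z) = gamma * log N(z; 0, I) + (gamma - 1) * log |det dg/dx| + const,
    # where log |det dg/dx| at x = g^{-1}(z) equals -logdet_dx_dz.
    return gamma * log_gauss - (gamma - 1.0) * logdet_dx_dz
```

Any gradient-based MCMC sampler can then be run on this log-density in z rather than in x.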
Everything you need to know from Google I/O 2025
From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago. But we know what you're really here for: product updates and new product announcements.