AllClear: A Comprehensive Dataset and Benchmark for Cloud Removal in Satellite Imagery
Clouds in satellite imagery pose a significant challenge for downstream applications. A major obstacle in current cloud removal research is the absence of a comprehensive benchmark and a sufficiently large and diverse training dataset. To address this problem, we introduce AllClear, the largest public dataset for cloud removal, featuring 23,742 globally distributed regions of interest (ROIs) with diverse land-use patterns, comprising 4 million images in total. Each ROI includes complete temporal captures from the year 2022, with (1) multi-spectral optical imagery from Sentinel-2 and Landsat 8/9, (2) synthetic aperture radar (SAR) imagery from Sentinel-1, and (3) auxiliary remote sensing products such as cloud masks and land cover maps. We validate the effectiveness of our dataset by benchmarking performance, demonstrating a scaling law (PSNR rises from 28.47 to 33.87 with 30x more data), and conducting ablation studies on the temporal length and the importance of individual modalities. This dataset aims to provide comprehensive coverage of the Earth's surface and promote better cloud removal results.
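PSNR, the metric reported above, is a standard measure of reconstruction fidelity. A minimal sketch of how it can be computed with NumPy (the array shapes are illustrative assumptions, not the dataset's actual format):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a cloud-free reference and a reconstruction."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Illustrative use on a synthetic 13-band, Sentinel-2-like patch with values in [0, 1].
rng = np.random.default_rng(0)
clear = rng.random((13, 64, 64))
noisy = np.clip(clear + rng.normal(0.0, 0.02, clear.shape), 0.0, 1.0)
print(round(psnr(clear, noisy), 2))
```

Higher PSNR means lower mean squared error relative to the signal's peak, so the jump from 28.47 to 33.87 dB corresponds to a large reduction in reconstruction error.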
A Broader Impact and Limitation Discussion
Monitoring, estimating, and explaining the performance of deployed ML models is a growing area with significant economic and social impact. In this paper, we propose SJS, a new data distribution shift model for settings where both labels and features shift after model deployment. We show how SJS generalizes existing data shift models, and further propose SEES, a generic framework that efficiently explains and estimates an ML model's performance under SJS. This may serve as a monitoring tool to help ML practitioners recognize performance changes, discover potential fairness issues, and make appropriate business decisions (e.g., switching to other models or debugging the existing ones). One general limitation is adaptation to continuously changing data streams.
A Proof of Proposition
A.1 Problem Definition
We consider two binary classification tasks, with task labels Y_1 and Y_2 drawn from two different distributions. For simplicity, we assume the two label values are sampled with equal probability, i.e., P(Y_1 = 1) = P(Y_2 = 1) = 0.5; our conclusion can be extended to unbalanced distributions. In this paper, we mainly study the spurious correlation between task labels. We first consider the setting where we are given infinitely many samples. If we assume there is no traditional factor-label spurious correlation in single-task learning, the Bayes-optimal classifier will only take each task's causal factor as a feature and assign zero weight to non-causal factors.
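The zero-weight claim can be checked numerically. A quick sketch with synthetic data, using a linear least-squares probe as a stand-in for the Bayes-optimal classifier (all variable names and the noise model are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Balanced binary labels: P(Y = 1) = P(Y = -1) = 0.5.
y = rng.choice([-1.0, 1.0], size=n)

# The causal factor carries the label signal; the non-causal factor is
# independent noise, i.e., there is no factor-label spurious correlation.
causal = y + rng.normal(0.0, 1.0, size=n)
non_causal = rng.normal(0.0, 1.0, size=n)

X = np.column_stack([causal, non_causal])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted weight on the non-causal factor shrinks toward zero as n grows.
print(np.round(w, 3))
```

With many samples the probe recovers a clearly positive weight on the causal factor and a near-zero weight on the non-causal one, matching the infinite-sample argument above.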
Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback
We consider regret minimization for Adversarial Markov Decision Processes (AMDPs), where the loss functions change over time and are adversarially chosen, and the learner only observes the losses for the visited state-action pairs (i.e., bandit feedback). While there has been a surge of studies on this problem using Online-Mirror-Descent (OMD) methods, very little is known about Follow-the-Perturbed-Leader (FTPL) methods, which are usually computationally more efficient and also easier to implement, since they only require solving an offline planning problem. Motivated by this, we take a closer look at FTPL for learning AMDPs, starting from the standard episodic finite-horizon setting. We find some unique and intriguing difficulties in the analysis and propose a workaround to eventually show that FTPL is also able to achieve near-optimal regret bounds in this case. More importantly, we then find two significant applications: First, the analysis of FTPL turns out to be readily generalizable to delayed bandit feedback with order-optimal regret, while OMD methods exhibit extra difficulties (Jin et al., 2022). Second, using FTPL, we also develop the first no-regret algorithm for learning communicating AMDPs in the infinite-horizon setting with bandit feedback and stochastic transitions. Our algorithm is efficient assuming access to an offline planning oracle, while even for the easier full-information setting, the only existing algorithm (Chandrasekaran and Tewari, 2021) is computationally inefficient.
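To make the FTPL template concrete, here is a minimal sketch in the much simpler full-information experts setting, not the authors' AMDP algorithm: each round, add a fresh random perturbation to the cumulative losses and play the minimizer, with the argmin playing the role the offline planning oracle plays in the AMDP case. The exponential perturbations, the loss sequence, and the scale `eta` are illustrative assumptions.

```python
import numpy as np

def ftpl_experts(losses: np.ndarray, eta: float, seed: int = 0) -> float:
    """Follow-the-Perturbed-Leader on a K-armed full-information experts problem.

    losses: (T, K) adversarial loss matrix with entries in [0, 1].
    eta:    perturbation scale; larger eta means more exploration.
    Returns the learner's total loss.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cum = np.zeros(K)        # cumulative losses observed so far, per arm
    total = 0.0
    for t in range(T):
        # Fresh exponential perturbation each round; the argmin below is the
        # analogue of calling an offline planning oracle in the AMDP setting.
        perturb = rng.exponential(scale=eta, size=K)
        arm = int(np.argmin(cum - perturb))
        total += losses[t, arm]
        cum += losses[t]     # full information: every arm's loss is observed
    return total

# Illustrative adversarial sequence in which arm 0 is best in hindsight.
rng = np.random.default_rng(1)
L = rng.random((2000, 5))
L[:, 0] *= 0.5
regret = ftpl_experts(L, eta=20.0) - L[:, 0].sum()  # loss vs. best fixed arm
print(round(regret, 1))
```

The bandit-feedback and MDP versions studied in the paper replace the observed losses with estimated ones and the argmin with an offline planner over policies; this sketch only illustrates the perturb-then-leader structure.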