Well File:
- Well Planning
- Shallow Hazard Analysis
- Well Plat
- Wellbore Schematic
- Directional Survey
- Fluid Sample
- Log
  - Density
  - Gamma Ray
  - Mud
  - Resistivity
- Report
  - Daily Report
  - End of Well Report
  - Well Completion Report
- Rock Sample
Google made it clear at I/O that AI will soon be inescapable
Unsurprisingly, the bulk of Google's announcements at I/O this week focused on AI. Past Google I/O events have also leaned heavily on AI, but what made this year's announcements different is that the features were spread across nearly every Google offering and touched nearly every task people perform each day. Because I'm an AI optimist, and my job as an AI editor involves testing tools, I have always been pretty open to using AI to optimize my daily tasks. However, Google's keynote made it clear that even those who are less open to it will soon find it unavoidable. Moreover, the tech giant's announcements shed light on the industry's future, revealing three major trends about where AI is headed, which you can read more about below.
Improved Feature Distillation via Projector Ensemble. Yudong Chen, Sen Wang
In knowledge distillation, previous feature distillation methods mainly focus on the design of loss functions and the selection of the distilled layers, while the effect of the feature projector between the student and the teacher remains underexplored. In this paper, we first discuss a plausible mechanism of the projector with empirical evidence and then propose a new feature distillation method based on a projector ensemble for further performance improvement. We observe that the student network benefits from a projector even when the feature dimensions of the student and the teacher are the same. Training a student backbone without a projector can be considered a multi-task learning process: the backbone must simultaneously extract discriminative features for classification and match the teacher's features for distillation. We hypothesize and empirically verify that, without a projector, the student network tends to overfit the teacher's feature distributions even though the two networks have different architectures and weight initializations.
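To make the projector-ensemble idea concrete, here is a minimal PyTorch-style sketch. The class and function names are hypothetical, and the choices of linear-plus-BatchNorm projectors, output averaging, and an MSE feature-matching loss are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectorEnsemble(nn.Module):
    """Ensemble of simple projectors mapping student features into the
    teacher's feature space; the projected outputs are averaged."""
    def __init__(self, student_dim: int, teacher_dim: int, num_projectors: int = 3):
        super().__init__()
        self.projectors = nn.ModuleList(
            nn.Sequential(nn.Linear(student_dim, teacher_dim),
                          nn.BatchNorm1d(teacher_dim))
            for _ in range(num_projectors)
        )

    def forward(self, student_feat: torch.Tensor) -> torch.Tensor:
        # Average the ensemble members' projections.
        return torch.stack([p(student_feat) for p in self.projectors]).mean(dim=0)

def feature_distillation_loss(student_feat, teacher_feat, projector):
    # Match projected student features to the (frozen) teacher features.
    return F.mse_loss(projector(student_feat), teacher_feat.detach())
```

In practice this distillation term would be added to the usual classification loss, so the backbone itself is not forced to do feature matching in its own output space.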
The ability to detect and count certain substructures in graphs is important for solving many tasks on graph-structured data, especially in the contexts of computational chemistry and biology as well as social network analysis. Inspired by this, we propose to study the expressive power of graph neural networks (GNNs) via their ability to count attributed graph substructures, extending recent works that examine their power in graph isomorphism testing and function approximation. We distinguish between two types of substructure counting: induced-subgraph-count and subgraph-count, and establish both positive and negative answers for popular GNN architectures. Specifically, we prove that Message Passing Neural Networks (MPNNs), 2-Weisfeiler-Lehman (2-WL) and 2-Invariant Graph Networks (2-IGNs) cannot perform induced-subgraph-count of any connected substructure consisting of 3 or more nodes, while they can perform subgraph-count of star-shaped substructures. As an intermediary step, we prove that 2-WL and 2-IGNs are equivalent in distinguishing non-isomorphic graphs, partly answering an open problem raised in [38]. We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with a finite number of iterations. We then conduct experiments that support the theoretical results for MPNNs and 2-IGNs. Moreover, motivated by substructure counting and inspired by [45], we propose the Local Relational Pooling model and demonstrate that it is not only effective for substructure counting but also able to achieve competitive performance on molecular prediction tasks.
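To illustrate the distinction between the two counting notions, the brute-force Python sketch below (helper names are mine) counts the node subsets that host a copy of a pattern: subgraph-count only requires the pattern's edges to be present, while induced-subgraph-count additionally forbids any extra edges inside the subset.

```python
from itertools import combinations, permutations

def count_subgraphs(graph_edges, nodes, pattern_edges, pattern_nodes, induced=False):
    """Count node subsets hosting at least one copy of the pattern."""
    G = {frozenset(e) for e in graph_edges}
    count = 0
    for subset in combinations(nodes, len(pattern_nodes)):
        for mapping in permutations(subset):
            m = dict(zip(pattern_nodes, mapping))
            mapped = {frozenset((m[u], m[v])) for u, v in pattern_edges}
            if not mapped <= G:
                continue  # some pattern edge is missing
            if induced:
                # every graph edge inside the subset must be a pattern edge
                inside = {e for e in G if e <= set(subset)}
                if inside != mapped:
                    continue
            count += 1
            break  # count each node subset at most once
    return count

# Example: a triangle contains the 3-node path as a subgraph but not as
# an induced subgraph (the extra closing edge violates inducedness).
tri = [(0, 1), (1, 2), (0, 2)]
path = [(0, 1), (1, 2)]
print(count_subgraphs(tri, [0, 1, 2], path, [0, 1, 2]))                # 1
print(count_subgraphs(tri, [0, 1, 2], path, [0, 1, 2], induced=True))  # 0
```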
Pretraining with Random Noise for Fast and Robust Learning without Weight Transport. Sang Wan Lee, Se-Bum Paik
The brain prepares for learning even before interacting with the environment, refining and optimizing its structures through spontaneous neural activity that resembles random noise. However, the mechanism of this process is not yet understood, and it is unclear whether it can benefit machine learning algorithms. Here, we study this question using a neural network trained with a feedback alignment algorithm, demonstrating that pretraining neural networks with random noise increases learning efficiency as well as generalization ability, without weight transport. First, we found that random noise training modifies the forward weights to match the backward synaptic feedback, which is necessary for feedback alignment to deliver useful teaching errors. As a result, a network with pre-aligned weights learns notably faster and reaches higher accuracy than a network without random noise training, even becoming comparable to the backpropagation algorithm.
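As a rough illustration of the setup, the NumPy sketch below trains one hidden layer with feedback alignment, routing errors through a fixed random matrix B instead of the transposed forward weights (hence no weight transport), and pretrains on pure noise. All names, shapes, and hyperparameters are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedback_alignment_step(x, y, W1, W2, B, lr=0.01):
    """One training step with feedback alignment (no weight transport)."""
    h = np.tanh(x @ W1)              # hidden activations
    y_hat = h @ W2                   # linear readout
    e = y_hat - y                    # output error
    dW2 = h.T @ e
    dh = (e @ B) * (1 - h ** 2)      # error routed through fixed B, not W2.T
    dW1 = x.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2
    return W1, W2

d_in, d_h, d_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (d_in, d_h))
W2 = rng.normal(0, 0.1, (d_h, d_out))
B = rng.normal(0, 0.1, (d_out, d_h))   # fixed random feedback weights

# Noise pretraining: random inputs paired with random targets.
for _ in range(1000):
    x = rng.normal(size=(32, d_in))
    y = rng.normal(size=(32, d_out))
    W1, W2 = feedback_alignment_step(x, y, W1, W2, B)
```

The intent of the noise phase is to let the forward weights drift into alignment with the fixed feedback weights before any real data arrives.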
A.1 Bayesian Optimization Based Search
In this procedure, we build a model of the accuracy of unevaluated BSSCs based on the evaluated ones. A Gaussian Process (GP) [1] is a standard way to achieve this in the Bayesian optimization literature [2]. When selecting the first BSSC, equation 2 can be used directly; for the remaining selections, however, the outcomes of the pending evaluations are not yet known, so equation 2 cannot be applied as-is. Therefore, we use the expected value of the EI function (EEI) [4] instead. The value of equation 3 is calculated via Monte Carlo simulations [4] in our method.
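A minimal sketch of how EI and a Monte Carlo EEI could look is given below, assuming equation 2 is the standard expected-improvement formula and equation 3 its expectation over a pending point's unknown outcome. The function names, and the simplification that a sampled outcome only updates the incumbent best rather than the full GP posterior, are my assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """EI under a Gaussian posterior (assumed form of equation 2); sigma > 0."""
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def expected_ei(mu_pending, sigma_pending, mu, sigma, best, n_samples=1000, seed=0):
    """EEI (assumed form of equation 3): average EI over Monte Carlo samples
    of a pending candidate's unknown accuracy."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mu_pending, sigma_pending, n_samples)
    bests = np.maximum(best, samples)  # a sample may raise the incumbent best
    return float(np.mean([expected_improvement(mu, sigma, b) for b in bests]))
```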
AutoBSS: An Efficient Algorithm for Block Stacking Style Search
Neural network architecture design mostly focuses on new convolutional operators or special topological structures for network blocks, while little attention is paid to the configuration in which the blocks are stacked, called the Block Stacking Style (BSS). Recent studies show that BSS can also have a non-negligible impact on network performance, so we design an efficient algorithm to search for it automatically. The proposed method, AutoBSS, is a novel AutoML algorithm based on Bayesian optimization that iteratively refines and clusters Block Stacking Style Codings (BSSCs), and it can find an optimal BSS in a few trials without biased evaluation.
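For intuition only: a BSS can be pictured as the number of blocks stacked in each stage of a network, and a search can propose small local edits to that configuration. The list encoding and the neighborhood move below are hypothetical stand-ins, not the paper's actual BSSC scheme or its refine-and-cluster procedure.

```python
# Hypothetical encoding: a BSSC as the number of blocks stacked per stage.
resnet50_bssc = [3, 4, 6, 3]  # the familiar ResNet-50 stacking style

def bssc_neighbors(bssc, max_blocks=8):
    """Yield candidate BSSCs that differ by one block in one stage
    (an illustrative local move for iterative refinement)."""
    for i in range(len(bssc)):
        for delta in (-1, 1):
            n = bssc[i] + delta
            if 1 <= n <= max_blocks:
                candidate = list(bssc)
                candidate[i] = n
                yield candidate
```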
We appreciate the reviewers' constructive comments on this paper. One common concern is that our baseline for RetinaNet/Mask R-CNN is not strong: our results are trained from scratch (Section 4.5), while most results from other papers or model zoos are fine-tuned from a pre-trained model, and the from-scratch baseline of "Rethinking ImageNet Pre-Training" is comparable with ours (39.5% vs. 39.24%).
R1Q1: How do the latencies change on GPU?
R1Q2: The improvements are not large.
R1Q3: Compare to prior BSS search methods like POP [22] in Table 1.
Is the Nintendo Switch the best console of its generation, or just the most meaningful to me?
The lifespan of a games console has extended a lot since I was a child. In the 1990s, this kind of technology would be out of date after just a couple of years. There would be some tantalising new machine out before you knew it, everybody competing to be on the cutting edge: the Game Boy and Sega Genesis/Mega Drive in 1989 were followed by the Game Gear in 1990 and the Super NES in 1991. Five years was a long life for a gaming machine. The Nintendo Switch 2 will be released in a couple of weeks, more than eight years since I first picked an original Switch up off its dock and marvelled at the instant transition to portable play.
Plants can hear tiny wing flaps of pollinators
Our planet runs on pollinators. Without bees, moths, weevils, and more zooming around and spreading plants' reproductive cells, plants and important crops would not grow. Without plants we would not breathe or eat. When these crucial pollinating species visit flowers and other plants, they produce a number of characteristic sounds, such as wing flapping when hovering, landing, and taking off.
I tried Google's XR glasses and they already beat my Meta Ray-Bans in 3 ways
Google unveiled a slew of new AI tools and features at I/O, dropping the term Gemini 95 times and AI 92 times. However, the best announcement of the entire show wasn't an AI feature; rather, the title went to one of the two hardware products announced -- the Android XR glasses. For the first time, Google gave the public a look at its long-awaited smart glasses, which pack Gemini's assistance, in-lens displays, speakers, cameras, and mics into the form factor of traditional eyeglasses. I had the opportunity to wear them for five minutes, during which I ran through a demo of using them to get visual Gemini assistance, take photos, and get navigation directions. As a Meta Ray-Bans user, I couldn't help but notice the similarities and differences between the two smart glasses -- and the features I now wish my Meta pair had.