Well File:
- Well Planning
- Shallow Hazard Analysis
- Well Plat
- Wellbore Schematic
- Directional Survey
- Fluid Sample
- Log
  - Density
  - Gamma Ray
  - Mud
  - Resistivity
- Report
  - Daily Report
  - End of Well Report
  - Well Completion Report
- Rock Sample
Nonlinear dynamics of localization in neural receptive fields
Localized receptive fields--neurons that are selective for certain contiguous spatiotemporal features of their input--populate early sensory regions of the mammalian brain. Unsupervised learning algorithms that optimize explicit sparsity or independence criteria replicate features of these localized receptive fields, but fail to explain directly how localization arises through learning without efficient coding, as occurs in early layers of deep neural networks and might occur in early sensory regions of biological systems. We consider an alternative model in which localized receptive fields emerge without explicit top-down efficiency constraints--a feedforward neural network trained on a data model inspired by the structure of natural images. Previous work identified the importance of non-Gaussian statistics to localization in this setting but left open questions about the mechanisms driving dynamical emergence. We address these questions by deriving the effective learning dynamics for a single nonlinear neuron, making precise how higher-order statistical properties of the input data drive emergent localization, and we demonstrate that the predictions of these effective dynamics extend to the many-neuron setting. Our analysis provides an alternative explanation for the ubiquity of localization as resulting from the nonlinear dynamics of learning in neural circuits.
More than £1bn earmarked for battlefield tech
Announcing the results of the review, the MoD said a new Digital Targeting Web would better connect soldiers on the ground with key information provided by satellites, aircraft and drones, helping them target enemy threats faster. Defence Secretary John Healey said the technology announced in the review - which will harness Artificial Intelligence (AI) and software - also highlights lessons being learnt from the war in Ukraine. Ukraine is already using AI and software to speed up the process of identifying, and then hitting, Russian military targets. The review had been commissioned by the newly formed Labour government shortly after last year's election, with Healey describing it as the "first of its kind". The government said the findings would be published in the first half of 2025, but did not give an exact date.
Learning 1D Causal Visual Representation with De-focus Attention Networks
Modality differences have led to the development of heterogeneous architectures for vision and language models. While images typically require 2D non-causal modeling, texts utilize 1D causal modeling. This distinction poses significant challenges in constructing unified multi-modal models. This paper explores the feasibility of representing images using 1D causal modeling. We identify an "over-focus" issue in existing 1D causal vision models, where attention overly concentrates on a small proportion of visual tokens.
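One way to make the "over-focus" observation concrete is to measure how concentrated each causal attention row is, for example via its entropy: low mean entropy indicates that attention mass sits on a small proportion of visual tokens. The diagnostic below is a minimal, hypothetical sketch of that idea in PyTorch; it is an illustration on our part, not a metric taken from the paper.

```python
import torch

def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    """Mean entropy of attention rows; lower values mean attention mass is
    concentrated on fewer tokens (one possible reading of 'over-focus').
    attn_weights: (batch, heads, seq, seq), rows already softmax-normalized."""
    eps = 1e-9
    row_entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)  # (batch, heads, seq)
    return row_entropy.mean()

# Toy usage with random causal attention (illustrative only):
scores = torch.randn(2, 4, 16, 16)
causal_mask = torch.triu(torch.ones(16, 16, dtype=torch.bool), diagonal=1)
attn = scores.masked_fill(causal_mask, float("-inf")).softmax(dim=-1)
print(attention_entropy(attn))
```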
A Appendix
We used the largest batch size that could fit in memory on our limited hardware, which was 256 for an image size of 224x224. For the learning rate (Adam [2] optimizer) we searched over {0.001, 0.0001, 5e-4, 5e-5}, with weight decay in {0, 5e-4}. We chose a weight decay of 5e-5 and a learning rate of 5e-4 until the 4:6 split, and 1e-4 afterwards. For UT-Zappos we again used the Adam optimizer, with learning rate in {5e-5, 5e-4, 5e-3} and weight decay in {0, 5e-4}. For the rest of the parameters we searched the same ranges as above, and the same choices were optimal as for AO-Clevr.
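As a concrete illustration of the search described above, the sketch below sets up an Adam-based grid over the listed learning rates and weight decays. It is a minimal, hypothetical PyTorch sketch: the model constructor, data loader, and validation scoring are placeholders, not the authors' code.

```python
import itertools

import torch
import torch.nn as nn

# Grids reflecting the ranges quoted above (values only; everything else here
# is a placeholder assumption, not taken from the paper).
LEARNING_RATES = [1e-3, 1e-4, 5e-4, 5e-5]
WEIGHT_DECAYS = [0.0, 5e-4]

def train_one_config(model_fn, train_loader, lr, weight_decay, epochs=10):
    """Train one hyper-parameter configuration with Adam and return the model."""
    model = model_fn()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:  # batch size 256 at 224x224, as noted above
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Example sweep (validation-set scoring omitted for brevity):
# for lr, wd in itertools.product(LEARNING_RATES, WEIGHT_DECAYS):
#     model = train_one_config(model_fn, train_loader, lr, wd)
```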
Supplementary Material
This supplementary material provides implementation details, hyper-parameter settings, additional results, and visualisations.
- Section A presents a focus on the design choices we use for IMGEP-HOLMES.
- Section B provides implementation details for the main paper evaluation procedure:
  - B.1: Quantitative evaluation of diversity
  - B.2: Quantitative evaluation of representational similarity
  - B.3: Human-evaluator selection of the BC spaces for evaluating SLP and TLP diversity
- Section C provides all necessary implementation details for reproducing the main paper experiments:
  - C.1: Lenia environment settings
  - C.2: Parameter-sampling policy Π settings for Lenia's initial state and update rule
  - C.3: Settings for training the BC spaces in IMGEP-VAE and IMGEP-HOLMES
- Section D provides additional results that complete the ones from the main paper:
  - D.1: Complete RSA analysis of the hierarchy of behavioral characterizations learned in HOLMES
  - D.2: Additional IMGEP baselines with a monolithic BC space
  - D.3: Ablation study of the impact of the lateral connections in HOLMES
- Section E discusses the comparison of HOLMES with other model-expansion architectures.
- Section F provides qualitative visualisations of the hierarchical trees that were autonomously constructed by the different IMGEP-HOLMES variants.
Source code: please refer to the project website http://mayalenE.github.io/holmes/
Please refer to Section 3 of the main paper for the step-by-step implementation choices and to Section C in the appendix for the implementation details. We summarize those components in Figure 6.
Figure 6: Focus on the different design choices made for the HOLMES architecture. All non-leaf node VAEs are frozen, as well as their incoming lateral connections (light grey). The leaf nodes are incrementally trained on their own niches of patterns (represented as colored squares above the embeddings), defined by the boundaries fitted at each node split (curved dotted lines in each BC space, represented as clouds).
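To make the freeze-and-split scheme described for Figure 6 concrete, here is a minimal, hypothetical PyTorch sketch of a HOLMES-like node: when a node splits, its VAE and incoming lateral connection are frozen and two trainable leaf children are attached. Class names, layer sizes, and the two-way split are illustrative assumptions, not the authors' implementation.

```python
import torch.nn as nn

class SmallVAE(nn.Module):
    """Toy VAE encoder/decoder standing in for one node's BC-space model."""
    def __init__(self, in_dim=256, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.mu(h), self.logvar(h)

class HierarchyNode(nn.Module):
    """One node of a HOLMES-like hierarchy: a VAE plus an incoming lateral
    connection from the parent's latent space. Non-leaf nodes are frozen."""
    def __init__(self, in_dim=256, latent_dim=8, parent_latent_dim=None):
        super().__init__()
        self.latent_dim = latent_dim
        self.vae = SmallVAE(in_dim, latent_dim)
        self.lateral = (nn.Linear(parent_latent_dim, latent_dim)
                        if parent_latent_dim is not None else None)
        self.child_nodes = nn.ModuleList()

    def split(self, in_dim=256, latent_dim=8):
        """Freeze this node (it becomes non-leaf) and attach two trainable leaves."""
        for p in self.parameters():  # only this node's VAE + lateral at this point
            p.requires_grad_(False)
        left = HierarchyNode(in_dim, latent_dim, parent_latent_dim=self.latent_dim)
        right = HierarchyNode(in_dim, latent_dim, parent_latent_dim=self.latent_dim)
        self.child_nodes.extend([left, right])
        return left, right

# Usage: the root trains first, then freezes when it splits; only the leaves
# (and their incoming lateral connections) keep receiving gradients.
root = HierarchyNode()
leaf_a, leaf_b = root.split()
```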
Hierarchically Organized Latent Modules for Exploratory Search in Morphogenetic Systems
Self-organization of complex morphological patterns from local interactions is a fascinating phenomenon in many natural and artificial systems. In the artificial world, typical examples of such morphogenetic systems are cellular automata. Yet, their mechanisms are often very hard to grasp, and so far scientific discoveries of novel patterns have primarily relied on manual tuning and ad hoc exploratory search. The problem of automated diversity-driven discovery in these systems was recently introduced [26, 62], highlighting that two key ingredients are autonomous exploration and unsupervised representation learning to describe "relevant" degrees of variation in the patterns. In this paper, we motivate the need for what we call Meta-diversity search, arguing that there is no unique ground-truth interesting diversity, as it strongly depends on the final observer and its motives. Using a continuous game-of-life system for experiments, we provide empirical evidence that relying on monolithic architectures for the behavioral embedding design tends to bias the final discoveries (both for hand-defined and unsupervisedly learned features), which are unlikely to be aligned with the interests of a final end-user. To address these issues, we introduce a novel dynamic and modular architecture that enables unsupervised learning of a hierarchy of diverse representations. Combined with intrinsically motivated goal exploration algorithms, we show that this system forms a discovery assistant that can efficiently adapt its diversity search towards the preferences of a user using only a very small amount of user feedback.