AI can spontaneously develop human-like communication, study finds
Artificial intelligence can spontaneously develop human-like social conventions, a study has found. The research, undertaken in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement they can begin to adopt linguistic forms and social norms the same way that humans do when they socialise. The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research done into AI, as it treated AI as a social rather than solitary entity. "Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents," said Ashery. "We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."
When it comes to crime, you can't algorithm your way to safety
The UK government's proposed AI-powered crime prediction tool, designed to flag individuals deemed "high risk" for future violence based on personal data like mental health history and addiction, marks a provocative new frontier. Elsewhere, Argentina's new Artificial Intelligence Unit for Security intends to use machine learning for crime prediction and real-time surveillance. And in some US cities, AI facial recognition is paired with street surveillance to track suspects. The promise of anticipating violence Minority Report-style is compelling.
Who needs Eurovision when we have the Dance Your PhD contest?
Feedback is New Scientist's popular sideways look at the latest science and technology news. You can submit items you believe may amuse readers to Feedback by emailing feedback@newscientist.com. Saturday 17 May will see the final of this year's Eurovision Song Contest, which will be the most over-the-top evening of television since, well, the previous Eurovision. Feedback is deeply relieved that Feedback Jr appears not to be interested this year, so we might escape having to sit up and watch the entire thing. While we are deeply supportive of the contest's kind and welcoming vibe, most of the songs make our ears bleed.
It's raining tiny toxic frogs
Poison dart frogs are hard to miss. They're bright, agile, and, as their name suggests, toxic. But at least a few of these showy amphibians have flown under the radar until now. Scientists surveying a difficult-to-reach area of the Brazilian Amazon report two new species in a set of recent papers. The first, published in April in the journal ZooKeys, describes the teal-and-black Ranitomeya aquamarina.
Labour's open door to big tech leaves critics crying foul
The problem with the UK, according to the former Google boss Eric Schmidt, is that it has "so many ways that people can say no". However, for some critics of the Labour government, it has a glaring issue with saying yes: to big tech. Schmidt made his comment in a Q&A conversation with Keir Starmer at a big investment summit in October last year. The prominent position of a tech bigwig at the event underlined the importance of the sector to a government that has made growth a priority and believes the sector is crucial to achieving it. Top US tech firms have a big presence in the UK, including Google, Mark Zuckerberg's Meta, Amazon, Apple, Microsoft and Palantir, the data intelligence firm co-founded by the Maga movement backer Peter Thiel.
Rank Diminishing in Deep Neural Networks
The rank of a neural network measures information flowing across layers, and it is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations has led to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear. To fill this gap, we perform a rigorous study of the behavior of network rank, focusing particularly on the notion of rank deficiency. We theoretically establish a universal monotone decreasing property of network ranks from the basic rules of differential and algebraic composition, and uncover rank deficiency of network blocks and deep function coupling.
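This monotone decrease is easy to observe empirically. Below is a minimal sketch (ours, not the paper's measurement protocol) that passes Gaussian data through a randomly initialized deep ReLU MLP and prints the numerical rank of the feature matrix after each layer; the width, depth, and rank tolerance are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Random deep MLP with constant width and He-style Gaussian init.
width, depth, n_samples = 512, 16, 1024
X = rng.standard_normal((n_samples, width))

for layer in range(1, depth + 1):
    W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
    X = relu(X @ W)
    # Numerical rank: count singular values above a relative tolerance.
    s = np.linalg.svd(X, compute_uv=False)
    print(f"layer {layer:2d}: numerical rank = {int((s > 1e-3 * s[0]).sum())}")
```

Under this setup the printed numerical rank typically shrinks with depth, which is the qualitative behavior the paper formalizes.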
Approximate Value Equivalence
Model-based reinforcement learning agents must make compromises about which aspects of the environment their models should capture. The value equivalence (VE) principle posits that these compromises should be made considering the model's eventual use in value-based planning. Given sets of functions and policies, a model is said to be order-k VE to the environment if k applications of the Bellman operators induced by the policies produce the correct result when applied to the functions. Prior work investigated the classes of models induced by VE when we vary k and the sets of policies and functions. This gives rise to a rich collection of topological relationships and conditions under which VE models are optimal for planning.
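For intuition, order-k VE can be checked directly in the tabular case. The sketch below is our illustration, not code from the paper: `bellman_k` and `is_order_k_ve` are hypothetical names, and each policy is represented by its induced transition matrix and reward vector.

```python
import numpy as np

GAMMA = 0.9

def bellman_k(P_pi, r_pi, v, k):
    # Apply the policy Bellman operator k times: T_pi v = r_pi + gamma * P_pi v.
    for _ in range(k):
        v = r_pi + GAMMA * P_pi @ v
    return v

def is_order_k_ve(env, model, policies, functions, k, tol=1e-8):
    # env and model map each policy to its (P_pi, r_pi) pair; the model is
    # order-k VE if the k-step operators agree on every function in the set.
    return all(
        np.allclose(bellman_k(*env[pi], v, k),
                    bellman_k(*model[pi], v, k), atol=tol)
        for pi in policies
        for v in functions
    )

# Tiny two-state usage example (a model identical to the environment is
# trivially order-k VE for any k and any function set).
P = np.array([[0.9, 0.1], [0.2, 0.8]])
r = np.array([1.0, 0.0])
env, model = {"pi": (P, r)}, {"pi": (P, r)}
print(is_order_k_ve(env, model, ["pi"], [np.zeros(2)], k=2))  # True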
Multi-Objective Deep Learning with Adaptive Reference Vectors
Many deep learning models involve optimizing multiple objectives. Since the objectives often conflict, we aim to obtain diverse and representative trade-off solutions among them. Gradient-based multi-objective optimization (MOO) algorithms using reference vectors have shown promising performance. However, they may still produce undesirable solutions due to a mismatch between the pre-specified reference vectors and the problem's underlying Pareto front. In this paper, we propose a novel gradient-based MOO algorithm with adaptive reference vectors. We formulate reference vector adaptation as a bilevel optimization problem and solve it with an efficient solver.
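To make the mismatch concrete, here is a minimal sketch of the fixed-reference-vector baseline that such methods improve on, using a weighted Chebyshev scalarization on a toy two-objective problem. The reference vectors here are pre-specified and non-adaptive, unlike the paper's bilevel scheme, and all names and constants are our own.

```python
import torch

# Toy two-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
a, b = torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])

def objectives(x):
    return torch.stack([((x - a) ** 2).sum(), ((x - b) ** 2).sum()])

def chebyshev_step(x, ref, lr=0.1):
    # Weighted Chebyshev scalarization: minimize max_i ref_i * f_i(x).
    # Each reference vector steers toward a different part of the Pareto front.
    loss = torch.max(ref * objectives(x))
    loss.backward()
    with torch.no_grad():
        x -= lr * x.grad
        x.grad.zero_()

# One run per fixed reference vector yields a set of trade-off solutions;
# a poorly chosen set of vectors can leave parts of the front uncovered.
for ref in (torch.tensor([0.8, 0.2]),
            torch.tensor([0.5, 0.5]),
            torch.tensor([0.2, 0.8])):
    x = torch.zeros(2, requires_grad=True)
    for _ in range(300):
        chebyshev_step(x, ref)
    print(ref.tolist(), [round(f, 3) for f in objectives(x).tolist()])
```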
On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
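The quantity being bounded is concrete: a univariate one-hidden-layer ReLU network f(x) = sum_i v_i relu(w_i x + b_i) is piecewise linear, and its effective linear regions are the maximal intervals on which the slope is constant. Below is a short sketch of that count (ours, not the paper's analysis); since the slope can only change at a neuron's breakpoint x_i = -b_i / w_i, probing the slope between consecutive breakpoints suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_linear_regions(w, b, v):
    # f(x) = sum_i v_i * relu(w_i * x + b_i). The slope can only change at
    # the breakpoints -b_i / w_i, so evaluate f'(x) at one probe point per
    # interval between consecutive breakpoints (plus the two outer rays).
    xs = np.sort(-b / w)
    probes = np.concatenate(([xs[0] - 1.0], (xs[:-1] + xs[1:]) / 2, [xs[-1] + 1.0]))
    active = ((np.outer(probes, w) + b) > 0).astype(float)  # neuron activity
    slopes = active @ (v * w)                                # f'(x) per probe
    # Adjacent intervals with equal slope merge into one region (f is continuous).
    return int((np.abs(np.diff(slopes)) > 1e-10).sum()) + 1

width = 100
w = rng.standard_normal(width)
b = rng.standard_normal(width)
v = rng.standard_normal(width)
print(count_linear_regions(w, b, v))  # at most width + 1 regions
```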
Improved Fine-Tuning by Better Leveraging Pre-Training Data
As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many deep learning applications, especially with small data sets. However, recent studies have empirically shown that, in some vision tasks, training from scratch achieves final performance no worse than this pre-training strategy once the number of training samples is increased. In this work, we revisit this phenomenon from the perspective of generalization analysis, using an excess risk bound, a tool popular in learning theory. The result reveals that the excess risk bound may have only a weak dependency on the pre-trained model. This observation inspires us to leverage the pre-training data during fine-tuning as well, since that data is typically still available at that stage.
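A minimal sketch of the general idea (ours, not the paper's algorithm; it assumes the pre-training and target tasks share a label space and head, and the weight `lam` is an arbitrary choice): fine-tune on the target loss while retaining a weighted auxiliary loss on pre-training batches.

```python
import torch
import torch.nn.functional as F

def finetune_step(model, opt, target_batch, pretrain_batch, lam=0.1):
    # One fine-tuning step on the target loss plus a weighted auxiliary loss
    # on pre-training data; `lam` trades off the two objectives.
    xt, yt = target_batch
    xp, yp = pretrain_batch
    loss = (F.cross_entropy(model(xt), yt)
            + lam * F.cross_entropy(model(xp), yp))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In practice `lam` would be tuned (or annealed toward zero) so the pre-training data regularizes early fine-tuning without dominating the target objective.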