Is our universe the ultimate computer? Scientist uncovers a major clue that we're all living in a simulation
For more than a quarter of a century since its release, 'The Matrix' has fueled modern fears that life is not all it seems. But according to a scientist, the classic movie's premise may not be completely science fiction. Melvin Vopson, an associate professor in physics at the University of Portsmouth, thinks gravity may be a sign that we're all living in a virtual simulation. Our universe is the 'ultimate computer', Professor Vopson theorizes in a new paper. Gravity's pull – both on planet Earth and in outer space – is the universe trying to keep its vast amount of data organised, Professor Vopson claims.
The Download: China's manufacturers' viral moment, and how AI is changing creativity
Since the video was posted earlier this month, millions of TikTok users have watched as a young Chinese man in a blue T-shirt sits beside a traditional tea set and speaks directly to the camera in accented English: "Let's expose luxury's biggest secret." He stands and lifts what looks like an Hermès Birkin bag, one of the world's most exclusive and expensive handbags, before gesturing toward the shelves filled with more bags behind him. "You recognize them: Hermès, Louis Vuitton, Prada, Gucci--all crafted in our workshops." He ends by urging viewers to buy directly from his factory. Video "exposés" like this--where a sales agent breaks down the material cost of luxury goods, from handbags to perfumes to appliances--are everywhere on TikTok right now.
Is Keir Starmer being advised by AI? The UK government won't tell us
Thousands of civil servants at the heart of the UK government, including those working directly to support Prime Minister Keir Starmer, are using a proprietary artificial intelligence chatbot to carry out their work, New Scientist can reveal. Officials have refused to disclose on the record exactly how the tool is being used, whether the prime minister is receiving advice that has been prepared using AI or how civil servants are mitigating the risks of inaccurate or biased AI outputs. Experts say the lack of disclosure raises concerns about government transparency and the accuracy of information being used in government. After securing the world-first release of ChatGPT logs under freedom of information (FOI) legislation, New Scientist asked 20 government departments for records of their interactions with Redbox, a generative AI tool developed in house and trialled among UK government staff. The large language model-powered chatbot allows users to interrogate government documents and to "generate first drafts of briefings", according to one of the people behind its development.
UK regulator wants to ban apps that can make deepfake nude images of children
The UK's Children's Commissioner is calling for a ban on AI deepfake apps that create nude or sexual images of children, according to a new report. It states that such "nudification" apps have become so prevalent that many girls have stopped posting photos on social media. And though creating or uploading CSAM is illegal, apps used to create deepfake nude images are still legal. "Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone -- a stranger, a classmate, or even a friend -- could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," said Children's Commissioner Dame Rachel de Souza.
Exclusive: Trump Pushes Out AI Experts Hired By Biden
The Trump administration has laid out its own ambitious goals for recruiting more tech talent. On April 3, Russell Vought, Trump's Director of the Office of Management and Budget, released a 25-page memo on how federal leaders were expected to accelerate the government's use of AI. "Agencies should focus recruitment efforts on individuals that have demonstrated operational experience in designing, deploying, and scaling AI systems in high-impact environments," Vought wrote. Putting that into action will be harder than it needs to be, says Deirdre Mulligan, who directed the National Artificial Intelligence Initiative Office in the Biden White House. "The Trump Administration's actions have not only denuded the government of talent now, but I'm sure that for many folks, they will think twice about whether or not they want to work in government," Mulligan says. "It's really important to have stability, to have people's expertise be treated with the level of respect it ought to be and to have people not be wondering from one day to the next whether they're going to be employed."
Copilot Arena: A platform for code
Copilot Arena is a VSCode extension that collects human preferences of code directly from developers. As model capabilities improve, large language models (LLMs) are increasingly integrated into user environments and workflows. In particular, software developers code with LLM-powered tools in integrated development environments such as VS Code, IntelliJ, or Eclipse. While these tools are increasingly used in practice, current LLM evaluations struggle to capture how users interact with these tools in real environments, as they are often limited to short user studies, only consider simple programming tasks as opposed to real-world systems, or rely on web-based platforms removed from development environments. To address these limitations, we introduce Copilot Arena, an app designed to evaluate LLMs in real-world settings by collecting preferences directly in a developer's actual workflow.
Commissioner calls for ban on apps that make deepfake nude images of children
Artificial intelligence "nudification" apps that create deepfake sexual images of children should be immediately banned, amid growing fears among teenage girls that they could fall victim, the children's commissioner for England is warning. Girls said they were stopping posting images of themselves on social media out of a fear that generative AI tools could be used to digitally remove their clothes or sexualise them, according to the commissioner's report on the tools, drawing on children's experiences. Although it is illegal to create or share a sexually explicit image of a child, the technology enabling them remains legal, the report noted. "Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," the commissioner, Dame Rachel de Souza, said.
Meta's AI chatbots were reportedly able to engage in sexual conversations with minors
Meta's AI chatbots were caught having sexual roleplay conversations with accounts labeled as underage, which sometimes involved its celebrity-voiced chatbots, according to a report from the Wall Street Journal. In test conversations conducted by WSJ, both the Meta AI official chatbot and user-created chatbots would engage in -- and even steer towards -- sexually explicit conversations. The fantasy sex conversations continued even if the users were said to be underage or if the chatbots were programmed as minors, according to WSJ. Even worse, the investigation found that chatbots using the voices of celebrities like Kristen Bell, Judi Dench and John Cena would engage in these morally questionable conversations too. WSJ reported that a Meta AI chatbot with Cena's voice said, "I want you, but I need to know you're ready," to an account labeled as a 14-year-old, adding that it would "cherish your innocence."
American Panopticon
If you have tips about DOGE and its data collection, you can contact Ian and Charlie on Signal at @ibogost.47 and @cwarzel.92. If you were tasked with building a panopticon, your design might look a lot like the information stores of the U.S. federal government--a collection of large, complex agencies, each making use of enormous volumes of data provided by or collected from citizens. The federal government is a veritable cosmos of information, made up of constellations of databases: The IRS gathers comprehensive financial and employment information from every taxpayer; the Department of Labor maintains the National Farmworker Jobs Program (NFJP) system, which collects the personal information of many workers; the Department of Homeland Security amasses data about the movements of every person who travels by air commercially or crosses the nation's borders; the Drug Enforcement Administration tracks license plates scanned on American roads. More obscure agencies, such as the recently gutted Consumer Financial Protection Bureau, keep records of corporate trade secrets, credit reports, mortgage information, and other sensitive data, including lists of people who have fallen on financial hardship. A fragile combination of decades-old laws, norms, and jungly bureaucracy has so far prevented repositories such as these from assembling into a centralized American surveillance state. But that appears to be changing. Since Donald Trump's second inauguration, Elon Musk and the Department of Government Efficiency have systematically gained access to sensitive data across the federal government, and in ways that people in several agencies have described to us as both dangerous and disturbing.
Pressure grows on State Bar of California to revert to national exam format in July after botched exam
An influential California legislator is pressuring the State Bar of California to ditch its new multiple-choice questions after a February bar exam debacle and revert to the traditional test format in July. "Given the catastrophe of the February bar, I think that going back to the methods that have been used for the last 50 years -- until we can adequately test what new methods may be employed -- is the appropriate way to go," Sen. Tom Umberg (D-Orange), chair of the state Senate Judiciary Committee, told The Times. Thousands of test takers seeking to practice law in California typically take the two-day bar exam in July. Reverting to the national exam administered by the National Conference of Bar Examiners, which California has used since 1972, would be a major retreat for the embattled State Bar. Its new exam was rolled out this year as a cost-cutting measure and "historic agreement" that would offer test takers the choice of remote testing.