Envisioning Recommendations on an LLM-Based Agent Platform
In recent years, large language model (LLM)–based agents have garnered widespread attention across various fields. Their impressive capabilities, such as natural language communication [21, 23], instruction following [26, 28], and task execution [22, 38], have the potential to expand both the format of information carriers and the way in which information is exchanged. LLM-based agents can now evolve into domain experts, becoming novel information carriers with domain-specific knowledge [1, 28]. For example, a Travel Agent can retain travel-related information within its parameters. LLM-based agents are also showcasing a new form of information exchange, facilitating more intuitive and natural interactions with users through dialogue and task execution [24, 34]. Figure 1 shows an example of these capabilities, in which users engage in dialogue with a Travel Agent to obtain information and complete their travel plans.
In Pursuit of Professionalism
Robin K. Hill: Is Computer Science a Profession? We computer scientists--many of us--like to think of ourselves as professionals, as do doctors, lawyers, police officers, and accountants. But there are definitions of "profession," with criteria and expectations, that we fail to meet. Are we ready, collectively, to confront the criteria? Do we want to be card-carrying members of a learned institution of service?
Neuralink's third brain implant patient regains speech after being robbed of his voice by progressive disease - hear him speak in emotional video with help from Elon Musk's AI bot
An Arizona man has become the third person in the world to receive Neuralink's brain implant – letting him 'speak' again in his own voice. Brad Smith has ALS, a progressive disease that has left him unable to move any part of his body except his eyes and the corners of his mouth. The disease has robbed Mr Smith of his ability to speak, but the implant from Elon Musk's firm Neuralink has hooked up his brain to a computer. Around the size of five US quarters stacked one on top of the other, the little chip lets the patient control the cursor on his MacBook Pro laptop to type. Then, Musk's Grok AI reads the script using an accurate vocal clone, trained on recordings of his actual voice made before it was lost to the condition.
Is our universe the ultimate computer? Scientist uncovers a major clue that we're all living in a simulation
For more than a quarter of a century since its release, 'The Matrix' has fueled modern fears that life is not all it seems. But according to one scientist, the classic movie's premise may not be completely science fiction. Melvin Vopson, an associate professor of physics at the University of Portsmouth, thinks gravity may be a sign that we're all living in a virtual simulation. Our universe is the 'ultimate computer', Professor Vopson theorizes in a new paper. Gravity's pull – both on planet Earth and in outer space – is the universe trying to keep its vast amount of data organised, Professor Vopson claims.
The Download: China's manufacturers' viral moment, and how AI is changing creativity
Since the video was posted earlier this month, millions of TikTok users have watched as a young Chinese man in a blue T-shirt sits beside a traditional tea set and speaks directly to the camera in accented English: "Let's expose luxury's biggest secret." He stands and lifts what looks like an Hermès Birkin bag, one of the world's most exclusive and expensive handbags, before gesturing toward the shelves filled with more bags behind him. "You recognize them: Hermès, Louis Vuitton, Prada, Gucci--all crafted in our workshops." He ends by urging viewers to buy directly from his factory. Video "exposés" like this--where a sales agent breaks down the material cost of luxury goods, from handbags to perfumes to appliances--are everywhere on TikTok right now.
Is Keir Starmer being advised by AI? The UK government won't tell us
Thousands of civil servants at the heart of the UK government, including those working directly to support Prime Minister Keir Starmer, are using a proprietary artificial intelligence chatbot to carry out their work, New Scientist can reveal. Officials have refused to disclose on the record exactly how the tool is being used, whether the prime minister is receiving advice that has been prepared using AI or how civil servants are mitigating the risks of inaccurate or biased AI outputs. Experts say the lack of disclosure raises concerns about government transparency and the accuracy of information being used in government. After securing the world-first release of ChatGPT logs under freedom of information (FOI) legislation, New Scientist asked 20 government departments for records of their interactions with Redbox, a generative AI tool developed in house and trialled among UK government staff. The large language model-powered chatbot allows users to interrogate government documents and to "generate first drafts of briefings", according to one of the people behind its development.
UK regulator wants to ban apps that can make deepfake nude images of children
The UK's Children's Commissioner is calling for a ban on AI deepfake apps that create nude or sexual images of children, according to a new report. It states that such "nudification" apps have become so prevalent that many girls have stopped posting photos on social media. And though creating or uploading CSAM images is illegal, the apps used to create deepfake nude images are still legal. "Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone -- a stranger, a classmate, or even a friend -- could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," said Children's Commissioner Dame Rachel de Souza.
Exclusive: Trump Pushes Out AI Experts Hired By Biden
The Trump administration has laid out its own ambitious goals for recruiting more tech talent. On April 3, Russell Vought, Trump's Director of the Office of Management and Budget, released a 25-page memo for how federal leaders were expected to accelerate the government's use of AI. "Agencies should focus recruitment efforts on individuals that have demonstrated operational experience in designing, deploying, and scaling AI systems in high-impact environments," Vought wrote. Putting that into action will be harder than it needed to be, says Deirdre Mulligan, who directed the National Artificial Intelligence Initiative Office in the Biden White House. "The Trump Administration's actions have not only denuded the government of talent now, but I'm sure that for many folks, they will think twice about whether or not they want to work in government," Mulligan says. "It's really important to have stability, to have people's expertise be treated with the level of respect it ought to be and to have people not be wondering from one day to the next whether they're going to be employed."
Copilot Arena: A platform for code
Copilot Arena is a VSCode extension that collects human preferences over code directly from developers. As model capabilities improve, large language models (LLMs) are increasingly integrated into user environments and workflows. In particular, software developers code with LLM-powered tools in integrated development environments such as VS Code, IntelliJ, or Eclipse. While these tools are increasingly used in practice, current LLM evaluations struggle to capture how users interact with them in real environments: they are often limited to short user studies, consider only simple programming tasks rather than real-world systems, or rely on web-based platforms removed from development environments. To address these limitations, we introduce Copilot Arena, an app designed to evaluate LLMs in real-world settings by collecting preferences directly in a developer's actual workflow.
Commissioner calls for ban on apps that make deepfake nude images of children
Artificial intelligence "nudification" apps that create deepfake sexual images of children should be immediately banned, amid growing fears among teenage girls that they could fall victim, the children's commissioner for England is warning. Girls said they were stopping posting images of themselves on social media out of a fear that generative AI tools could be used to digitally remove their clothes or sexualise them, according to the commissioner's report on the tools, drawing on children's experiences. Although it is illegal to create or share a sexually explicit image of a child, the technology enabling them remains legal, the report noted. "Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," the commissioner, Dame Rachel de Souza, said.