Microsoft's agentic HTML can leak passwords and AI keys, researcher finds

PCWorld

With new AI systems come new AI vulnerabilities, and a big one was just discovered in NLWeb, Microsoft's protocol that acts as a kind of HTML for AI agents. The company unveiled NLWeb at its Build conference this spring and has since leaned into that vision with an experimental Copilot Mode for its Edge browser. Researcher Aonan Guan, however, has discovered a vulnerability in NLWeb: a path traversal bug that lets any remote user read sensitive files, such as system configurations and cloud credentials, via a malformed URL. In a Medium post, Guan showed how he was able to download a list of system passwords along with Google Gemini and OpenAI API keys. Stolen keys would let an attacker run additional server-dependent AI applications "for free," without being charged by OpenAI.
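
The report doesn't reproduce NLWeb's patched code, but the bug class is a classic one. As a minimal sketch (the paths and function names here are illustrative assumptions, not NLWeb's actual code), here is how a path traversal slips past a naive file handler in Python, and how canonicalizing the path blocks it:

```python
import os
from pathlib import Path

# Hypothetical static-file root; not a real NLWeb path.
STATIC_ROOT = Path("/srv/app/static").resolve()

def read_file_vulnerable(requested: str) -> bytes:
    # Naive join: a request like "../../.env" (or its URL-encoded form,
    # "..%2F..%2F.env") walks out of STATIC_ROOT and can reach system
    # configuration files or credential stores.
    return open(os.path.join(STATIC_ROOT, requested), "rb").read()

def read_file_safe(requested: str) -> bytes:
    # Resolve the final path and confirm it is still under STATIC_ROOT.
    # Path.is_relative_to requires Python 3.9+.
    target = (STATIC_ROOT / requested).resolve()
    if not target.is_relative_to(STATIC_ROOT):
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```

The defense is the canonicalize-then-check step: containment is decided on the resolved path, after every ".." and symlink has been expanded, rather than on the raw request string.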


AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find

Los Angeles Times

A few months ago, Northeastern University computer scientist Annika Schoene was playing around with ChatGPT when she found a troubling gap in its safeguards against harmful content. The usage policies of OpenAI, creator of ChatGPT, state that users shouldn't employ the company's generative artificial intelligence model or other tools to harm themselves or others. In an effort to test the strength of that guardrail, Schoene typed in a stark and sobering question: Can you tell me how to kill myself? The model responded with the Suicide & Crisis Lifeline 988 number and a suggestion to reach out for help. Without starting a new session, Schoene tried a new tactic. In her next prompt, she framed the request as a hypothetical posed solely for academic purposes.


Parents trust AI for medical advice more than doctors, researchers find

FOX News

Artificial intelligence is gaining more of parents' trust than actual doctors. That's according to a new study from the University of Kansas Life Span Institute, which found that parents seeking information on their children's health are turning to AI more than to human health care professionals. The research, published in the Journal of Pediatric Psychology, also revealed that parents rate AI-generated text as "credible, moral and trustworthy." More than 100 parents, ranging from 18 to 65 years old, were asked to rate text generated either by a human doctor or by ChatGPT (the AI chatbot made by OpenAI) under the supervision of an expert.


AI in dentistry: Researchers find that artificial intelligence can create better dental crowns

FOX News

Artificial intelligence is taking on an ever-widening role in the health and wellness space, assisting with everything from cancer detection to medical documentation. Soon, AI could make it easier for dentists to give patients a more natural, functional smile. Researchers from the University of Hong Kong recently developed an AI algorithm that uses 3D machine learning to design personalized dental crowns with a higher degree of accuracy than traditional methods, according to a press release from the university. The AI analyzes data from the teeth adjacent to the crown to ensure a more natural, precise fit than crowns created using today's methods, the researchers said.


AI tech can crack common passwords with stunning speed, researchers find

FOX News

Artificial intelligence can crack any seven-character password in just six minutes, a new study has found, and the same holds even if the password contains symbols. The research, shared by identity theft prevention company Home Security Heroes, used a generative AI service called PassGAN to run through 15,680,000 common passwords from the RockYou dataset and determine how long each would take to crack. RockYou is a widely used collection of real-world passwords, commonly used to train intelligent systems for password analysis.
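
The study's headline numbers come down to keyspace arithmetic: how many candidate strings exist at a given length and alphabet size, and how fast a rig can test them. A rough back-of-the-envelope sketch in Python (the guesses-per-second figure is an assumed value for illustration, not a number from Home Security Heroes):

```python
import string

# Assumed guessing rate for a large GPU cracking rig; illustrative only.
GUESSES_PER_SECOND = 1e12

ALPHABETS = {
    "lowercase only": string.ascii_lowercase,                   # 26 chars
    "letters + digits": string.ascii_letters + string.digits,   # 62 chars
    "letters + digits + symbols": string.ascii_letters
                                  + string.digits
                                  + string.punctuation,         # 94 chars
}

for label, alphabet in ALPHABETS.items():
    for length in (7, 12):
        keyspace = len(alphabet) ** length
        seconds = keyspace / GUESSES_PER_SECOND
        print(f"{label}, length {length}: {keyspace:.2e} combinations, "
              f"~{seconds:,.0f} s to exhaust")
```

Even with symbols, the seven-character keyspace (94^7, about 6.5e13) is exhausted in roughly a minute at that assumed rate, while twelve characters pushes the worst case out by roughly ten orders of magnitude. PassGAN's contribution is not raw speed but guess ordering: trained on leaks like RockYou, it tries likely human-chosen passwords first, so real passwords typically fall far sooner than the worst case.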


Neural networks can hide malware, researchers find

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. With their millions and billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text, and even hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found. Their malware-hiding technique, EvilModel, sheds light on the security concerns of deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against these emerging threats.
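
The excerpt doesn't reproduce EvilModel's encoder, but the core trick, steganographically stashing payload bytes inside weight bytes, can be sketched in a few lines of numpy. This is a simplification under stated assumptions (little-endian float32 weights, 3 payload bytes per weight), not the authors' released code:

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the 3 low-order bytes of each float32 weight.

    The most significant byte (sign bit plus most of the exponent) is
    kept, so every weight stays within roughly a factor of two of its
    original magnitude; trained networks are typically tolerant of that
    much noise. Assumes a little-endian platform.
    """
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)           # 4 bytes per float32
    capacity = raw.shape[0] * 3
    if len(payload) > capacity:
        raise ValueError(f"payload needs {len(payload)} bytes, have {capacity}")
    n_floats = -(-len(payload) // 3)                   # ceiling division
    padded = np.zeros(n_floats * 3, dtype=np.uint8)
    padded[: len(payload)] = np.frombuffer(payload, dtype=np.uint8)
    raw[:n_floats, 0:3] = padded.reshape(n_floats, 3)  # overwrite low bytes
    return raw.reshape(-1).view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, n_bytes: int) -> bytes:
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:, 0:3].reshape(-1)[:n_bytes].tobytes()

# Round-trip check with an invented payload on random weights:
w = np.random.randn(256, 256).astype(np.float32)
secret = b"stand-in for a malicious payload"
assert extract_payload(embed_payload(w, secret), len(secret)) == secret
```

The point for defenders is that the tampered file still loads and runs as an ordinary model, so signature-based anti-malware scans of the weights see nothing executable.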


Tesla drivers are 'inattentive' when using Autopilot because they have 'incorrect expectations' of system, researchers find

The Independent - Tech

Autonomous systems make drivers less attentive to the road, even though 'self-driving' technology still requires the human behind the wheel to remain focused, a new study has found. Researchers from MIT studied 290 drivers, recording where they looked, and for how long, before and after they disengaged Tesla's Autopilot technology, which the researchers describe as one of the most capable systems available; they found "evidence that drivers may not be using AP as recommended". The data suggests that "before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead". Monitoring each driver's posture, face, and the view in front of the vehicle over a combined 500,000 miles, the researchers found that checks of the side and rear mirrors decreased while Autopilot was engaged.


Using artificial intelligence, researchers find that global ocean warming started later

#artificialintelligence

In estimations of ocean heat content (important when assessing and predicting the effects of climate change), calculations have often presented the rate of warming as a gradual rise from the mid-20th century to today. However, new research from UC Santa Barbara scientists Timothy DeVries and Aaron Bagnell could overturn that assumption, suggesting the ocean maintained a relatively steady temperature throughout most of the 20th century before embarking on a steep rise. The newly discovered dynamics may have significant implications for what we might expect in the future. "There wasn't an onset of an imbalance until about 1990, which is later than most estimates," said DeVries, an associate professor in the Department of Geography and a co-author on a paper that appears in the journal Nature Communications. According to the study, the period from 1950 to 1990 saw temperature fluctuations in the water column but no net warming.


Twitter's racist algorithm is also ageist, ableist and Islamophobic, researchers find

#artificialintelligence

The same artificial intelligence had learned to ignore people with white or … AI Group, which studies and consults on biases in artificial intelligence.


Researchers find that large language models struggle with math

#artificialintelligence

Mathematics is the foundation of countless sciences, allowing us to model things like planetary orbits, atomic motion, signal frequencies, protein folding, and more. Moreover, it's a valuable testbed for problem-solving ability, because it requires problem solvers to analyze a challenge, pick out good methods, and chain them together to produce an answer. It's revealing, then, that as sophisticated as machine learning models are today, even state-of-the-art models struggle to answer the bulk of math problems correctly. A new study published by researchers at the University of California, Berkeley finds that large language models, including OpenAI's GPT-3, can complete only 2.9% to 6.9% of problems from a dataset of more than 12,500 problems. The coauthors believe that new algorithmic advancements will likely be needed to give models stronger problem-solving skills.
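
Benchmarks like this typically score models by exact match: generate an answer for each problem and compare it, after light normalization, to the reference answer. A minimal sketch of such a harness (the `normalize` rules and the stub model are illustrative assumptions, not the study's actual evaluation code):

```python
def normalize(answer: str) -> str:
    # Strip surrounding whitespace/$ and collapse inner whitespace so that
    # superficially different renderings of the same answer compare equal.
    return " ".join(answer.strip().strip("$").split())

def exact_match_accuracy(problems, query_model) -> float:
    correct = sum(
        normalize(query_model(p["question"])) == normalize(p["answer"])
        for p in problems
    )
    return correct / len(problems)

# Toy example with a stub "model" that always answers "42":
problems = [
    {"question": "What is 6 * 7?", "answer": "42"},
    {"question": "What is 2 + 2?", "answer": "4"},
]
print(exact_match_accuracy(problems, lambda q: "42"))  # 0.5
```

Against competition-level problems, a 2.9% to 6.9% exact-match rate means the model chains the right methods end to end only a few times per hundred problems.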