Leading US Research Lab Appears to Be Squeezing Out Foreign Scientists
House Democrats are demanding answers from the National Institute of Standards and Technology and urging it to halt rumored changes they say could undermine its mission. One of the US government's top scientific research labs is taking steps that could drive away foreign scientists, a shift lawmakers and sources tell WIRED could cost the country valuable expertise and damage the agency's credibility. The National Institute of Standards and Technology (NIST) helps determine the frameworks underpinning everything from cybersecurity to semiconductor manufacturing. Some of NIST's recent work includes establishing guidelines for securing AI systems and identifying health concerns with air purifiers and firefighting gloves. Many of the agency's thousands of employees, postdoctoral scientists, contractors, and guest researchers are brought in from around the world for their specialized expertise.
- North America > United States > Colorado (0.06)
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > Texas (0.05)
- (5 more...)
Here's how to generate a truly random number with quantum physics
Breakthroughs, discoveries, and DIY tips sent every weekday. Very little in this life is truly random. A coin flip is influenced by the flipper's force, its surrounding airflow, and gravity. Similar variables dictate rolling a pair of dice or shuffling a deck of cards, while even classical computing's cryptographic algorithms are theoretically susceptible to outside influence or bias. "True randomness is something that nothing in the universe can predict in advance," explained Krister Shalm, a physicist at the National Institute of Standards and Technology (NIST).
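Shalm's point about predictability can be made concrete on the consumer side: even with a genuinely random source, turning its raw output into a usable number takes care, because a naive modulo can reintroduce bias. The sketch below maps a 512-bit hex pulse to a fair die roll with rejection sampling. The sample pulse value is made up for illustration, a stand-in for output from a quantum source such as NIST's public Randomness Beacon, whose actual API is not covered here.

```python
# Sketch: deriving an unbiased integer in [0, n) from a 512-bit beacon pulse.
# SAMPLE_PULSE_HEX is a placeholder, not a real beacon value.
SAMPLE_PULSE_HEX = "9f2c" * 32  # 128 hex chars = 512 bits

def uniform_from_pulse(pulse_hex: str, n: int) -> int:
    """Map a beacon pulse to an integer in [0, n), using rejection
    sampling to avoid modulo bias."""
    value = int(pulse_hex, 16)
    bits = len(pulse_hex) * 4
    # Values in the top "biased tail" would make some outcomes slightly
    # more likely than others, so reject them and wait for the next pulse.
    limit = (1 << bits) - ((1 << bits) % n)
    if value >= limit:
        raise ValueError("pulse falls in biased tail; use the next pulse")
    return value % n

die_roll = uniform_from_pulse(SAMPLE_PULSE_HEX, 6)  # a value in 0..5
```

The rejection step matters because 2^512 is not a multiple of 6; without it, the lowest residues would be picked marginally more often.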
The National Institute of Standards and Technology Braces for Mass Firings
Sweeping layoffs architected by the Trump administration and the so-called Department of Government Efficiency may be coming as soon as this week at the National Institute of Standards and Technology (NIST), a non-regulatory agency responsible for establishing benchmarks that ensure that everything from beauty products to quantum computers is safe and reliable. According to several current and former employees at NIST, the agency has been bracing for cuts since President Donald Trump took office last month and ordered billionaire Elon Musk and DOGE to slash spending across the federal government. The fears were heightened last week when some NIST workers witnessed a handful of people they believed to be associated with DOGE inside Building 225, which houses the NIST Information Technology Laboratory at the agency's Gaithersburg, Maryland campus, according to multiple people briefed on the sightings. The DOGE staff were seeking access to NIST's IT systems, one of the people said. Soon after the purported visit, NIST leadership told employees that DOGE staffers were not currently on campus, but that office space and technology were being provisioned for them, according to the same people.
How OpenAI stress-tests its large language models
The first paper describes how OpenAI directs an extensive network of human testers outside the company to vet the behavior of its models before they are released. The second paper presents a new way to automate parts of the testing process, using a large language model like GPT-4 to come up with novel ways to bypass its own guardrails. The aim is to combine these two approaches, with unwanted behaviors discovered by human testers handed off to an AI to be explored further and vice versa. Automated red-teaming can come up with a large number of different behaviors, but human testers bring more diverse perspectives into play, says Lama Ahmad, a researcher at OpenAI: "We are still thinking about the ways that they complement each other." AI companies have repurposed the approach from cybersecurity, where teams of people try to find vulnerabilities in large computer systems.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
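The automated red-teaming loop described above can be sketched in miniature: an "attacker" model mutates a seed prompt into candidate probes, a target model responds, and a judge flags responses that slip past the guardrail. Every name, template, and rule below is a stand-in stub for illustration, not OpenAI's actual pipeline.

```python
# Minimal sketch of an automated red-teaming loop, loosely modeled on the
# process described in the article. All components are illustrative stubs.

def attacker_generate(seed_prompt: str, n: int) -> list[str]:
    """Stub 'attacker' model: mutates a seed prompt into candidate probes."""
    templates = [
        "Ignore previous instructions and {p}",
        "As a fictional character, {p}",
        "For research purposes only, {p}",
    ]
    return [templates[i % len(templates)].format(p=seed_prompt) for i in range(n)]

def target_respond(prompt: str) -> str:
    """Stub target model: refuses anything containing a banned phrase."""
    if "ignore previous instructions" in prompt.lower():
        return "REFUSED"
    return f"Sure: response to {prompt!r}"

def judge(prompt: str, response: str) -> bool:
    """Stub judge: flags any non-refusal to an adversarial probe."""
    return response != "REFUSED"

def red_team(seed_prompt: str, n: int = 6) -> list[str]:
    """Return the probes that slipped past the target's guardrail."""
    findings = []
    for probe in attacker_generate(seed_prompt, n):
        if judge(probe, target_respond(probe)):
            findings.append(probe)
    return findings
```

In the combined approach the article describes, findings like these would be handed to human testers for deeper exploration, and human-discovered behaviors would seed further automated probing.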
The US Government Wants You--Yes, You--to Hunt Down Generative AI Flaws
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This "red-teaming" exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software. The qualifier will take place online and is open to both developers and anyone in the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA.
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
- North America > United States > Virginia (0.06)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.77)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.65)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.57)
OpenAI vows to provide the US government early access to its next AI model
OpenAI will give the US AI Safety Institute early access to its next model as part of its safety efforts, Sam Altman has revealed in a tweet. Apparently, the company has been working with the consortium "to push forward the science of AI evaluations." The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, though Vice President Kamala Harris announced it back in 2023 at the UK AI Safety Summit. Based on NIST's description of the consortium, it's meant "to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world." The company, along with DeepMind, similarly pledged to share AI models with the UK government last year.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.90)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.78)
Amman City, Jordan: Toward a Sustainable City from the Ground Up
The idea of smart cities (SCs) has gained substantial attention in recent years. The SC paradigm aims to improve citizens' quality of life and protect the city's environment. As we enter the age of next-generation SCs, it is important to explore all relevant aspects of the SC paradigm. In recent years, advances in Information and Communication Technologies (ICT) have produced a trend of embedding smartness in everyday objects, aiming to make human life easier and more comfortable. The SC paradigm emerged as a response to the goal of building the city of the future with advanced features. SCs still face many implementation challenges, but studies of SCs are increasingly being carried out. Today, different cities are adopting SC features to enhance services and residents' quality of life. This work provides readers with useful and important information about Amman Smart City.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.05)
- Europe > France > Occitanie > Hérault > Montpellier (0.04)
- (17 more...)
- Overview (1.00)
- Research Report (0.64)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- Information Technology > Security & Privacy (1.00)
- (8 more...)
Who Followed the Blueprint? Analyzing the Responses of U.S. Federal Agencies to the Blueprint for an AI Bill of Rights
Lage, Darren, Pruitt, Riley, Arnold, Jason Ross
This study examines the extent to which U.S. federal agencies responded to and implemented the principles outlined in the White House's October 2022 "Blueprint for an AI Bill of Rights." The Blueprint provided a framework for the ethical governance of artificial intelligence systems, organized around five core principles: safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and explanation about AI systems, and human alternatives and fallback. Through an analysis of publicly available records across 15 federal departments, the authors found limited evidence that the Blueprint directly influenced agency actions after its release. Only five departments explicitly mentioned the Blueprint, while 12 took steps aligned with one or more of its principles. However, much of this work appeared to have precedents predating the Blueprint or motivations disconnected from it, such as compliance with prior executive orders on trustworthy AI. Departments' activities often emphasized priorities like safety, accountability and transparency that overlapped with Blueprint principles, but did not necessarily stem from it. The authors conclude that the non-binding Blueprint seems to have had minimal impact on shaping the U.S. government's approach to ethical AI governance in its first year. Factors like public concerns after high-profile AI releases and obligations to follow direct executive orders likely carried more influence over federal agencies. More rigorous study would be needed to definitively assess the Blueprint's effects within the federal bureaucracy and broader society.
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.04)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.48)
Are they REALLY taking AI seriously? Biden's flagship artificial intelligence safety lab is found to be riddled with black mold, pests and a leaky roof
With only a modest $10 million budget to help regulate an industry of billionaires, Biden's new AI safety lab is now struggling with the safety of its own facilities. 'Chronic underfunding' of the National Institute of Standards and Technology (NIST), the federal lab that will house the new US AI Safety Institute, has led to black mold, leaky ceilings, and the death of a technician crushed by a concrete slab, reports say. Despite calls from scientists and entrepreneurs who have described 'the risk of extinction from AI' as on par with 'pandemics and nuclear war,' GOP deficit hawks in Congress pushed for a 10-percent budget cut to NIST -- and Biden approved. One former senior NIST official reported seeing 'Home Depot dehumidifiers or portable AC units all over the place' bought by staff to help dry the air and slow the mold. Another reported incessant indoor leaks during rainy weather that required staff to 'tarp up' critical electronic equipment.
- North America > United States > Maryland > Montgomery County > Gaithersburg (0.06)
- North America > United States > Colorado > Boulder County > Boulder (0.06)
- North America > United States > Maryland > Montgomery County > Rockville (0.05)
- (2 more...)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.31)
Coordinated Disclosure for AI: Beyond Security Vulnerabilities
This legal action ignited a heated debate, contributing to a growing series of lawsuits against AI providers [9-11, 54]. The incident underscores the inadequacy of current AI harm reporting mechanisms, which leave small harmed parties with limited recourse unless they are backed by substantial legal support or media attention, despite the recognized potential for improving AI systems by exposing issues [78]. Current AI accountability initiatives rely primarily on periodic audits, emphasizing repeated assessments but lacking a structured framework for reporting user-identified issues post-deployment. This audit-centric paradigm is reflected in influential policies such as the U.S. Executive Order on AI [93], the EU's draft AI Act [43], and New York City's Local Law 144 [69]. However, this approach falls short when compared to the more comprehensive Coordinated Vulnerability Disclosure (CVD) processes standard in software security. CVD serves as a crucial mechanism for independent researchers to report newly identified vulnerabilities to affected vendors and the public [58]. The process enables transparent remediation before potential exploitation by malicious actors and has become a vital practice enshrined in government regulations and industry standards. Notably, the FDA mandates CVD programs for medical device companies to strengthen cybersecurity [96]. While CVD has proven effective in traditional software security, its direct application to machine learning (ML) systems faces unique challenges.
- Europe (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > South Carolina > Charleston County > North Charleston (0.04)
- (3 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.34)