The National Institute of Standards and Technology Braces for Mass Firings
Sweeping layoffs architected by the Trump administration and the so-called Department of Government Efficiency may be coming as soon as this week at the National Institute of Standards and Technology (NIST), a non-regulatory agency responsible for establishing benchmarks that ensure everything from beauty products to quantum computers is safe and reliable. According to several current and former employees at NIST, the agency has been bracing for cuts since President Donald Trump took office last month and ordered billionaire Elon Musk and DOGE to slash spending across the federal government. The fears were heightened last week when some NIST workers witnessed a handful of people they believed to be associated with DOGE inside Building 225, which houses the NIST Information Technology Laboratory at the agency's Gaithersburg, Maryland, campus, according to multiple people briefed on the sightings. The DOGE staff were seeking access to NIST's IT systems, one of the people said. Soon after the purported visit, NIST leadership told employees that DOGE staffers were not currently on campus, but that office space and technology were being provisioned for them, according to the same people.
Automating Detective Work
Every fingerprint is believed to be unique, making it possible to identify an individual by matching a new fingerprint with an image on file, whether to unlock a mobile phone, access a bank account, or solve a murder. Fingerprint examiners, however, do not always agree on whether two print images match and, asked to recheck their work after several months, they sometimes do not even agree with themselves. That is leading to increased use of neural networks, powerhouses for identifying and matching patterns of all sorts, to automate and improve decisions about whether two fingerprints come from the same person. A group of computer scientists decided to use neural networks to test the assumption that no two fingerprints are the same. Using twin neural networks, researchers from Columbia University, Tufts University, and the State University of New York (SUNY) University at Buffalo looked for similarities between different fingerprints in a database from the National Institute of Standards and Technology (NIST).
- North America > United States > New York (0.25)
- North America > United States > Michigan (0.05)
- North America > United States > California > Orange County > Irvine (0.05)
- Europe > Switzerland > Vaud > Lausanne (0.05)
America's Big AI Safety Plan Faces a Budget Crunch
US president Joe Biden's plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters. A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work. Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as "an almost impossible deadline" for the agency. Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.
The transatlantic AI divide
Washington and Brussels are both preparing for a future dominated by artificial intelligence -- but first, they need to get out of each other's way. Tech regulators on both sides of the Atlantic hope to prevent a split on AI rules like the one seen on data privacy, where regulators in Europe got out ahead of their U.S. counterparts and sparked all kinds of havoc that continues to threaten transatlantic data flows. "There is a lot of interest to avoid having segmented approaches," said Elham Tabassi, chief of staff in the Information Technology Laboratory at the National Institute of Standards and Technology. But regulators in the EU and U.S. are already taking different approaches to the multi-trillion-dollar transatlantic tech economy. The EU is plowing ahead with mandatory AI rules meant to safeguard privacy and civil rights while the U.S. focuses on voluntary guidelines.
- North America > United States (1.00)
- Europe (0.28)
- Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Trading (1.00)
How the US plans to manage artificial intelligence
US AI guidelines are everything the EU's AI Act is not: voluntary, non-prescriptive and focused on changing the culture of tech companies. As the EU's Artificial Intelligence (AI) Act fights its way through multiple rounds of revisions at the hands of MEPs, in the US a little-known organisation is quietly working up its own guidelines to help channel the development of such a promising and yet perilous technology. In March, the Maryland-based National Institute of Standards and Technology (NIST) released a first draft of its AI Risk Management Framework, which sets out a very different vision from the EU. The work is being led by Elham Tabassi, a computer vision researcher who joined the organisation just over 20 years ago. Then, "We built [AI] systems just because we could," she said.
- North America > United States > Maryland (0.25)
- North America > United States > Texas (0.05)
- North America > United States > Illinois (0.05)
A new type of powerful artificial intelligence could make EU's new law obsolete
The EU's proposed artificial intelligence act fails to fully take into account the recent rise of an ultra-powerful new type of AI, meaning the legislation will rapidly become obsolete as the technology is deployed in novel and unexpected ways. Foundation models trained on gargantuan amounts of data by the world's biggest tech companies, and then adapted to a wide range of tasks, are poised to become the infrastructure on which other applications are built. That means any deficits in these models will be inherited by all uses to which they are put. The fear is that foundation models could irreversibly embed security flaws, opacity and biases into AI. One study found that a model trained on online text replicated the prejudices of the internet, equating Islam with terrorism, a bias that could pop up unexpectedly if the model was used in education, for example.
- Instructional Material (0.56)
- Research Report (0.34)
- Government (1.00)
- Law > Statutes (0.69)
- Information Technology > Security & Privacy (0.68)
- Law Enforcement & Public Safety > Terrorism (0.49)
Global Big Data Conference
Bad actors use machine learning to break passwords more quickly and build malware that knows how to hide, experts warn. Three cybersecurity experts explained how artificial intelligence and machine learning can be used to evade cybersecurity defenses and make breaches faster and more efficient during a NCSA and Nasdaq cybersecurity summit. Kevin Coleman, the executive director of the National Cyber Security Alliance, hosted the conversation as part of Usable Security: Effecting and Measuring Change in Human Behavior on Tuesday, Oct. 6. Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology, was one of the panelists in the "Artificial Intelligence and Machine Learning for Cybersecurity: The Good, the Bad, and the Ugly" session. "Attackers can use AI to evade detection, to hide where they can't be found, and to automatically adapt to countermeasures," Tabassi said.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.40)
3 ways criminals use artificial intelligence in cybersecurity attacks
Three cybersecurity experts explained how artificial intelligence and machine learning can be used to evade cybersecurity defenses and make breaches faster and more efficient during a NCSA and Nasdaq cybersecurity summit. Kevin Coleman, the executive director of the National Cyber Security Alliance, hosted the conversation as part of Usable Security: Effecting and Measuring Change in Human Behavior on Tuesday, Oct. 6. Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology, was one of the panelists in the "Artificial Intelligence and Machine Learning for Cybersecurity: The Good, the Bad, and the Ugly" session. "Attackers can use AI to evade detection, to hide where they can't be found, and to automatically adapt to countermeasures," Tabassi said. Tim Bandos, chief information security officer at Digital Guardian, said that cybersecurity will always need human minds to build strong defenses and stop attacks.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
NIST sets AI ground rules for agencies without 'stifling innovation' (Federal News Network)
As agencies continue to experiment with artificial intelligence as a tool to transform the way they do business, the National Institute of Standards and Technology has set a roadmap for the government's role in developing future AI breakthroughs. After months of feedback from industry and elsewhere in government, as well as an in-person workshop in May, NIST has laid down some ground rules of what agencies should and shouldn't do with AI tools going forward. NIST's plan marks the federal government's first major effort to provide clarity and guidance to agencies looking to adopt a technology that, while buzzworthy now, actually dates back to the 1960s, yet still remains in its infancy.