
Collaborating Authors: ostp


Swarm Characteristic Classification using Robust Neural Networks with Optimized Controllable Inputs

Peltier, Donald W. III, Kaminer, Isaac, Clark, Abram, Orescanin, Marko

arXiv.org Artificial Intelligence

The ability to infer the characteristics of autonomous agents would transform defense, security, and civil applications. Our previous work was the first to demonstrate that supervised neural network time series classification (NN TSC) could rapidly predict the tactics of swarming autonomous agents in military contexts, providing intelligence to inform counter-maneuvers. However, most autonomous interactions, especially military engagements, are fraught with uncertainty, raising questions about the practicality of using a pretrained classifier. This article addresses that challenge by leveraging expected operational variations to construct a richer dataset, resulting in a more robust NN with improved inference performance in scenarios characterized by significant uncertainty. Specifically, diverse datasets are created by simulating variations in defender numbers, defender motions, and measurement noise levels. Key findings indicate that robust NNs trained on an enriched dataset exhibit enhanced classification accuracy and offer operational flexibility, such as reducing the resources required and enabling adherence to trajectory constraints. Furthermore, we present a new framework for optimally deploying a trained NN by the defenders: defender trajectories are optimized to elicit adversary responses that maximize the probability of correct NN tactic classification while also satisfying operational constraints imposed on the defenders.
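The pipeline the abstract describes -- simulate agent trajectories at varied measurement-noise levels, then train a supervised time-series classifier on the enriched set -- can be sketched in miniature. Everything below (the two synthetic "tactics", the specific noise levels, and the logistic-regression stand-in for the paper's deeper NN) is an illustrative assumption, not the authors' actual simulation or architecture:

```python
# Toy sketch of supervised NN time-series classification (NN TSC) for
# tactic inference. The "tactics" here are hypothetical 1-D signatures.
import numpy as np

rng = np.random.default_rng(0)
T = 50  # time steps per trajectory

def make_dataset(n, noise_levels):
    """Enriched dataset: the same tactics simulated at several noise levels."""
    t = np.linspace(0.0, 1.0, T)
    X, y = [], []
    for _ in range(n):
        sigma = rng.choice(noise_levels)
        if rng.random() < 0.5:
            x, label = np.sin(2 * np.pi * 3 * t), 0   # "tactic 0": oscillatory maneuver
        else:
            x, label = 2 * t - 1, 1                   # "tactic 1": steady closing approach
        X.append(x + sigma * rng.standard_normal(T))
        y.append(label)
    return np.array(X), np.array(y)

# Train across a range of noise levels (the "enriched" set) ...
X_tr, y_tr = make_dataset(400, noise_levels=[0.1, 0.3, 0.6])
# ... and evaluate at an intermediate operational noise level.
X_te, y_te = make_dataset(200, noise_levels=[0.4])

# Minimal classifier: logistic regression trained by gradient descent,
# standing in for the deeper NN used in the paper.
w, b, lr = np.zeros(T), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))   # sigmoid probabilities
    w -= lr * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= lr * np.mean(p - y_tr)

p_te = 1.0 / (1.0 + np.exp(-(X_te @ w + b)))
acc = np.mean((p_te > 0.5) == y_te)
```

Training on a spread of noise levels rather than a single one is the enrichment idea in the abstract: the classifier sees the operational variation at training time, so accuracy degrades less when the deployed noise level differs from any one training condition.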


Methane projections from Canada's oil sands tailings using scientific deep learning reveal significant underestimation

Saha, Esha, Wang, Oscar, Chakraborty, Amit K., Garcia, Pablo Venegas, Milne, Russell, Wang, Hao

arXiv.org Machine Learning

Bitumen extraction for the production of synthetic crude oil in Canada's Athabasca Oil Sands industry has recently come under the spotlight for being a significant source of greenhouse gas emissions. A major cause of concern is methane, a greenhouse gas produced by the anaerobic biodegradation of hydrocarbons in oil sands residues, or tailings, stored in settling basins commonly known as oil sands tailing ponds. To determine the methane-emitting potential of these tailing ponds and to project future emissions, we use real-time weather data, mechanistic models developed from laboratory-controlled experiments, and industrial reports to train a physics constrained machine learning model. Our trained model can successfully identify the directions of active ponds and estimate their emission levels, which are generally hard to obtain due to data sampling restrictions. We found that each active oil sands tailing pond could emit between 950 and 1,500 tonnes of methane per year, an environmental impact equivalent to the carbon dioxide emissions of at least 6,000 gasoline-powered vehicles. Although abandoned ponds are often presumed to have insignificant emissions, our findings indicate that these ponds could become active over time and potentially emit up to 1,000 tonnes of methane each year. Averaging over all datasets used in model training, we estimate that emissions around major oil sands regions would need to fall by approximately 12% over a year to return average methane concentrations to 2005 levels.
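The "physics constrained" idea the abstract invokes -- penalizing disagreement with a mechanistic rate law alongside the ordinary data misfit -- can be illustrated with a toy model. The saturating methane-production model, the rate constant, and the synthetic observations below are all assumptions chosen for illustration; the paper's actual mechanistic models and learning architecture differ:

```python
# Toy physics-constrained fit: recover a biodegradation rate constant k
# from noisy cumulative-methane observations. Assumed mechanistic model:
#   M(t) = C * (1 - exp(-k t)),  i.e.  dM/dt = k * (C - M),
# with capacity C (tonnes) and rate k (1/yr). All values are hypothetical.
import numpy as np

C_true, k_true = 1200.0, 0.8
t = np.linspace(0.0, 5.0, 40)                   # years
rng = np.random.default_rng(1)
M_obs = C_true * (1.0 - np.exp(-k_true * t)) + 20.0 * rng.standard_normal(t.size)

def loss(k, C, lam=1.0):
    """Data misfit plus a physics residual against the rate law."""
    M_model = C * (1.0 - np.exp(-k * t))
    data_term = np.mean((M_model - M_obs) ** 2)
    # Physics residual: finite-difference dM/dt of the observations
    # versus the mechanistic rate law evaluated on the observations.
    dMdt_obs = np.gradient(M_obs, t)
    physics_term = np.mean((dMdt_obs - k * (C - M_obs)) ** 2)
    return data_term + lam * physics_term

# Coarse grid search over the rate constant (capacity C assumed known).
ks = np.linspace(0.1, 2.0, 200)
k_hat = ks[np.argmin([loss(k, C_true) for k in ks])]
```

The weight `lam` trades off fidelity to the measurements against consistency with the mechanistic model; in the paper's setting the physics term is what lets the model extrapolate to ponds and future periods where direct sampling is restricted.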


White House reveals its next steps towards 'responsible' AI development

Engadget

The White House has made responsible AI development a focus of this administration in recent months, releasing a Blueprint for an AI Bill of Rights, developing a risk management framework, committing $140 million to found seven new National AI Research Institutes and weighing in on how private enterprises are leveraging the technology. On Tuesday, the executive branch announced its next steps toward that goal, including an update to the National AI R&D Strategic Plan -- the first since 2019 -- as well as a request for public input on critical AI issues. The Department of Education also dropped its hotly anticipated report on the effects and risks of AI for students. The OSTP's National AI R&D Strategic Plan, which guides the federal government's investments in AI research, had not been updated since the Trump administration, which gutted OSTP staffing levels. The plan seeks to promote responsible innovation in the field that serves the public good without infringing on the public's rights, safety and democratic values, having done so until this point through eight core strategies.


Biden makes 'equity,' civil rights a top priority in development of 'responsible' AI

FOX News

The Biden administration on Tuesday sought input from the public on how to ensure artificial intelligence develops in a way that supports "equity" and civil rights and helps "underserved communities," as part of a broader plan to promote "responsible" AI. The White House Office of Science and Technology Policy (OSTP) announced it is seeking input from any interested party on how to reach these and other goals as AI systems are developed. Policymakers and AI developers are increasingly in agreement on the need for federal rules, and possibly even a new federal agency, to ensure the risks of AI are managed. To inform this work, OSTP asked a series of questions on how to protect people's rights and safety as AI systems become more widely used, as well as questions related to "advancing equity and strengthening civil rights." Among them: "What are the opportunities for AI to enhance equity and how can these be fostered? For example, what are the potential benefits for AI in enabling broadened prosperity, expanding economic and educational opportunity, increasing access to services, and advancing civil rights?"


Who Is Responsible Around Here?

Communications of the ACM

I reiterated Bill Joy's 2000 question: Does the future need us? Little did I know then that a revolution was already brewing. By 2011, GPUs had considerably accelerated the training of deep neural networks, finally making competitive a technology whose roots go back to the early 1940s. In 2012, AlexNet, a deep neural network, won the ImageNet Large Scale Visual Recognition Challenge, launching the deep-learning revolution. A decade later, generative AI, which refers to AI that can generate novel content rather than simply analyze or act on existing data, has become all the rage.


National Artificial Intelligence Research Resource Task Force Releases Final Report

#artificialintelligence

Today, the National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report, a roadmap for standing up a national research infrastructure that would broaden access to the resources essential to artificial intelligence (AI) research and development. While AI research and development (R&D) in the United States is advancing rapidly, opportunities to pursue cutting-edge AI research and new AI applications are often inaccessible to researchers beyond those at well-resourced companies, organizations, and academic institutions. A NAIRR would change that by providing AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support--fueling greater innovation and advancing AI that serves the public good. "AI advances hold tremendous promise for tackling our hardest problems and achieving our greatest aspirations," said Arati Prabhakar, OSTP Director and Assistant to the President for Science and Technology. "We will only realize this potential when many more kinds of researchers have access to the powerful capabilities that underpin AI advances."


White House Blueprint is the Starting Point for Building Responsible AI - Nextgov

#artificialintelligence

Late last year, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, instantly elevating the topic of responsible AI to the top of leadership agendas across executive branch agencies. While the themes of the blueprint are not entirely new--building on prior work including the AI in Government Act of 2020, a December 2020 executive order on trustworthy AI, and the Federal Privacy Council's Fair Information Practice Principles--the report brings new urgency to ongoing agency efforts to leverage data in ways consistent with our democratic ideals. With a stated goal of supporting "the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems," the blueprint is rooted in five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint also includes notes on applying the principles and a technical companion to support operationalization. Some agencies that are less mature in their data capabilities might consider the blueprint to be of limited relevance.


Artificial Intelligence: Is it safe?

#artificialintelligence

This month, the White House Office of Science and Technology Policy (OSTP) issued a "Blueprint for an AI Bill of Rights." The document maps out areas in which artificial intelligence (AI) might be a threat to our existing rights and establishes a set of rights that individuals should expect to have as they use these emerging technologies. The OSTP blueprint sends two messages. First, it acknowledges that AI is affecting -- and likely will transform -- everything: changing medical practices, businesses, how we buy products and how we interact with each other. Second, it highlights the fact that these technologies, while transformational, can also be harmful to people at an individual, group and societal scale, with the potential to extend and amplify discriminatory practices and privacy violations, or to produce systems that are neither safe nor effective.


3 things the AI Bill of Rights does (and 3 things it doesn't)

#artificialintelligence

Expectations were high when the White House released its Blueprint for an AI Bill of Rights on Tuesday. Developed by the White House Office of Science and Technology Policy (OSTP), the blueprint is a non-binding document that outlines five principles that should guide the design, use and deployment of automated systems, as well as technical guidance toward implementing the principles, including recommended action for a variety of federal agencies. For many, high expectations for dramatic change led to disappointment, including criticism that the AI Bill of Rights is "toothless" against artificial intelligence (AI) harms caused by big tech companies and is just a "white paper."


White House proposes voluntary safety and transparency rules around AI

#artificialintelligence

The White House this morning unveiled what it's colloquially calling an "AI Bill of Rights," which aims to establish tenets around the ways AI algorithms should be deployed as well as guardrails on their applications. In five bullet points crafted with feedback from the public, companies like Microsoft and Palantir and human rights and AI ethics groups, the document lays out safety, transparency and privacy principles that the Office of Science & Technology Policy (OSTP) -- which drafted the AI Bill of Rights -- argues will lead to better outcomes while mitigating harmful real-life consequences. The AI Bill of Rights mandates that AI systems be proven safe and effective through testing and consultation with stakeholders, in addition to continuous monitoring of the systems in production. It explicitly calls out algorithmic discrimination, saying that AI systems should be designed to protect both communities and individuals from biased decision-making. And it strongly suggests that users should be able to opt out of interactions with an AI system if they choose, for example in the event of a system failure.