cyberinfrastructure
Toward Smart Scheduling in Tapis
Stubbs, Joe, Padhy, Smruti, Cardone, Richard
The Tapis framework provides APIs for automating job execution on remote resources, including HPC clusters and servers running in the cloud. Tapis can simplify the interaction with remote cyberinfrastructure (CI), but the current services require users to specify the exact configuration of a job to run, including the system, queue, node count, and maximum run time, among other attributes. Moreover, the remote resources must be defined and configured in Tapis before a job can be submitted. In this paper, we present our efforts to develop an intelligent job scheduling capability in Tapis, where various attributes of a job configuration can be automatically determined for the user, and computational resources can be dynamically provisioned by Tapis for specific jobs. We develop an overall architecture for such a feature, which suggests a set of core challenges to be solved. We then focus on one specific challenge: predicting queue times for a job on different HPC systems and queues, and we present two sets of results based on machine learning methods. Our first set of results casts the problem as a regression task, whose predictions can be used to select the best system from a list of existing options. Our second set of results frames the problem as a classification task, allowing us to compare the use of an existing system with a dynamically provisioned resource.
- North America > United States > Texas > Travis County > Austin (0.15)
- North America > United States > Texas > Shelby County > Center (0.05)
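The queue-time prediction described in the Tapis abstract above can be sketched in a few lines. The sketch below casts the task as a regression over historical job records, as the abstract does, but the feature set, the synthetic training data, and the candidate system names are illustrative assumptions, not the paper's actual data or any Tapis API:

```python
# Hedged sketch of queue-wait prediction as regression (not the paper's code).
# Features, training data, and system names below are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in for historical job logs: [nodes requested, wall time (min), jobs ahead in queue].
X = rng.uniform([1, 10, 0], [64, 2880, 200], size=(500, 3))
# Synthetic queue waits (minutes): grow with queue depth and requested resources.
y = 0.5 * X[:, 2] + 0.05 * X[:, 0] * X[:, 1] / 60 + rng.normal(0, 5, 500)

model = GradientBoostingRegressor().fit(X, y)

def pick_system(job, candidates):
    """Return the candidate (system, queue) with the lowest predicted wait."""
    waits = {name: float(model.predict([[job["nodes"], job["walltime"], depth]])[0])
             for name, depth in candidates.items()}
    return min(waits, key=waits.get), waits

best, waits = pick_system({"nodes": 16, "walltime": 120},
                          {"system_a/normal": 80, "system_b/short": 15})
print(best, waits)
```

In the classification framing mentioned in the abstract, the same features could instead feed a classifier that predicts whether an existing queue or a dynamically provisioned cloud resource would start the job sooner.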
Towards an Integrated Performance Framework for Fire Science and Management Workflows
Ahmed, H., Shende, R., Perez, I., Crawl, D., Purawat, S., Altintas, I.
Reliable performance metrics are necessary prerequisites to building large-scale end-to-end integrated workflows for collaborative scientific research, particularly within the context of use-inspired decision-making platforms with many concurrent users and when computing real-time, urgent results from large data. This work is a building block for the National Data Platform, which leverages multiple use cases including the WIFIRE Data and Model Commons for wildfire behavior modeling and the EarthScope Consortium for collaborative geophysical research. This paper presents an artificial intelligence and machine learning (AI/ML) approach to performance assessment and optimization of scientific workflows. An associated early AI/ML framework spanning performance data collection, prediction, and optimization is applied to wildfire science applications within the WIFIRE BurnPro3D (BP3D) platform for proactive fire management and mitigation.
- North America > United States > California > San Diego County > San Diego (0.07)
- North America > United States > California > San Diego County > La Jolla (0.05)
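As a companion to the BP3D abstract above, the snippet below sketches the "collect metrics, then predict performance" idea in its simplest form: fit a regressor on per-run metrics and query it before launching a new configuration. The metric fields and synthetic data are assumptions for illustration, not the WIFIRE/BP3D schema or framework API:

```python
# Hedged sketch of workflow runtime prediction from collected run metrics.
# Field names and synthetic values are illustrative, not the BP3D data model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stand-in for collected run metrics: [landscape cells, ensemble members, CPU cores].
runs = rng.uniform([1e4, 1, 4], [1e6, 64, 128], size=(300, 3))
# Synthetic runtimes (s): scale with work per core, plus measurement noise.
runtime = runs[:, 0] * runs[:, 1] / (2e3 * runs[:, 2]) + rng.normal(0, 10, 300)

predictor = RandomForestRegressor(n_estimators=200, random_state=0).fit(runs, runtime)

# Estimate runtime for a proposed configuration before launching it.
proposed = [[5e5, 32, 64]]
print(f"predicted runtime: {predictor.predict(proposed)[0]:.0f} s")
```

A prediction of this kind is one input a workflow optimizer could weigh when trading runtime against resource cost.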
A2CI: A Cloud-based, Service-oriented Geospatial Cyberinfrastructure to Support Atmospheric Research
Li, Wenwen, Shao, Hu, Wang, Sizhe, Zhou, Xiran, Wu, Sheng
In recent years, atmospheric research has received increasing attention from environmental experts and the public because atmospheric phenomena such as El Niño, global warming, ozone depletion, and drought that may have negative effects on the Earth's climate and ecosystem are occurring more often (Walther et al. 2002; Karl and Trenberth 2003; Trenberth et al. 2014). To model the current state and predict the trends of atmospheric phenomena and events, researchers need to retrieve data from various relevant domains, such as the chemical components of aerosols and gases, the terrestrial surface, energy consumption, the hydrosphere, the biosphere, etc. (Schneider, 2006; Fowler et al., 2009; Guilyardi et al., 2009; Ramanathan et al., 2011; Katul et al., 2012). In complex earth system modeling, the data and services for atmospheric study are characteristically distributed, collaborative, and adaptive (Plale et al., 2006). The massive volume, rapid velocity, and wide variety of data have led to a new era of atmospheric research that consists of accessing and integrating big data from distributed sources, conducting collaborative analysis in an interactive way, and providing intelligent services for data management, integration, and visualization to foster discovery of hidden or new knowledge. One of the most important ways to support these activities is to establish a national or international spatial data infrastructure and geospatial cyberinfrastructure on which data and computational resources can be easily shared, spatial analysis tools can be executed on the fly, and scientific results can be effectively visualized (Yang et al., 2008; Li et al., 2011). Technically, a geospatial cyberinfrastructure (GCI) is an architecture that effectively utilizes geo-referenced data to connect people, information, and computers based on standardized data access protocols, the high-speed internet, high-performance computing (HPC) facilities, and service-oriented data management (Yang et al., 2010). Since the concept's official introduction by the National Science Foundation (NSF) in its 2003 blue-ribbon report, cyberinfrastructure research has attracted much attention from the atmospheric science domain because of its promise of bringing a paradigm change to atmospheric research.
- Pacific Ocean (0.05)
- Atlantic Ocean (0.05)
- Oceania > Australia (0.04)
- Information Technology > Information Management (1.00)
- Information Technology > Data Science (1.00)
- Information Technology > Communications > Web (1.00)
National Artificial Intelligence Research Resource Task Force Releases Final Report
Today, the National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report, a roadmap for standing up a national research infrastructure that would broaden access to the resources essential to artificial intelligence (AI) research and development. While AI research and development (R&D) in the United States is advancing rapidly, opportunities to pursue cutting-edge AI research and new AI applications are often inaccessible to researchers beyond those at well-resourced companies, organizations, and academic institutions. A NAIRR would change that by providing AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support--fueling greater innovation and advancing AI that serves the public good. "AI advances hold tremendous promise for tackling our hardest problems and achieving our greatest aspirations," said Arati Prabhakar, OSTP Director and Assistant to the President for Science and Technology. "We will only realize this potential when many more kinds of researchers have access to the powerful capabilities that underpin AI advances."
- North America > United States (0.98)
- Europe > Ukraine (0.40)
NSF-led National Artificial Intelligence Research Resource Task Force Releases Final Report
Today, the National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report, a roadmap for standing up a national research infrastructure that would democratize access to the resources essential to artificial intelligence (AI) research and development. Established by the National AI Initiative Act of 2020, the NAIRR Task Force is a federal advisory committee. Co-chaired by the U.S. National Science Foundation and the White House Office of Science and Technology Policy, the Task Force has equal representation from government, academia, and private organizations. Following its launch in June 2021, the Task Force embarked on a rigorous, open process that culminated in this final report. This process included 11 public meetings and two formal requests for information to gather public input.
Transforming Science through Cyberinfrastructure
Advanced cyberinfrastructure (CI) is critical to science and engineering (S&E) research. For example, over the past two years, CI resources (including those provided by the COVID-19 HPC Consortium) enabled research that dramatically accelerated efforts to understand, respond to, and mitigate near- and longer-term impacts of the novel coronavirus disease 2019 (COVID-19) pandemic. Computer-based epidemiology models informed public policy in the U.S., and in countries throughout the world, and newly studied transmission models for the virus have been used to forecast resource availability and mortality stratified by age group at the county level. Artificial intelligence and machine learning approaches accelerated drug screening to find candidate medicines from trillions of possible chemical compounds, and differential gene expressions among COVID-19 patient populations have been analyzed with important implications for treatment planning. Structural modeling of the virus has led to new insights, speeding the development of vaccines and antigens.
DK Panda
Imagine a world where a farmer's smart phone predicts the perfect day to harvest. Or a governor can dial up exactly how to enhance food security prior to a hurricane. It would take seamless access to a highly technical artificial intelligence (AI) infrastructure, but Ohio State's Dhabaleswar K. (DK) Panda is working to get us there. "If you look at AI, it's become very important but it is limited to only advanced technical people," said Panda, professor of computer science and engineering at Ohio State. "How do we take it to the masses? We want to create a plug-and-play AI that will be democratized so anybody can use it."
Artificial intelligence for the masses
It takes real intelligence and plenty of collaborative muscle to harness the potential of artificial intelligence. Most of us can barely grasp the concept of human-made machines learning how to process and analyze enormous amounts of data, then using that mass of information to understand things at new scales and in new combinations, delivering useful insights that our brains would never be able to produce on their own. Now University of Delaware Prof. Rudolf Eigenmann, interim chair of the Department of Computer and Information Sciences and professor of electrical and computer engineering, is playing a critical role in a new $20 million National Science Foundation-supported project designed to expand access to artificial intelligence. AI for the masses, you might call it. The project, called the NSF AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), is one of 11 new National Artificial Intelligence Research Institutes the NSF announced recently. It is the second year of such investment by NSF.
- North America > United States > Texas > Travis County > Austin (0.16)
- Europe > Switzerland > Zürich > Zürich (0.16)
- North America > United States > Ohio (0.09)
Two IU Schools Part of $40M Artificial Intelligence Research
Indiana University will be a principal organization in two of the U.S. National Science Foundation's 11 new NSF National Artificial Intelligence Research Institutes, helping advance artificial intelligence to improve people's lives. Overall, the 11 institutes are focused on AI-based technologies that will result in advances such as helping older adults lead more independent lives, creating solutions to improve agriculture and food supply chains, and transforming AI into accessible "plug-and-play" technology, the NSF said in an announcement. The IU Luddy School of Informatics, Computing and Engineering and the Center of Excellence for Women and Technology, both based at IU Bloomington, will collaborate with the AI Institute for Intelligent Cyberinfrastructure with Computational Learning. Researchers from the IU School of Education at IU Bloomington and the Luddy School are part of the team for the NSF AI Institute for Engaged Learning, which will advance natural language processing, computer vision and machine learning to engage learners in AI-driven narrative-centered learning environments, particularly in STEM. Students will be engaged through story-based problem scenarios.
- North America > United States > Indiana (0.25)
- North America > United States > North Carolina (0.06)
- Education (1.00)
- Food & Agriculture > Agriculture (0.37)
Machine learning aids earthquake risk prediction
Our homes and offices are only as solid as the ground beneath them. When that solid ground turns to liquid--as sometimes happens during earthquakes--it can topple buildings and bridges. This phenomenon is known as liquefaction, and it was a major feature of the 2011 earthquake in Christchurch, New Zealand, a magnitude 6.3 quake that killed 185 people and destroyed thousands of homes. An upside of the Christchurch quake was that it was one of the most well-documented in history. Because New Zealand is seismically active, the city was instrumented with numerous sensors for monitoring earthquakes.
- Oceania > New Zealand > South Island > Canterbury Region > Christchurch (0.25)
- North America > United States > California > Los Angeles County > Los Angeles (0.15)