

Can Artificial Intelligence Accelerate Technological Progress? Researchers' Perspectives on AI in Manufacturing and Materials Science

Nelson, John P., Olugbade, Olajide, Shapira, Philip, Biddle, Justin B.

arXiv.org Artificial Intelligence

Applications of artificial intelligence or machine learning in research
- Modes of use: surrogate modeling for physics-based models; modeling of poorly understood phenomena; data preprocessing; large language model use
- Applications (AI/ML as research tool): production process design, monitoring, and output prediction; part design and properties prediction; materials design and properties prediction
- Applications (AI/ML as research product): generative AI design tool for consumers
- Generic research tasks: large language models for coding; large language models for literature review

Benefits of artificial intelligence or machine learning in research
- Reduction in the accuracy/cost/speed trade-off in research, especially computer modeling: reduced computation time; replacing experimentation; reducing the need for computationally intensive, physics-based models; saving research labor; exploring larger design spaces
- Addressing previously unsolvable problems: modeling poorly understood relationships between variables; identifying patterns or phenomena humans cannot identify

Downsides of artificial intelligence or machine learning in research
- Accuracy weaknesses: models predict poorly outside regions of dense, high-quality training data
- Interpretability weaknesses: bounds of accuracy can be unclear; accuracy assessment can be difficult
- Long-run scientific progress concerns: AI/ML cannot develop novel scientific theory; AI/ML may bypass opportunities to identify empirical or theoretical novelties
- Resource issues: data acquisition and cleaning is time-intensive; AI/ML models are computation- and energy-intensive to develop
- Inappropriate use issues: easy to over-trust; may be inappropriately used to address problems soluble with simpler methods

Second, AI/ML models can be trained on input and output data for phenomena (e.g., complex production processes) that lack robust theoretical models, developing novel predictive capabilities in the absence of explicit, human-designed theory.
This is sometimes referred to as "phenomenological modeling," as it attempts to model phenomena in the absence of mechanistic, explanatory understanding: "[T]he first reason we choose to use AI is because we don't have a good model of what our system is. . . I get a bunch of data coming in and I have a bunch of sensor readings, you know. . . And I use the AI to map the bunch of sensor readings to the process health or process status or machine status that I have."
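The sensor-to-status mapping described in the quote can be sketched as a plain supervised-learning problem: fit a classifier on (sensor readings, status label) pairs with no mechanistic model of the process. The sketch below uses synthetic data and a simple least-squares linear classifier purely for illustration; the sensor channels, the "degraded" rule, and all thresholds are invented, not from the interviewed researchers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1000 snapshots of 8 sensor channels (e.g., temperature, vibration).
X = rng.normal(size=(1000, 8))
# Hypothetical ground truth: status degrades when two channels jointly drift.
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(float)  # 0 = healthy, 1 = degraded

# Train/test split.
X_tr, X_te = X[:750], X[750:]
y_tr, y_te = y[:750], y[750:]

# Phenomenological model: least-squares fit of readings -> status,
# learned directly from data with no theory of the process.
A_tr = np.hstack([X_tr, np.ones((750, 1))])  # add intercept column
w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

# Predict held-out machine status by thresholding the linear score.
A_te = np.hstack([X_te, np.ones((250, 1))])
pred = (A_te @ w > 0.5).astype(float)
accuracy = (pred == y_te).mean()
```

In practice one would use cross-validation and a richer model family, but the structure is the same: the model is a learned input-output map, not an explanation.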


Reviews: Perceiving the arrow of time in autoregressive motion

Neural Information Processing Systems

Originality: To the best of my knowledge, conducting human psychophysics and comparing human performance to computational models has not been done for the precise problem formulation examined by the authors. However, the authors did not describe previous work on anorthoscopic perception, which has examined similar questions (how do people perceive a figure that is revealed to them through a slit moving over it over time? Is the constructed perception equivalent in each order?) and has a long history dating back to Helmholtz. For a good review, see Rock, I. (1981). Quality: The human experiments and computational modeling are well conducted.


Computational Modelling of Quantifier Use: Corpus, Models, and Evaluation

Chen, Guanyi (Utrecht University) | van Deemter, Kees

Journal of Artificial Intelligence Research

A prominent strand of work in formal semantics investigates the ways in which human languages quantify the elements of a set, as when we say All A are B, Few A are B, and so on. Building on a growing body of empirical studies that shed light on the meaning and the use of quantifiers, we extend this line of work by computationally modelling how human speakers textually describe complex scenes in which quantitative relations play an important role. To this end, we conduct a series of elicitation experiments in which human speakers were asked to perform a linguistic task that invites the use of quantified expressions. The experiments result in a corpus, called qtuna, made up of short texts that contain a large variety of quantified expressions. We analyse qtuna, summarise our findings, and explain how we design computational models of human quantifier use accordingly. Finally, we evaluate these models against qtuna.


Computational modeling of semantic change

Tahmasebi, Nina, Dubossarsky, Haim

arXiv.org Artificial Intelligence

In this chapter we provide an overview of computational modeling for semantic change using large and semi-large textual corpora. We aim to provide a key for the interpretation of relevant methods and evaluation techniques, and also provide insights into important aspects of the computational study of semantic change. We discuss the pros and cons of different classes of models with respect to the properties of the data from which one wishes to model semantic change, and which avenues are available to evaluate the results.


Challenges and opportunities for machine learning in multiscale computational modeling

Nguyen, Phong C. H., Choi, Joseph B., Udaykumar, H. S., Baek, Stephen

arXiv.org Artificial Intelligence

Abstract: Many mechanical engineering applications call for multiscale computational modeling and simulation. However, solving for complex multiscale systems remains computationally onerous due to the high dimensionality of the solution space. Recently, machine learning (ML) has emerged as a promising solution that can either serve as a surrogate for, accelerate, or augment traditional numerical methods. Pioneering work has demonstrated that ML provides solutions to governing systems of equations with comparable accuracy to those obtained using direct numerical methods, but with significantly faster computational speed. These high-speed, high-fidelity estimations can facilitate the solving of complex multiscale systems by providing a better initial solution to traditional solvers. This paper provides a perspective on the opportunities and challenges of using ML for complex multiscale modeling and simulation. We first outline the current state-of-the-art ML approaches for simulating multiscale systems and highlight some of the landmark developments. Next, we discuss current challenges for ML in multiscale computational modeling, such as the data and discretization dependence, interpretability, and data sharing and collaborative platform development. Finally, we suggest several potential research directions for the future.

Keywords: Machine learning, Artificial intelligence, Computational modeling, Multiscale modeling
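The abstract's point that an ML surrogate can "provide a better initial solution to traditional solvers" can be illustrated on a toy problem. Below, a cheap polynomial surrogate is fitted offline to a few parameter-to-solution samples of a scalar nonlinear equation, then used online to warm-start Newton's method. The governing equation, parameter range, and polynomial degree are all invented for this sketch; real multiscale applications would use neural surrogates on PDE solution fields.

```python
import numpy as np

# Toy "governing equation": for a parameter p, solve f(x; p) = x^3 + p*x - 2 = 0.
def f(x, p):
    return x**3 + p * x - 2.0

def fprime(x, p):
    return 3.0 * x**2 + p

def newton(p, x0, tol=1e-10, max_iter=100):
    """Newton's method; returns the root and the iteration count."""
    x, n = x0, 0
    while abs(f(x, p)) > tol and n < max_iter:
        x -= f(x, p) / fprime(x, p)
        n += 1
    return x, n

# Offline stage: solve a handful of parameter samples from a cold start ...
ps = np.linspace(0.5, 3.0, 8)
xs = np.array([newton(p, x0=1.0)[0] for p in ps])
# ... and fit a cheap polynomial surrogate p -> x(p).
coeffs = np.polyfit(ps, xs, deg=3)

# Online stage: the surrogate supplies a near-converged initial guess,
# so the traditional solver needs fewer (expensive) iterations.
p_new = 1.7
x_cold, n_cold = newton(p_new, x0=1.0)
x_warm, n_warm = newton(p_new, x0=np.polyval(coeffs, p_new))
```

The warm start converges in no more iterations than the cold start while reaching the same tolerance, which is the essence of surrogate-accelerated solving.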


Mining the right transition metals in a vast chemical space

#artificialintelligence

Swift and significant gains against climate change require the creation of novel, environmentally benign, and energy-efficient materials. One of the richest veins researchers hope to tap in creating such useful compounds is a vast chemical space where molecular combinations that offer remarkable optical, conductive, magnetic, and heat transfer properties await discovery. But finding these new materials has been slow going. "While computational modeling has enabled us to discover and predict properties of new materials much faster than experimentation, these models aren't always trustworthy," says Heather J. Kulik PhD '09, associate professor in the departments of Chemical Engineering and Chemistry. "In order to accelerate computational discovery of materials, we need better methods for removing uncertainty and making our predictions more accurate."


Stanford Releases Report on the Current State of AI

#artificialintelligence

Artificial intelligence (AI) has significantly advanced in the past half decade and is making major inroads across many industries and sectors worldwide. Earlier this month, Stanford University released The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. The new Stanford AI100 report is the second in a series following the inaugural AI100 report published five years ago in September 2016. Stanford plans to continue to publish the AI100 report once every five years for a hundred years or longer. "The field of artificial intelligence has made remarkable progress in the past five years and is having real-world impact on people, institutions and culture," the researchers wrote.


Think Complexity, 2nd Edition - Programmer Books

#artificialintelligence

Complexity science uses computation to explore the physical and social sciences. In Think Complexity, you'll use graphs, cellular automata, and agent-based models to study topics in physics, biology, and economics. Whether you're an intermediate-level Python programmer or a student of computational modeling, you'll delve into examples of complex systems through a series of worked examples, exercises, case studies, and easy-to-understand explanations. Ideal as a text for a course on computational modeling in Python, Think Complexity also helps self-learners gain valuable experience with topics and ideas they might not encounter otherwise.
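One of the book's topics, cellular automata, fits in a few lines of Python. The sketch below implements a generic elementary cellular automaton update (defaulting to Rule 30) with wrap-around boundaries; it is a minimal illustration of the style of model the blurb mentions, not an excerpt from Think Complexity.

```python
import numpy as np

def step(cells, rule=30):
    """Advance an elementary cellular automaton by one step (default Rule 30)."""
    # Encode each cell's neighborhood as a 3-bit number: left*4 + center*2 + right,
    # with wrap-around at the edges.
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    codes = 4 * left + 2 * cells + right
    # Bit k of the rule number gives the next state for neighborhood code k.
    table = (rule >> np.arange(8)) & 1
    return table[codes]

# Start from a single live cell in the middle and evolve for a few steps.
cells = np.zeros(11, dtype=int)
cells[5] = 1
for _ in range(4):
    cells = step(cells)
```

Swapping the `rule` argument (0-255) changes the local update table, which is all that distinguishes the 256 elementary automata.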


Computational Models of Narrative: Review of the Workshop

AI Magazine

On October 8-10, 2009, an interdisciplinary group met in Beverly, Massachusetts, to evaluate the state of the art in the computational modeling of narrative. Three important findings emerged: (1) current work in computational modeling is described by three different levels of representation; (2) there is a paucity of studies at the highest, most abstract level aimed at inferring the meaning or message of the narrative; and (3) there is a need to establish a standard data bank of annotated narratives, analogous to the Penn Treebank. We use narratives to entertain, communicate, convince, and explain. One workshop participant noted that "as far as I know, every society in the world has stories, which suggests they have a psychological basis, that stories do something for you." To truly understand and explain human intelligence, reasoning, and beliefs, we need to understand why narrative is universal and explain the function it serves. Computational modeling is a natural method for investigating narrative. As a complex cognitive phenomenon, narrative touches on many areas that have traditionally been of interest to artificial intelligence researchers: its different facets draw on our capacities for natural language understanding and generation, commonsense reasoning, analogical reasoning, planning, physical perception (through imagination), and social cognition. Successful modeling will undoubtedly require researchers from these many perspectives and more, using a multitude of different techniques from the AI toolkit, ranging from, for example, detailed symbolic knowledge representation to large-scale statistical analyses. The relevance of AI to narrative, and vice versa, is compelling.


Robustness, Adaptivity, and Resiliency Analysis

Bankes, Steven Carl (BAE Systems)

AAAI Conferences

In order to better understand the mechanisms that lead to resiliency in natural systems, to support decisions that lead to greater resiliency in systems we affect, and to create models that will be utilized in highly resilient systems, methods for resiliency analysis will be required. Existing methods and technology for robustness analysis provide a foundation for a rigorous approach to resiliency analysis, but extensions are necessary to address the multiple time scales that must be modeled to understand highly adaptive systems. Further, if resiliency modeling is to be effective, it must be contextualized, requiring that the supporting software mirror the systems being modeled by being pace-layered and adaptive.