Differentiable Multiple Shooting Layers (Supplementary Material)

Neural Information Processing Systems

Let φ_θ(z, s, t) be the solution of (2.1). In this paper, we propose to either use the forward sensitivity approach of Proposition 1 or to rely on the zeroth-order approximation of parareal. Interpolation is used to obtain values of z(t) without a full backsolve from z(T). C.5 Broader Impact: Differential equations are the language of science and engineering. We consider a parametrization u_θ, with parameters θ, of the boundary controller π via a multi-layer perceptron.
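
The forward sensitivity approach mentioned above can be sketched on a toy scalar ODE. This is a minimal illustration, not the paper's implementation: the model f(z, θ) = −θz, the step count, and the explicit Euler integrator are all assumptions made so the result can be checked against the analytic solution.

```python
import math

# Forward sensitivity sketch for dz/dt = f(z, theta): integrate the
# variational equation ds/dt = (df/dz) s + df/dtheta, s(0) = 0, alongside
# the state, so s(T) approximates dz(T)/dtheta in one forward pass,
# with no backsolve from z(T).
# Toy model (assumed for illustration): f(z, theta) = -theta * z,
# with analytic solution z(T) = z0 * exp(-theta * T).

def forward_sensitivity(z0, theta, T, n_steps=100_000):
    dt = T / n_steps
    z, s = z0, 0.0
    for _ in range(n_steps):
        # For f = -theta * z: df/dz = -theta, df/dtheta = -z
        z, s = z + dt * (-theta * z), s + dt * (-theta * s - z)
    return z, s

z_T, dz_dtheta = forward_sensitivity(z0=1.0, theta=0.5, T=2.0)
# Analytic references: z(T) = e^{-1}, dz(T)/dtheta = -T * e^{-1}
```

The same single-pass structure is what makes forward sensitivities attractive inside a shooting layer: the sensitivity rides along with the state instead of requiring a separate backward solve.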


Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations

Neural Information Processing Systems

Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating-point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. PINNs can be used both to discover/infer PDEs that govern a given data set, and as direct PDE solvers.
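
The precision effect behind this can be illustrated with a plain finite-difference stencil. This is a sketch, not the DT-PINN code: the test function sin(x), the step size, and the random grid are assumptions. At a fixed step size, fp32 round-off dominates the second-derivative estimate, while fp64 stays near the truncation error, which is why discretized derivatives benefit from fp64.

```python
import numpy as np

# Central second difference of sin(x); the exact value is -sin(x).
# At h = 1e-3 the fp32 estimate is dominated by round-off
# (~eps_fp32 / h^2 ~ 0.1), while fp64 remains near the O(h^2)
# truncation error (~1e-7).
def second_diff_error(dtype, h=1e-3, n=100):
    rng = np.random.default_rng(0)
    x = rng.uniform(0.5, 1.5, n).astype(dtype)
    h = dtype(h)
    est = (np.sin(x + h) - 2 * np.sin(x) + np.sin(x - h)) / h**2
    exact = -np.sin(x.astype(np.float64))
    return float(np.max(np.abs(est - exact)))

err32 = second_diff_error(np.float32)
err64 = second_diff_error(np.float64)
```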


Enabling Automatic Differentiation with Mollified Graph Neural Operators

Lin, Ryan Y., Berner, Julius, Duruisseaux, Valentin, Pitt, David, Leibovici, Daniel, Kossaifi, Jean, Azizzadenesheli, Kamyar, Anandkumar, Anima

arXiv.org Artificial Intelligence

Physics-informed neural operators offer a powerful framework for learning solution operators of partial differential equations (PDEs) by combining data and physics losses. However, these physics losses rely on derivatives. Computing these derivatives remains challenging, with spectral and finite difference methods introducing approximation errors due to finite resolution. Here, we propose the mollified graph neural operator ($m$GNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries. This enhancement enables efficient training on irregular grids and varying geometries while allowing seamless evaluation of physics losses at randomly sampled points for improved generalization. For a PDE example on regular grids, $m$GNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences, although training was slower. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough. On these unstructured point clouds, $m$GNO leads to errors that are consistently 2 orders of magnitude lower than machine learning baselines (Meta-PDE, which accelerates PINNs) for comparable runtimes, and also delivers speedups from 1 to 3 orders of magnitude compared to the numerical solver for similar accuracy. $m$GNOs can also be used to solve inverse design and shape optimization problems on complex geometries.
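
The role of automatic differentiation in the physics loss can be sketched in 1-D. This is a minimal illustration, not the $m$GNO implementation: u(x) = sin(x) stands in for the learned operator output (an assumption made so the autograd derivative can be checked analytically), and the Poisson-style residual is likewise assumed.

```python
import torch

# Second derivative at arbitrary (here random) points via autograd:
# unlike spectral or finite-difference methods, there is no stencil
# and no resolution-induced approximation error.
def second_derivative(u, x):
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    return uxx

# Randomly sampled collocation points, as in the generalization argument above.
x = torch.rand(64, dtype=torch.float64, requires_grad=True)
uxx = second_derivative(torch.sin, x)   # autograd gives exactly -sin(x)
residual = uxx + torch.sin(x)           # residual of u_xx = -u for u = sin
physics_loss = residual.pow(2).mean()
```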


Gradient-based Fuzzy System Optimisation via Automatic Differentiation -- FuzzyR as a Use Case

Chen, Chao, Wagner, Christian, Garibaldi, Jonathan M.

arXiv.org Artificial Intelligence

Since their introduction, fuzzy sets and systems have become an important area of research known for its versatility in modelling, knowledge representation and reasoning, and increasingly its potential within the context of explainable AI. While the applications of fuzzy systems are diverse, there has been comparatively little advancement in their design from a machine learning perspective. In other words, while representations such as neural networks have benefited from a boom in learning capability driven by an increase in computational performance in combination with advances in their training mechanisms and available tools, in particular gradient descent, the impact on fuzzy system design has been limited. In this paper, we discuss gradient-descent-based optimisation of fuzzy systems, focussing in particular on automatic differentiation -- crucial to neural network learning -- with a view to freeing fuzzy system designers from intricate derivative computations, allowing for more focus on the functional and explainability aspects of their design. As a starting point, we present a use case in FuzzyR which demonstrates how current fuzzy inference system implementations can be adjusted to leverage powerful features of automatic differentiation toolsets, discussing its potential for the future of fuzzy system design.
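
The idea of letting autodiff carry the derivative bookkeeping can be sketched on a tiny fuzzy system. This is not FuzzyR's API (FuzzyR is an R package); it is a hedged PyTorch sketch in which the two-rule zero-order Takagi-Sugeno rule base, the Gaussian memberships, and the target data are all assumptions for illustration.

```python
import torch

# Two-rule zero-order Takagi-Sugeno system with Gaussian memberships.
# Autodiff supplies gradients w.r.t. the membership centres and rule
# consequents, so the designer never derives them by hand.
c = torch.tensor([0.0, 1.0], requires_grad=True)   # membership centres
w = torch.tensor([0.0, 0.0], requires_grad=True)   # rule consequents
sigma = 0.5                                        # assumed fixed width

def fis(x):
    mu = torch.exp(-(x[:, None] - c) ** 2 / (2 * sigma**2))
    return (mu * w).sum(dim=1) / mu.sum(dim=1)     # weighted-average defuzzification

x = torch.linspace(0.0, 1.0, 32)
y = x**2                                           # assumed target data
opt = torch.optim.Adam([c, w], lr=0.05)
loss0 = torch.mean((fis(x) - y) ** 2).item()
for _ in range(300):
    opt.zero_grad()
    loss = torch.mean((fis(x) - y) ** 2)
    loss.backward()                                # gradients via autodiff
    opt.step()
```

The point of the sketch is the absence of any hand-derived gradient expressions: swapping in a different membership function or defuzzification step changes nothing in the training loop.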


Leveraging AI to Advance Science and Computing Education across Africa: Progress, Challenges, and Opportunities

Boateng, George

arXiv.org Artificial Intelligence

Across the African continent, students grapple with various educational challenges, including limited access to essential resources such as computers, internet connectivity, reliable electricity, and a shortage of qualified teachers. Despite these challenges, recent advances in AI such as BERT and GPT-4 have demonstrated their potential for advancing education. Yet, these AI tools tend to be deployed and evaluated predominantly within the context of Western educational settings, with limited attention directed towards the unique needs and challenges faced by students in Africa. In this book chapter, we describe our work developing and deploying AI in Education tools in Africa: (1) SuaCode, an AI-powered app that enables Africans to learn to code using their smartphones, (2) AutoGrad, an automated grading and feedback tool for graphical and interactive coding assignments, (3) a tool for code plagiarism detection that shows visual evidence of plagiarism, (4) Kwame, a bilingual AI teaching assistant for coding courses, (5) Kwame for Science, a web-based AI teaching assistant that provides instant answers to students' science questions and (6) Brilla AI, an AI contestant for the National Science and Maths Quiz competition. We discuss challenges and potential opportunities to use AI to advance science and computing education across Africa.


HOPE: High-order Polynomial Expansion of Black-box Neural Networks

Xiao, Tingxiong, Zhang, Weihang, Cheng, Yuxiao, Suo, Jinli

arXiv.org Artificial Intelligence

Despite their remarkable performance, deep neural networks remain mostly ``black boxes'', suggesting inexplicability and hindering their wide applications in fields requiring making rational decisions. Here we introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial on a reference input. Specifically, we derive the high-order derivative rule for composite functions and extend the rule to neural networks to obtain their high-order derivatives quickly and accurately. From these derivatives, we can then derive the Taylor polynomial of the neural network, which provides an explicit expression of the network's local interpretations. Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method. Moreover, we demonstrate HOPE's wide applications built on deep learning, including function discovery, fast inference, and feature selection. The code is available at https://github.com/HarryPotterXTX/HOPE.git.
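
The core operation, extracting Taylor coefficients from a differentiable function, can be sketched with nested autograd calls. This is a simplified illustration, not HOPE itself: the paper's contribution is a dedicated high-order derivative rule that avoids the poor scaling of naive nesting shown here, and the test function x⁴ and expansion point are assumptions.

```python
import math
import torch

# Taylor coefficients c_k = f^(k)(x0) / k! via nested autograd.
# Naive nesting like this scales poorly with order; HOPE's point is to
# obtain these derivatives quickly via an explicit high-order rule.
def taylor_coefficients(f, x0, order):
    x = torch.tensor(x0, requires_grad=True)
    coeffs, d = [], f(x)
    for k in range(order + 1):
        coeffs.append(d.item() / math.factorial(k))
        if k < order:
            d = torch.autograd.grad(d, x, create_graph=True)[0]
    return coeffs

# f(x) = x^4 at x0 = 1: derivatives 1, 4, 12, 24 -> coefficients 1, 4, 6, 4
coeffs = taylor_coefficients(lambda t: t**4, 1.0, 3)
```

The resulting polynomial is the "explicit expression of the network's local interpretations" the abstract refers to: each coefficient quantifies a local interaction order around the reference input.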


SHoP: A Deep Learning Framework for Solving High-order Partial Differential Equations

Xiao, Tingxiong, Yang, Runzhao, Cheng, Yuxiao, Suo, Jinli, Dai, Qionghai

arXiv.org Artificial Intelligence

Solving partial differential equations (PDEs) has been a fundamental problem in computational science, with wide applications in both scientific and engineering research. Due to their universal approximation property, neural networks are widely used to approximate the solutions of PDEs. However, existing works are incapable of solving high-order PDEs due to insufficient calculation accuracy of higher-order derivatives, and the final network is a black box without explicit explanation. To address these issues, we propose a deep learning framework to solve high-order PDEs, named SHoP. Specifically, we derive the high-order derivative rule for neural networks, to get the derivatives quickly and accurately; moreover, we expand the network into a Taylor series, providing an explicit solution for the PDEs. We conduct experimental validation on four high-order PDEs with different dimensions, showing that we can solve high-order PDEs efficiently and accurately.
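
The kind of high-order derivative accuracy at stake can be checked on a known solution. This is a sketch, not SHoP's derivative rule: u(x) = sin(x) stands in for a trained network (an assumption made so the fourth derivative can be verified), using the fact that sin satisfies the fourth-order equation u'''' = u.

```python
import torch

# Fourth derivative via repeated autograd; for u = sin this is exact
# (analytic chain-rule gradients), unlike low-order numerical stencils
# whose error grows with each differentiation.
def nth_derivative(u, x, n):
    d = u(x)
    for _ in range(n):
        d = torch.autograd.grad(d.sum(), x, create_graph=True)[0]
    return d

x = torch.linspace(0.0, 3.0, 32, dtype=torch.float64, requires_grad=True)
u4 = nth_derivative(torch.sin, x, 4)
# Residual of the fourth-order equation u'''' - u = 0
residual = (u4 - torch.sin(x)).abs().max()
```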


The official PyTorch 2.0 release is here!

#artificialintelligence

At PyTorch Conference 2022, the development team introduced PyTorch 2.0 and announced that the stable version would be officially released in March of this year. The official PyTorch 2.0 release has now arrived as scheduled.


Unsupervised physics-informed neural network in reaction-diffusion biology models (Ulcerative colitis and Crohn's disease cases) A preliminary study

Rebai, Ahmed, Boukhris, Louay, Toujani, Radhi, Gueddiche, Ahmed, Banna, Fayad Ali, Souissi, Fares, Lasram, Ahmed, Rayana, Elyes Ben, Zaag, Hatem

arXiv.org Artificial Intelligence

We propose to explore the potential of physics-informed neural networks (PINNs) in solving a class of partial differential equations (PDEs) used to model the propagation of chronic inflammatory bowel diseases, such as Crohn's disease and ulcerative colitis. An unsupervised approach was adopted for training the deep neural network. Given the complexity of the underlying biological system, characterized by intricate feedback loops and limited availability of high-quality data, the aim of this study is to explore the potential of PINNs in solving PDEs. In addition to providing this exploratory assessment, we also aim to emphasize the principles of reproducibility and transparency in our approach, with a specific focus on ensuring robustness and generalizability through the use of artificial intelligence. We will quantify the relevance of the PINN method with several linear and non-linear PDEs in relation to biology. However, it is important to note that the final solution is dependent on the initial conditions, chosen boundary conditions, and neural network architectures.
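
The unsupervised setup described above can be sketched for a Fisher-KPP-type reaction-diffusion equation u_t = D u_xx + r u(1 − u). The specific equation, network size, and coefficients are assumptions for illustration, not the paper's configuration; the point is that the loss is built purely from the PDE residual at collocation points, with no labelled data.

```python
import torch

torch.manual_seed(0)
# Tiny PINN: maps (x, t) to u(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
D, r = 0.1, 1.0                      # assumed diffusion / reaction rates

def pde_residual_loss(n_points=256):
    # Random collocation points (x, t) in the unit square; no labels needed.
    xt = torch.rand(n_points, 2, requires_grad=True)
    u = net(xt).squeeze(1)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
    # Residual of u_t = D u_xx + r u (1 - u)
    residual = u_t - D * u_xx - r * u * (1 - u)
    return residual.pow(2).mean()

loss = pde_residual_loss()
loss.backward()                      # gradients flow to all network weights
```

Initial and boundary conditions, which the abstract notes the final solution depends on, would enter as additional penalty terms alongside this residual loss.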