A cusp-capturing PINN for elliptic interface problems

Tseng, Yu-Hau, Lin, Te-Sheng, Hu, Wei-Fan, Lai, Ming-Chih

arXiv.org Artificial Intelligence

In this paper, we propose a cusp-capturing physics-informed neural network (PINN) to solve discontinuous-coefficient elliptic interface problems whose solution is continuous but has discontinuous first derivatives on the interface. To find such a solution using a neural network representation, we introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties; that is, to capture the solution cusps (where the derivatives are discontinuous) sharply. In addition, the proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains. We train the network using the physics-informed framework, in which the loss function comprises the residual of the differential equation together with the interface and boundary conditions. We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model. Numerical results show that, even using a one-hidden-layer (shallow) network with a moderate number of neurons and sufficient training data points, the present network model can achieve prediction accuracy comparable with traditional methods. Moreover, if the solution itself is discontinuous across the interface, an additional supervised learning task for approximating the solution jump can be incorporated into the present network without much difficulty.
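The key idea of the augmented input can be illustrated with a tiny sketch (the interface, weights, and level-set function below are illustrative assumptions, not the paper's trained model): feeding the network the absolute value of a level-set function phi gives a feature that is continuous but has a gradient jump across the interface phi = 0, so a smooth one-hidden-layer network can represent a derivative discontinuity there.

```python
import math

# Hypothetical interface: the unit circle, with level-set function
# phi(x, y) = x^2 + y^2 - 1 (negative inside, positive outside).
def phi(x, y):
    return x * x + y * y - 1.0

def network(x, y):
    # Augmented feature: |phi| is continuous, but its gradient flips
    # sign across phi = 0, which lets the smooth network below
    # represent a solution cusp on the interface.
    z = abs(phi(x, y))
    # Tiny one-hidden-layer tanh network; weights are illustrative,
    # not trained.
    h1 = math.tanh(0.5 * x + 0.3 * y + 1.2 * z)
    h2 = math.tanh(-0.4 * x + 0.7 * y + 0.9 * z)
    return 0.8 * h1 - 0.6 * h2

# One-sided finite-difference x-derivatives just inside and just
# outside the interface at (1, 0): their mismatch shows the cusp.
eps = 1e-4
d_in = (network(1.0 - eps, 0.0) - network(1.0 - 2 * eps, 0.0)) / eps
d_out = (network(1.0 + 2 * eps, 0.0) - network(1.0 + eps, 0.0)) / eps
```

Without the `z` input the network output would be smooth everywhere, and `d_in` and `d_out` would agree to first order; with it, the normal derivative jumps across the interface even though the network itself uses only smooth activations.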


Analyzing Cross-Connected Networks

Neural Information Processing Systems

The non-linear complexities of neural networks make network solutions difficult to understand. Sanger's contribution analysis is here extended to the analysis of networks automatically generated by the cascade-correlation learning algorithm. Because such networks have cross connections that supersede hidden layers, standard analyses of hidden unit activation patterns are insufficient. A contribution is defined as the product of an output weight and the associated activation on the sending unit, whether that sending unit is an input or a hidden unit, multiplied by the sign of the output target for the current input pattern. Intercorrelations among contributions, as gleaned from the matrix of contributions × input patterns, can be subjected to principal components analysis (PCA) to extract the main features of variation in the contributions.
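The contribution definition above can be sketched in a few lines (the toy cross-connected network, its weights, and targets below are illustrative assumptions, not the paper's networks): each sending unit's contribution is its output weight times its activation times the sign of the target, collected over patterns and then decomposed with PCA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cross-connected network: 2 inputs and 1 hidden unit, where the
# output sees both inputs directly (cross connections) plus the
# hidden unit, as in cascade-correlation.
patterns = rng.uniform(-1, 1, size=(8, 2))          # 8 input patterns
targets = np.sign(patterns[:, 0] + patterns[:, 1])  # toy targets

w_hidden = np.array([0.7, -0.4])    # input -> hidden weights
w_out = np.array([0.9, -0.5, 1.3])  # output weights: in1, in2, hidden

hidden = np.tanh(patterns @ w_hidden)          # hidden activations
senders = np.column_stack([patterns, hidden])  # all sending units

# Contribution of each sending unit for each pattern:
# output weight * sending activation * sign(target).
contributions = w_out * senders * targets[:, None]  # shape (8, 3)

# PCA on the contributions-by-patterns matrix: eigen-decompose the
# covariance of the contribution columns to find the main features
# of variation.
cov = np.cov(contributions, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # variances, descending
```

The leading eigenvalues (and their eigenvectors, from `np.linalg.eigh`) summarize which sending units co-vary in their contributions across patterns, which is the quantity the analysis interprets.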


Samsung Offers Guide To Help Enterprises Build Private 5G Networks Best Fit for Their Business

#artificialintelligence

Samsung Electronics today released the second edition of its private 5G networks whitepaper, highlighting the architectures, features and benefits of private 5G networks for industrial scenarios such as smart factories, smart hospitals, smart logistics and transportation, among others. With the growing interest in private networks, Samsung explores how enterprises can successfully deploy private 5G networks to meet business goals and service demands. The whitepaper outlines various architectural options for building private networks that enable 5G services such as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC) and Massive Machine Type Communications (mMTC), which can bring new innovation to a range of sectors rapidly transitioning to Industry 4.0. The paper spotlights Samsung's complete set of private 5G network solutions, which enable enterprises to simplify network deployment and operation. With a portfolio and the capability to build highly reliable private 5G networks, Samsung offers solutions for small, medium, and large-scale enterprises.


Deep Energy: Using Energy Functions for Unsupervised Training of DNNs

Golts, Alona, Freedman, Daniel, Elad, Michael

arXiv.org Machine Learning

The success of deep learning has been due in no small part to the availability of large annotated datasets. Thus, a major bottleneck in the current learning pipeline is the human annotation of data, which can be quite time consuming. For a given problem setting, we aim to circumvent this issue via the use of an externally specified energy function appropriate for that setting; we call this the "Deep Energy" approach. We show how to train a network on an entirely unlabelled dataset using such an energy function, and apply this general technique to learn CNNs for two specific tasks: seeded segmentation and image matting. Once the network parameters have been learned, we obtain a high-quality solution in a fast feed-forward style, without the need to repeatedly optimize the energy function for each image.
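The training principle can be sketched with a deliberately small stand-in (the linear "network", the quadratic smoothing energy, and all parameters below are illustrative assumptions, not the paper's CNNs or its segmentation/matting energies): instead of a supervised loss against labels, gradient descent minimizes an externally specified energy of the network's prediction on unlabelled data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled "training data": a noisy 1-D signal to be denoised.
y = np.sin(np.linspace(0, np.pi, 32)) + 0.3 * rng.standard_normal(32)

# A single linear layer as a toy network, applied as u = W @ y.
W0 = np.eye(32)
W = W0.copy()

lam = 2.0
D = np.diff(np.eye(32), axis=0)  # finite-difference operator

def energy(W):
    # Externally specified energy: data fidelity + smoothness penalty,
    # E(u) = ||u - y||^2 + lam * ||D u||^2. No labels appear anywhere.
    u = W @ y
    return np.sum((u - y) ** 2) + lam * np.sum((D @ u) ** 2)

# "Training": gradient descent on the energy with respect to the
# network parameters W (dE/dW = dE/du * y^T by the chain rule).
lr = 2e-3
for _ in range(200):
    u = W @ y
    grad_u = 2 * (u - y) + 2 * lam * (D.T @ (D @ u))
    W -= lr * np.outer(grad_u, y)
```

After training, applying `W` is a single feed-forward pass that produces a low-energy (smoothed) output directly, rather than re-running an iterative energy minimization per input, which mirrors the paper's motivation for learning the minimizer once.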


Constraints on Adaptive Networks for Modeling Human Generalization

Gluck, Mark A., Pavel, M., Henkle, Van

Neural Information Processing Systems

The potential of adaptive networks to learn categorization rules and to model human performance is studied by comparing how natural and artificial systems respond to new inputs, i.e., how they generalize. Like humans, networks can learn a deterministic categorization task by a variety of alternative individual solutions. An analysis of the constraints imposed by using networks with the minimal number of hidden units shows that this "minimal configuration" constraint is not sufficient. A further analysis of human and network generalizations indicates that initial conditions may provide important constraints on generalization. A new technique, which we call "reversed learning", is described for finding appropriate initial conditions. We are investigating the potential of adaptive networks to learn categorization tasks and to model human performance.

