neural block
Ray-Tracing for Conditionally Activated Neural Networks
Gallicchio, Claudio, Nuti, Giuseppe
ABSTRACT In this paper, we introduce a novel architecture for conditionally activated neural networks, combining a hierarchical construction of multiple Mixture of Experts (MoE) layers with a sampling mechanism that progressively converges to an optimized configuration of expert activation. This methodology enables the dynamic unfolding of the network's architecture, facilitating efficient path-specific training. Experimental results demonstrate that this approach achieves competitive accuracy compared to conventional baselines while significantly reducing the parameter count required for inference. The approach we propose implements a neural network in which blocks (experts) are stacked over multiple layers. By expressing each block's output as the expected firing rate of a stochastic calculation path, we can simultaneously solve the inference and selective-activation problems. Importantly, since we model every block's output as its expected activation rate, initiating a computational path from the input nodes or from within a block in the middle of the network yields comparable results, allowing for a variety of new computational approaches that balance width-first versus depth-first computation.
- North America > United States (0.29)
- Europe > Italy (0.14)
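The abstract's core idea, taking each block's output to be its gate-weighted expectation over experts, can be illustrated with a minimal NumPy sketch. All names and shapes below are hypothetical illustrations of the "expected firing rate" view, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """One layer of experts whose output is the gate-weighted expectation
    over expert outputs (the 'expected firing rate' interpretation)."""
    def __init__(self, dim, n_experts):
        self.gate_w = rng.normal(0, 0.1, (dim, n_experts))
        self.experts = [rng.normal(0, 0.1, (dim, dim)) for _ in range(n_experts)]

    def forward(self, x):
        probs = softmax(x @ self.gate_w)                 # expert activation probabilities
        outs = np.stack([np.tanh(x @ w) for w in self.experts], axis=-1)
        return (outs * probs[:, None, :]).sum(axis=-1)   # expectation over experts

# Stack two MoE layers, as in the hierarchical construction described above.
x = rng.normal(size=(4, 8))
layer1, layer2 = MoELayer(8, 3), MoELayer(8, 3)
y = layer2.forward(layer1.forward(x))
```

At inference time one could instead sample or threshold `probs` to activate only a subset of experts; the expectation form used here is what makes the two views interchangeable in the abstract's framing.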
CLEARER: Multi-Scale Neural Architecture Search for Image Restoration
Multi-scale neural networks have shown effectiveness in image restoration tasks, yet they are usually designed and integrated in a handcrafted manner. Departing from these labor-intensive handcrafted design paradigms, we present a novel method, termed multi-sCaLe nEural ARchitecture sEarch for image Restoration (CLEARER), a neural architecture search (NAS) specifically designed for image restoration. On one hand, we design a multi-scale search space consisting of three task-flexible modules: 1) a Parallel module that connects multi-resolution neural blocks in parallel while preserving the channels and spatial resolution of each neural block; 2) a Transition module that retains the existing multi-resolution features while extending them to a lower resolution; and 3) a Fusion module that integrates multi-resolution features by passing the features of the parallel neural blocks to the current neural blocks. On the other hand, we present novel losses that 1) balance the tradeoff between model complexity and performance, which is highly desirable for image restoration; and 2) relax the discrete architecture parameters into a continuous distribution that concentrates near either 0 or 1. As a result, a differentiable strategy can be employed to search when to fuse or extract multi-resolution features, while the discretization issue faced by gradient-based NAS is alleviated.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Systems & Languages > Problem-Independent Architectures (0.87)
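The relaxation described in the second loss, continuous architecture parameters pushed toward discrete 0/1 values, can be sketched in a few lines. This is a generic sigmoid relaxation with a binarization penalty, offered as an assumed illustration of the idea; the paper's actual loss may differ:

```python
import numpy as np

def relax(alpha, temperature=1.0):
    """Sigmoid relaxation mapping a real-valued architecture logit
    to a soft selection probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-alpha / temperature))

def binarization_loss(p):
    """Penalty that is zero at p = 0 or p = 1 and maximal at p = 0.5,
    pushing the relaxed parameters toward discrete choices."""
    return np.sum(p * (1.0 - p))

alpha = np.array([-3.0, 0.1, 2.5])   # hypothetical learnable architecture logits
p = relax(alpha)                      # soft decisions: fuse/extract or not
loss = binarization_loss(p)
```

Because `relax` and `binarization_loss` are differentiable in `alpha`, the architecture decisions can be optimized jointly with the network weights by gradient descent, which is the differentiable-search strategy the abstract refers to.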
Solving Falkner-Skan type equations via Legendre and Chebyshev Neural Blocks
Aghaei, Alireza Afzal, Parand, Kourosh, Nikkhah, Ali, Jaberi, Shakila
In this paper, a new deep-learning architecture for solving the non-linear Falkner-Skan equation is proposed. Using Legendre and Chebyshev neural blocks, this approach shows how orthogonal polynomials can be used in neural networks to increase the approximation capability of artificial neural networks. In addition, by exploiting the mathematical properties of these functions, we reduce the computational complexity of the backpropagation algorithm through operational matrices of the derivative. The efficiency of the proposed method is demonstrated by simulating various configurations of the Falkner-Skan equation.
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Singapore (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
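The building block behind a "Chebyshev neural block" is a feature map of Chebyshev polynomials evaluated via their three-term recurrence. The sketch below is a minimal assumed illustration of that idea (the paper's blocks, weights, and operational-matrix machinery are more elaborate):

```python
import numpy as np

def chebyshev_features(x, degree):
    """Evaluate T_0..T_degree at x (x assumed in [-1, 1]) using the
    recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:degree + 1], axis=-1)

x = np.linspace(-1, 1, 5)
phi = chebyshev_features(x, 4)            # polynomial feature matrix, shape (5, 5)

# A 'neural block' output: a learnable linear combination of the features
# (weights here are arbitrary placeholders).
w = np.array([0.5, -0.2, 0.3, 0.0, 0.1])
y = phi @ w
```

Since the derivative of each T_k is itself a fixed linear combination of lower-degree Chebyshev polynomials, differentiating the block reduces to multiplying `phi` by a constant operational matrix, which is how such methods avoid repeated symbolic or autodiff passes.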