Optimize TSK Fuzzy Systems for Big Data Regression Problems: Mini-Batch Gradient Descent with Regularization, DropRule and AdaBound (MBGD-RDA)

arXiv.org Artificial Intelligence

Takagi-Sugeno-Kang (TSK) fuzzy systems are very useful machine learning models for regression problems. However, to our knowledge, no efficient and effective training algorithm has been available to enable them to deal with big data. Inspired by the connections between TSK fuzzy systems and neural networks, we extend three powerful neural network optimization techniques, i.e., mini-batch gradient descent, regularization, and AdaBound, to TSK fuzzy systems, and also propose a novel DropRule technique specifically for training TSK fuzzy systems. Our final algorithm, mini-batch gradient descent with regularization, DropRule and AdaBound (MBGD-RDA), achieves fast convergence in training TSK fuzzy systems and superior generalization performance in testing. It can be used to train TSK fuzzy systems on datasets of any size; however, it is particularly useful for big datasets, for which no other efficient training algorithms currently exist.
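The DropRule idea, randomly disabling fuzzy rules during each mini-batch update in analogy with dropout in neural networks, can be sketched as follows. This is a hedged toy illustration rather than the paper's full MBGD-RDA: only the consequent parameters are trained, the antecedents stay fixed, AdaBound is replaced by a constant learning rate, and all names and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy TSK system: D inputs, R rules, Gaussian membership functions,
# first-order (linear) consequents. Antecedents stay fixed for brevity;
# the full MBGD-RDA also trains them and uses AdaBound instead of a
# constant learning rate.
D, R = 2, 4
centers = rng.normal(size=(R, D))   # fixed MF centers (illustrative)
sigmas = np.ones((R, D))            # fixed MF spreads
conseq = np.zeros((R, D + 1))       # trainable consequents [bias, w1..wD]

def firing(X):
    """Unnormalized rule firing levels: product of Gaussian memberships."""
    diff = X[:, None, :] - centers[None, :, :]
    return np.exp(-0.5 * np.sum((diff / sigmas) ** 2, axis=2))  # (N, R)

# Synthetic regression data
X = rng.uniform(-1, 1, size=(256, D))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1]

lr, lam, batch, drop_p = 0.1, 1e-4, 32, 0.5
for epoch in range(100):
    order = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        b = order[s:s + batch]
        f = firing(X[b])
        mask = rng.random(R) > drop_p        # DropRule: randomly drop rules
        fn = f * mask
        fn = fn / (fn.sum(axis=1, keepdims=True) + 1e-12)  # renormalize
        Xb = np.hstack([np.ones((len(b), 1)), X[b]])
        err = np.sum(fn * (Xb @ conseq.T), axis=1) - y[b]
        # L2-regularized MSE gradient w.r.t. each rule's consequent vector
        grad = (fn * err[:, None]).T @ Xb / len(b) + lam * conseq
        conseq -= lr * grad

# Inference uses all rules (no DropRule); normalized firing levels keep the
# output scale consistent between training and testing.
f = firing(X)
fn = f / f.sum(axis=1, keepdims=True)
Xb = np.hstack([np.ones((len(X), 1)), X])
mse = np.mean((np.sum(fn * (Xb @ conseq.T), axis=1) - y) ** 2)
```

Because the firing levels are renormalized after masking, dropping rules does not change the output scale, so no dropout-style rescaling is needed at test time.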


Backpropagation-Free Learning Method for Correlated Fuzzy Neural Networks

arXiv.org Artificial Intelligence

In this paper, a novel stepwise learning approach is proposed, based on estimating the desired outputs of the premise parts by solving a constrained optimization problem. This approach does not require backpropagating the output error to learn the premise parts' parameters. Instead, the near-best output values of the rules' premise parts are estimated, and the parameters are adjusted to reduce the error between the current premise-part outputs and the estimated desired ones. The proposed method therefore avoids error backpropagation, which leads to vanishing gradients and, consequently, to getting stuck in a local optimum; it also needs no special initialization. This learning method is used to train a new Takagi-Sugeno-Kang (TSK) fuzzy neural network with correlated fuzzy rules that contain many parameters in both the premise and consequent parts. To learn the network parameters, first, a constrained optimization problem is introduced and solved to estimate the desired premise-part output values. Next, the error between these values and the current ones is used to adapt the premise parts' parameters via the gradient-descent (GD) approach. Afterward, the error between the desired and network outputs is used to learn the consequent parts' parameters, also by GD. The proposed paradigm is successfully applied to real-world time-series prediction and regression problems. According to the experimental results, it outperforms other methods while having a more parsimonious structure.


Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis of Economic Systems

arXiv.org Artificial Intelligence

In this paper, we consider approaches to time-series forecasting based on deep neural networks and neuro-fuzzy networks. We also give a short review of research on forecasting with various ANFIS models. Deep learning has proven to be an effective method for making highly accurate predictions from complex data sources, and we propose our own deep learning and neuro-fuzzy network models for this task. Finally, we show the possibility of using these models for data science tasks. The paper also presents an overview of approaches for incorporating rule-based methodology into deep learning neural networks.


Exponentially Weighted l_2 Regularization Strategy in Constructing Reinforced Second-order Fuzzy Rule-based Model

arXiv.org Machine Learning

In conventional Takagi-Sugeno-Kang (TSK)-type fuzzy models, constant or linear functions are usually used as the consequent parts of the fuzzy rules, but they cannot effectively describe the behavior within the local regions defined by the antecedent parts. In this article, a theoretical and practical design methodology is developed to address this problem. First, the information granulation (Fuzzy C-Means) method is applied to capture the structure in the data, split the input space into subspaces, and form the antecedent parts. Second, quadratic polynomials (QPs) are employed as the consequent parts. Compared with constant and linear functions, QPs can describe the input-output behavior within the local regions (subspaces) by refining the relationship between the input and output variables. However, although QPs improve the approximation ability of the model, they can also deteriorate its prediction ability (e.g., through overfitting). To handle this issue, we introduce an exponential weighting approach inspired by the weight function theory encountered in harmonic analysis. More specifically, we adopt exponential functions as the targeted penalty terms of an l2 regularizer (i.e., exponentially weighted l2, ewl_2) to match the proposed reinforced second-order fuzzy rule-based model (RSFRM) properly. The advantage of ewl_2 over ordinary l2 lies in separately identifying and penalizing different types of polynomial terms during coefficient estimation; its results not only alleviate overfitting and prevent the deterioration of generalization ability but also effectively release the prediction potential of the model.
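As a rough illustration of the exponentially weighted l2 idea (penalizing different types of polynomial terms with different strengths), consider a ridge fit on quadratic features where each coefficient's penalty grows exponentially with the degree of its term. This is a hedged sketch, not the paper's RSFRM: the weighting form exp(alpha * degree), the parameter names, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D data with a known quadratic ground truth (illustrative).
x = rng.uniform(-1, 1, size=(200, 1))
y = 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 0] ** 2 + 0.05 * rng.normal(size=200)

# Quadratic polynomial features and the degree of each term.
Phi = np.hstack([np.ones_like(x), x, x ** 2])   # columns: [1, x, x^2]
degrees = np.array([0, 1, 2])

# Exponentially weighted l2: the penalty weight exp(alpha * degree) grows
# with a term's degree, so the higher-order terms that drive overfitting
# are penalized more heavily than the constant and linear terms.
lam, alpha = 1e-2, 1.0
Lam = lam * np.diag(np.exp(alpha * degrees))

# Closed-form weighted-ridge solution: (Phi^T Phi + Lam)^(-1) Phi^T y
coef = np.linalg.solve(Phi.T @ Phi + Lam, Phi.T @ y)
```

With a small base strength lam, the fit still recovers coefficients close to the ground truth, while the exponentially growing weights shrink the quadratic term more than the linear one, mirroring the term-type-specific penalization the abstract describes.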


Heuristic design of fuzzy inference systems: A review of three decades of research

arXiv.org Artificial Intelligence

This paper provides an in-depth review of the optimal design of type-1 and type-2 fuzzy inference systems (FIS) using five well-known computational frameworks: genetic-fuzzy systems (GFS), neuro-fuzzy systems (NFS), hierarchical fuzzy systems (HFS), evolving fuzzy systems (EFS), and multi-objective fuzzy systems (MFS), noting that some of these frameworks are linked to one another. The heuristic design of a GFS uses evolutionary algorithms to optimize both Mamdani-type and Takagi-Sugeno-Kang-type fuzzy systems, whereas an NFS combines the FIS with neural network learning to improve the approximation ability. An HFS combines two or more low-dimensional fuzzy logic units in a hierarchical design to overcome the curse of dimensionality; an EFS addresses data streaming by evolving the system incrementally; and an MFS handles multi-objective trade-offs such as simultaneously maximizing both interpretability and accuracy. This paper offers a synthesis of these dimensions and explores their potentials, challenges, and opportunities in FIS research. The review also examines the complex relations among these dimensions and the possibilities of combining one or more computational frameworks to add another dimension: deep fuzzy systems.