
Fuzzy Logic


Demand Prediction Using Machine Learning Methods and Stacked Generalization

arXiv.org Artificial Intelligence

Supply and demand are two fundamental concepts for sellers and customers. Predicting demand accurately is critical for organizations to plan effectively. In this paper, we propose a new approach for demand prediction on an e-commerce web site. The proposed model differs from earlier models in several ways. The e-commerce web site for which the model is implemented operates a marketplace business model, in which many sellers offer the same product at the same time at different prices. Demand prediction for such a model should therefore consider the prices of the same product offered by competing sellers, along with the features of these sellers. In this study, we first applied different regression algorithms to a specific set of products from one department of one of the most popular online e-commerce companies in Turkey. We then used stacked generalization, also known as stacking ensemble learning, to predict demand. Finally, all approaches are evaluated on a real-world data set obtained from the e-commerce company. The experimental results show that some of the individual machine learning methods produce results nearly as good as those of the stacked generalization method.
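
A minimal sketch of the stacked generalization setup the abstract describes, using scikit-learn's StackingRegressor: several level-0 regressors are trained, and a level-1 meta-learner is fit on their out-of-fold predictions. The synthetic data and the choice of base and meta learners are illustrative assumptions, not the paper's actual features (competing sellers' prices, seller attributes) or model configuration.

```python
# Hedged sketch: stacking ensemble for a demand-like regression task.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for (own price, competitor prices, seller features, ...) -> demand.
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0 learners: their out-of-fold predictions become the meta-learner's inputs.
base_learners = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbr", GradientBoostingRegressor(random_state=0)),
    ("ridge", Ridge(alpha=1.0)),
]

# Level-1 (meta) learner combines the base predictions via cross-validated stacking.
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge(alpha=1.0), cv=5)
stack.fit(X_train, y_train)
print("stacked MAE:", mean_absolute_error(y_test, stack.predict(X_test)))
```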


Quantifying Uncertainty in Risk Assessment using Fuzzy Theory

arXiv.org Artificial Intelligence

Risk specialists are trying to understand risk better and use complex models for risk assessment, while many risks are not yet well understood. The lack of empirical data and complex causal and outcome relationships make it difficult to estimate the degree to which certain risk types are exposed. Traditional risk models are based on classical set theory. In comparison, fuzzy logic models are built on fuzzy set theory and are useful for analyzing risks with insufficient knowledge or inaccurate data. Fuzzy logic systems help make large-scale risk management frameworks simpler. For risks that do not have an appropriate probability model, a fuzzy logic system can help model the cause-and-effect relationships, assess the level of risk exposure, rank key risks in a consistent way, and take into account available data and experts' opinions. Moreover, in fuzzy logic systems, rules explicitly describe the connections, dependencies, and relationships between model factors. This can help identify risk mitigation solutions: resources can be directed to risks with very high levels of exposure and relatively low hedging costs. Fuzzy set and fuzzy logic models can be combined with Bayesian and other pattern recognition and decision models, including artificial neural networks and decision tree models. These combined models have the potential to solve difficult risk assessment problems. This research paper explores areas in which fuzzy logic models can be used to improve risk assessment and risk decision making, and discusses the methodology, framework, and process of using fuzzy logic systems in risk assessment.
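
A minimal, self-contained sketch of the kind of rule-based fuzzy risk model the abstract describes: two hypothetical inputs (likelihood and impact, both on a 0-10 scale), a handful of Mamdani-style rules, and centroid defuzzification of a "risk exposure" output. The membership functions and rules below are illustrative assumptions, not a standard or the paper's framework.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-12), 0, 1)
    fall = np.clip((d - x) / max(d - c, 1e-12), 0, 1)
    return np.minimum(rise, fall)

universe = np.linspace(0, 10, 1001)            # output universe for "risk exposure"
low  = lambda x: trap(x, -1, 0, 2, 5)          # shoulders extend past 0-10 on purpose
med  = lambda x: trap(x, 2.5, 5, 5, 7.5)
high = lambda x: trap(x, 5, 8, 10, 11)

def risk_exposure(likelihood, impact):
    # Rule strengths (min as AND, max as OR):
    r_low  = min(low(likelihood),  low(impact))    # IF likelihood low AND impact low  THEN exposure low
    r_med  = min(med(likelihood),  med(impact))    # IF likelihood med AND impact med  THEN exposure med
    r_high = max(high(likelihood), high(impact))   # IF likelihood high OR impact high THEN exposure high

    # Clip each output set by its rule strength, aggregate with max, then take the centroid.
    aggregated = np.maximum.reduce([
        np.minimum(r_low,  low(universe)),
        np.minimum(r_med,  med(universe)),
        np.minimum(r_high, high(universe)),
    ])
    return float((universe * aggregated).sum() / (aggregated.sum() + 1e-12))

print(risk_exposure(likelihood=7.0, impact=8.5))   # high exposure
print(risk_exposure(likelihood=2.0, impact=1.5))   # low exposure
```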


Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations

arXiv.org Artificial Intelligence

Many real-world applications involve black-box optimization of multiple objectives using continuous function approximations that trade off evaluation accuracy against resource cost. For example, in rocket launching research, we need to find designs that trade off return-time and angular distance using continuous-fidelity simulators (e.g., varying a tolerance parameter to trade off simulation time and accuracy) for design evaluations. The goal is to approximate the optimal Pareto set while minimizing the cost of evaluations. In this paper, we propose a novel approach, referred to as Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA), to solve this problem. The key idea is to select the sequence of inputs and function approximations for the multiple objectives that maximizes the information gained per unit cost about the optimal Pareto front. Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
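
A toy sketch of two ingredients the abstract mentions: extracting the non-dominated (Pareto) set of evaluated designs for a minimization problem, and ranking candidate (input, fidelity) queries by an estimated information gain per unit cost. The gain estimates and fidelity costs below are placeholders; iMOCA derives them from an entropy-based criterion on the Pareto front, which is not reproduced here.

```python
import numpy as np

def pareto_mask(F):
    """F: (n, m) objective values to minimize. Returns a boolean mask of the non-dominated points."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # A point is dropped if any other point is no worse in all objectives and strictly better in one.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

def select_query(candidates, gain_estimates, fidelity_costs):
    """Pick the (input, fidelity) pair with the best gain-per-unit-cost ratio."""
    scores = np.asarray(gain_estimates) / np.asarray(fidelity_costs)
    return candidates[int(np.argmax(scores))]

# Example: six evaluated designs with two objectives (return-time, angular distance).
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.5], [5.0, 5.0]])
print("Pareto set:\n", F[pareto_mask(F)])

# Example: three candidate queries at different fidelities (cost grows with fidelity).
candidates = [("x1", "low"), ("x1", "high"), ("x2", "high")]
print(select_query(candidates, gain_estimates=[0.4, 0.9, 0.7], fidelity_costs=[1.0, 5.0, 5.0]))
```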


A Review of Visual Descriptors and Classification Techniques Used in Leaf Species Identification

arXiv.org Artificial Intelligence

Plants are fundamentally important to life. Key research areas in plant science include plant species identification, weed classification using hyperspectral images, monitoring plant health and tracing leaf growth, and the semantic interpretation of leaf information. Botanists easily identify plant species by discriminating between the shape of the leaf, tip, base, leaf margin and leaf vein, as well as the texture of the leaf and the arrangement of leaflets of compound leaves. Because of the increasing demand for experts and growing calls to preserve biodiversity, there is a need for intelligent systems that recognize and characterize leaves in order to scrutinize a particular species, the diseases that affect it, the pattern of leaf growth, and so on. We review several image processing methods for feature extraction from leaves, given that feature extraction is a crucial technique in computer vision. Because computers cannot interpret images directly, images must be converted into features by analysing their shapes, colours, textures and moments. Images that look the same may still differ because of geometric and photometric variations. In our study, we also discuss several machine learning classifiers for the analysis of different species of leaves.
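
A minimal sketch of the kind of hand-crafted descriptors such reviews cover: shape (Hu moments of a binary leaf mask), colour (per-channel histograms), and a crude texture statistic. The synthetic elliptical "leaf" below is a stand-in for a real segmented leaf image; in practice the resulting vectors would be fed to a classifier such as an SVM or k-NN, and richer texture descriptors (GLCM, LBP) would typically be used.

```python
import cv2
import numpy as np

# Synthetic stand-in: a 200x200 image with an elliptical green "leaf".
img = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.ellipse(img, (100, 100), (70, 35), 30, 0, 360, (20, 160, 40), -1)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)

# Shape: 7 Hu moments, log-scaled for numerical stability.
hu = cv2.HuMoments(cv2.moments(mask)).flatten()
hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Colour: a coarse per-channel histogram restricted to the leaf region.
colour_hist = np.concatenate(
    [cv2.calcHist([img], [c], mask, [8], [0, 256]).flatten() for c in range(3)])
colour_hist /= colour_hist.sum() + 1e-12

# Texture: mean and standard deviation of intensity inside the leaf (a crude proxy).
leaf_pixels = gray[mask > 0]
texture = np.array([leaf_pixels.mean(), leaf_pixels.std()])

feature_vector = np.concatenate([hu, colour_hist, texture])
print(feature_vector.shape)   # (7 + 24 + 2,) = (33,)
```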


Policy Gradient Reinforcement Learning for Policy Represented by Fuzzy Rules: Application to Simulations of Speed Control of an Automobile

arXiv.org Artificial Intelligence

A method fusing fuzzy inference and policy gradient reinforcement learning has been proposed that directly learns the parameters of a policy function represented by weighted fuzzy rules so as to maximize the expected reward per episode. A previous study applied this method to a speed-control task for an automobile and obtained correct policies, some of which control the speed of the automobile appropriately, while many others generate undesirable oscillation of the speed. In general, a policy that causes sudden changes or oscillation in the output value is undesirable, and in many cases a policy giving smooth changes in the output value over time is preferred. In this paper, we propose a fusion method whose objective function introduces defuzzification based on a stochastically weighted center-of-gravity model and a constraint term for smoothness of the change over time, in order to suppress sudden changes in the output value of the fuzzy controller. We then derive the learning rule for this fusion and also examine the effect of the reward function on the fluctuation of the output value. Experiments applying our method to speed control of an automobile confirmed that the proposed method suppresses undesirable fluctuation in the time series of the output value. Moreover, it was also shown that the choice of reward function can adversely affect the learning results.
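
A minimal sketch of two ingredients described above: a fuzzy controller whose output is a center-of-gravity (COG) combination of rule consequents with stochastically perturbed weights, and an episode objective that subtracts a smoothness penalty on the change of the output value. The membership functions, reward, and penalty coefficient are illustrative assumptions; the policy-gradient update that would adjust the rule weights is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def memberships(speed_error):
    """Triangular memberships for a 1-D input (speed error = target - current, in m/s)."""
    def tri(x, a, b, c):
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)
    return np.array([tri(speed_error, -10, -5, 0),   # "too fast"
                     tri(speed_error, -5, 0, 5),     # "about right"
                     tri(speed_error, 0, 5, 10)])    # "too slow"

# Learnable rule consequents (accelerations) and rule weights.
consequents = np.array([-1.0, 0.0, 1.0])
weights = np.array([1.0, 1.0, 1.0])

def fuzzy_policy(speed_error, noise_scale=0.1):
    """Stochastically weighted COG defuzzification: perturb the rule weights,
    then take the weighted average of the consequents."""
    mu = memberships(speed_error)
    w = weights * mu * np.exp(noise_scale * rng.standard_normal(3))
    return float(np.dot(w, consequents) / (w.sum() + 1e-12))

def episode_objective(errors, lam=0.5):
    """Tracking reward minus a smoothness penalty on consecutive control outputs,
    which is what suppresses vibration of the output value."""
    actions = [fuzzy_policy(e) for e in errors]
    tracking_reward = -np.mean(np.square(errors))
    smoothness_penalty = lam * np.mean(np.square(np.diff(actions)))
    return tracking_reward - smoothness_penalty

print(episode_objective(errors=np.linspace(6.0, 0.5, 20)))
```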


A Bayesian Approach with Type-2 Student-t Membership Function for T-S Model Identification

arXiv.org Artificial Intelligence

Clustering techniques have proved highly successful for Takagi-Sugeno (T-S) fuzzy model identification. In particular, fuzzy c-regression clustering based on type-2 fuzzy sets has shown remarkable results on non-sparse data, but its performance degrades on sparse data. In this paper, an innovative architecture for the fuzzy c-regression model is presented, and a novel Student-t distribution based membership function is designed for sparse data modelling. To avoid overfitting, we adopt a Bayesian approach that places a Gaussian prior on the regression coefficients. An additional novelty of our approach lies in the type-reduction step, where the final output is computed using the Karnik-Mendel algorithm and the consequent parameters of the model are optimized using stochastic gradient descent. Detailed experiments show that the proposed approach outperforms various state-of-the-art methods on standard datasets.
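
A minimal sketch (not the paper's full pipeline) of two of the pieces mentioned above: a Student-t based membership function, whose heavy tails keep sparse points from receiving near-zero membership, and a Bayesian ridge fit (Gaussian prior on the regression coefficients) for the consequent parameters of one T-S rule. The cluster centre, scale, and degrees of freedom are illustrative assumptions; the type-2 construction and Karnik-Mendel type reduction are not reproduced here.

```python
import numpy as np
from scipy.stats import t as student_t
from sklearn.linear_model import BayesianRidge

def t_membership(x, centre, scale, df=3.0):
    """Membership in [0, 1]: a Student-t density normalized so it peaks at 1."""
    return student_t.pdf(x, df, loc=centre, scale=scale) / student_t.pdf(centre, df, loc=centre, scale=scale)

# Sparse, noisy 1-D data around a hypothetical cluster centre.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-4, 4, size=40))
y = 1.5 * x + 0.5 + rng.normal(scale=0.8, size=x.size)

# Firing strengths of a single rule with centre 0 and scale 1.5.
w = t_membership(x, centre=0.0, scale=1.5)

# Consequent y = a*x + b fitted with a Gaussian prior on (a, b), weighted by membership.
model = BayesianRidge()
model.fit(x.reshape(-1, 1), y, sample_weight=w)
print("consequent slope/intercept:", model.coef_[0], model.intercept_)
```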


Artificial Intelligence Review

#artificialintelligence

On the evaluation and combination of state-of-the-art features in Twitter sentiment analysis. Original Paper, published 27 August 2020.
Nature inspired optimization algorithms or simply variations of metaheuristics?
Electric Charged Particles Optimization and its application to the optimal design of a circular antenna array. Authors: H. R. E. H. Bouchekara. Original Paper, published 20 August 2020.
CHIRPS: Explaining random forest classification. Authors: Mohamed Medhat Gaber, R. Muhammad Atif Azad. Original Paper, published 04 June 2020.
Image classifiers and image deep learning classifiers evolved in detection of Oryza sativa diseases: survey. Authors: N. V. Raja Reddy Goluguri. Editorial Notes, published 28 May 2020.
Novel classes of coverings based multigranulation fuzzy rough sets and corresponding applications to multiple attribute group decision-making. Authors (first, second and last of 4): José Carlos R. Alcantud. Original Paper, published 19 May 2020.


Handling of uncertainty in medical data using machine learning and probability theory techniques: A review of 30 years (1991-2020)

arXiv.org Artificial Intelligence

Understanding data and reaching valid conclusions are of paramount importance in the present era of big data, and machine learning and probability theory methods are widely applied for this purpose across many fields. One critically important yet less explored aspect is how data and model uncertainties are captured and analyzed; proper quantification of uncertainty provides valuable information for optimal decision making. This paper reviews studies conducted over the last 30 years (from 1991 to 2020) on handling uncertainties in medical data using probability theory and machine learning techniques. Medical data is especially prone to uncertainty because of noise, so clean, noise-free data is important for accurate diagnosis, and the sources of noise in medical data must be identified to address this issue. Diagnoses and treatment plans are prescribed on the basis of the data obtained by the physician; hence uncertainty is growing in healthcare, knowledge of how to address it remains limited, and with many sources of uncertainty in medical science we still know little about optimal treatment methods. Our findings indicate that several challenges remain in handling uncertainty in raw medical data and in new models. In this work, we summarize the various methods employed to overcome this problem; notably, the application of novel deep learning techniques to deal with such uncertainties has increased significantly in recent years.
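
A small worked example of the probability-theory side of such reviews: Bayes' rule applied to a noisy diagnostic test. The prevalence, sensitivity, and specificity below are hypothetical numbers chosen only to illustrate how measurement noise propagates into the posterior probability used for decision making.

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# A cleaner test versus a noisier one (lower specificity), same 2% prevalence.
print(posterior_given_positive(0.02, sensitivity=0.95, specificity=0.98))  # ~0.49
print(posterior_given_positive(0.02, sensitivity=0.95, specificity=0.85))  # ~0.11
```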


Distributed Linguistic Representations in Decision Making: Taxonomy, Key Elements and Applications, and Challenges in Data Science and Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

Distributed linguistic representations are powerful tools for modelling the uncertainty and complexity of preference information in linguistic decision making. To provide a comprehensive perspective on the development of distributed linguistic representations in decision making, we present a taxonomy of existing distributed linguistic representations. We then review the key elements of distributed linguistic information processing in decision making, including distance measures, aggregation methods, distributed linguistic preference relations, and distributed linguistic multiple attribute decision making models. Finally, we discuss ongoing challenges and future research directions from the perspective of data science and explainable artificial intelligence.
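
A minimal sketch of the objects such surveys review: a distributed linguistic assessment is a probability distribution over a linguistic term set S = {s_0, ..., s_g}. One simple distance (via the expected term index) and a weighted-average aggregation operator are shown; these are common basic choices used here for illustration, not the only ones the literature covers.

```python
import numpy as np

terms = ["very poor", "poor", "fair", "good", "very good"]   # S = {s_0, ..., s_4}

def expected_index(p):
    """Expected symbolic index of a distributed assessment p over S."""
    p = np.asarray(p, dtype=float)
    return float(np.dot(p, np.arange(len(p))))

def distance(p, q):
    """Normalized distance between two assessments via their expected indices."""
    g = len(terms) - 1
    return abs(expected_index(p) - expected_index(q)) / g

def aggregate(assessments, weights):
    """Weighted-average aggregation of several distributed assessments."""
    A = np.asarray(assessments, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * A).sum(axis=0) / w.sum()

p = [0.0, 0.1, 0.3, 0.5, 0.1]     # mostly "good"
q = [0.2, 0.5, 0.3, 0.0, 0.0]     # mostly "poor"
print(distance(p, q))             # ~0.38
print(aggregate([p, q], weights=[0.6, 0.4]))
```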


Fuzzy OWL-BOOST: Learning Fuzzy Concept Inclusions via Real-Valued Boosting

arXiv.org Artificial Intelligence

OWL ontologies are nowadays a quite popular way to describe structured knowledge in terms of classes, relations among classes, and class instances. In this paper, given a target class T of an OWL ontology, we address the problem of learning fuzzy concept inclusion axioms that describe sufficient conditions for being an individual instance of T. To do so, we present Fuzzy OWL-BOOST, which relies on the Real AdaBoost boosting algorithm adapted to the (fuzzy) OWL case. We illustrate its effectiveness by means of an experimental evaluation. An interesting feature is that the learned rules can be represented directly in Fuzzy OWL 2. As a consequence, any Fuzzy OWL 2 reasoner can then be used to automatically determine/classify (and to which degree) whether an individual belongs to the target class T.
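
A generic sketch of the Real AdaBoost scheme (real-valued boosting) that the abstract says Fuzzy OWL-BOOST adapts: each round fits a weak learner on reweighted data, turns its class-probability estimates into a real-valued contribution, and reweights the examples. Decision stumps on synthetic numeric features stand in for the fuzzy-concept weak hypotheses learned over OWL ontologies; this is not the paper's system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=400, n_features=8, random_state=0)
y = 2 * y01 - 1                      # labels in {-1, +1}

n, rounds, eps = len(y), 20, 1e-6
w = np.full(n, 1.0 / n)              # example weights
stages = []                          # fitted weak learners, one per round

for _ in range(rounds):
    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    stump.fit(X, y, sample_weight=w)
    p = np.clip(stump.predict_proba(X)[:, 1], eps, 1 - eps)   # P_w(y = +1 | x)
    f = 0.5 * np.log(p / (1 - p))    # real-valued contribution of this round
    w *= np.exp(-y * f)              # misclassified examples (y * f < 0) gain weight
    w /= w.sum()
    stages.append(stump)

def decision_function(X_new):
    total = np.zeros(len(X_new))
    for stump in stages:
        p = np.clip(stump.predict_proba(X_new)[:, 1], eps, 1 - eps)
        total += 0.5 * np.log(p / (1 - p))
    return total

print("training accuracy:", (np.sign(decision_function(X)) == y).mean())
```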