Fuzzy Logic


Lotfi Zadeh Word Search Puzzle - Fuzzy Logic Artificial Intelligence - Pioneers

#artificialintelligence

The story behind this product: Lotfi Aliasker Zadeh (February 4, 1921 – September 6, 2017) was a mathematician, computer scientist, electrical engineer, artificial intelligence researcher, and professor emeritus of computer science at the University of California, Berkeley. Zadeh was best known for proposing fuzzy mathematics, comprising a family of fuzzy-related concepts: fuzzy sets, fuzzy logic, fuzzy algorithms, fuzzy semantics, fuzzy languages, fuzzy control, fuzzy systems, fuzzy probabilities, fuzzy events, and fuzzy information. On November 30, 2021, Google marked the anniversary of the submission of "Fuzzy Sets," the groundbreaking paper that introduced the world to his innovative mathematical framework called "fuzzy logic," with a Google Doodle. This file contains one page of a Lotfi Zadeh word search puzzle with 30 Lotfi Zadeh-themed words and one page with its solution. The 30 words are hidden in all directions, making the word search challenging.


Lotfi Zadeh: Google doodle honors Azerbaijani-American computer scientist

USATODAY - Tech Top Stories

Google is paying tribute Tuesday to the computer scientist who created the mathematical framework "fuzzy logic." On this day in 1964, Zadeh submitted the paper "Fuzzy Sets," which laid out the concept of "fuzzy logic." "The theory he presented offered an alternative to the rigid 'black and white' parameters of traditional logic and instead allowed for more ambiguous or 'fuzzy' boundaries that more closely mimic the way humans see the world," reads a biography of Zadeh by Google. The theory has been used in various tech applications, including anti-skid algorithms for cars.
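
A minimal sketch of the idea behind "Fuzzy Sets": membership in a set is a degree in [0, 1] rather than a strict true/false. The "tall" example and its thresholds below are illustrative assumptions, not taken from Zadeh's paper.

```python
def crisp_tall(height_cm: float) -> bool:
    """Classical two-valued logic: 'tall' is a hard cutoff."""
    return height_cm >= 180

def fuzzy_tall(height_cm: float) -> float:
    """Fuzzy logic: membership rises gradually from 0 at 160 cm to 1 at 190 cm."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between the two anchors

for h in (155, 170, 178, 185, 195):
    print(f"{h} cm -> crisp: {crisp_tall(h)}, fuzzy: {fuzzy_tall(h):.2f}")
```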


Application of Fuzzy Set Theory to Setup Planning

#artificialintelligence

Computer-aided process planning and computer-aided fixture planning have been widely researched over the last two decades. Most of these computer-aided systems, however, deal only with either process planning or fixture design. A set-up planning system for the machining of prismatic parts on a 3-axis vertical machining centre is proposed. This system formulates set-up plans based on the initial, intermediate, and final states of a part. The system uses fuzzy set representation, along with production rules and object representation.
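
A hedged sketch of how a fuzzy set representation can be combined with production rules in set-up planning. The features, membership values, threshold, and rule are invented for illustration; the paper's actual rule base and part model are not reproduced here.

```python
# Fuzzy membership of each machining feature in the set
# "reachable from the top face" on a 3-axis vertical machining centre.
reachability_top = {"pocket_A": 1.0, "hole_B": 0.8, "slot_C": 0.3}

def rule_group_into_setup(reachability: dict, threshold: float = 0.7) -> list:
    """Production rule: IF a feature is sufficiently reachable in the
    current orientation THEN machine it in this set-up."""
    return [f for f, mu in reachability.items() if mu >= threshold]

setup_1 = rule_group_into_setup(reachability_top)
print("Set-up 1 (top orientation):", setup_1)  # pocket_A, hole_B
print("Deferred to a later set-up:",
      [f for f in reachability_top if f not in setup_1])
```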


Driving Style Recognition Using Interval Type-2 Fuzzy Inference System and Multiple Experts Decision Making

arXiv.org Artificial Intelligence

Driving styles summarize the different driving behaviors that are reflected in the movements of vehicles. These behaviors may indicate a tendency to perform riskier maneuvers, consume more fuel or energy, break traffic rules, or drive carefully. This paper therefore presents a driving style recognition system based on an Interval Type-2 Fuzzy Inference System with Multiple Experts Decision-Making that classifies drivers as calm, moderate, or aggressive. The system takes as input features the longitudinal and lateral kinematic parameters of the vehicle's motion. Type-2 fuzzy sets are more robust than type-1 fuzzy sets when handling noisy data, because their membership functions are themselves fuzzy sets. In addition, a multiple-experts approach can reduce bias and imprecision when building the fuzzy rule base, which stores the knowledge of the fuzzy system. The proposed approach was evaluated using descriptive statistical analysis and compared with clustering algorithms and a type-1 fuzzy inference system. The results show a tendency to associate lower kinematic profiles with the driving styles classified by the type-2 fuzzy inference system than with the other algorithms, which is in line with the more conservative approach adopted in aggregating the experts' opinions.
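
A minimal sketch of why interval type-2 fuzzy sets tolerate noise better: each input receives an interval of membership degrees bounded by a lower and an upper membership function, rather than the single degree of a type-1 set. The triangular shapes and the "aggressive" set parameters below are assumptions for illustration, not the paper's actual rule base.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Type-1 triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def it2_aggressive(accel: float) -> tuple:
    """Interval type-2 membership of lateral acceleration (m/s^2) in
    'aggressive': the [lower, upper] bounds form the footprint of uncertainty."""
    upper = tri(accel, 1.5, 4.0, 6.5)        # wider, more permissive
    lower = 0.6 * tri(accel, 2.0, 4.0, 6.0)  # narrower and scaled down
    return (min(lower, upper), upper)

for a in (2.0, 3.0, 4.0, 5.0):
    lo, hi = it2_aggressive(a)
    print(f"accel={a:.1f} -> membership in [{lo:.2f}, {hi:.2f}]")
```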


Learning Stochastic Shortest Path with Linear Function Approximation

arXiv.org Machine Learning

The Stochastic Shortest Path (SSP) model refers to a class of reinforcement learning (RL) problems in which an agent repeatedly interacts with a stochastic environment and aims to reach a specific goal state while minimizing its cumulative cost. Compared with other popular RL settings such as episodic and infinite-horizon Markov Decision Processes (MDPs), the horizon length in SSP is random, varies across policies, and can potentially be infinite, because the interaction stops only upon arrival at the goal state. The SSP model therefore includes both episodic and infinite-horizon MDPs as special cases, and is comparably more general and of broader applicability. In particular, many goal-oriented real-world problems fit better into the SSP model, such as navigation and the game of Go (Andrychowicz et al., 2017; Nasiriany et al., 2019). In recent years, a line of work has emerged on developing efficient algorithms, and the corresponding analyses, for learning SSP. Most of these works consider the episodic setting, where the interaction between the agent and the environment proceeds in K episodes (Cohen et al., 2020; Tarbouriech et al., 2020a). For tabular SSP models, where the sizes of the action and state spaces are finite, Cohen et al. (2021) developed a finite-horizon reduction algorithm that achieves the minimax regret.
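
A toy sketch of the SSP objective described above: value iteration on a small tabular MDP where the interaction stops only at the goal state and the agent minimizes expected cumulative cost. The two-state chain, costs, and transition probabilities are invented for illustration; this is not the finite-horizon reduction of Cohen et al.

```python
states = [0, 1, "goal"]
actions = ["safe", "risky"]
cost = {"safe": 1.0, "risky": 2.0}
# P[s][a] = list of (next_state, probability) pairs
P = {
    0: {"safe": [(1, 1.0)],                "risky": [("goal", 0.5), (0, 0.5)]},
    1: {"safe": [("goal", 0.9), (1, 0.1)], "risky": [("goal", 1.0)]},
}

V = {0: 0.0, 1: 0.0, "goal": 0.0}  # V(goal) stays 0 by definition
for _ in range(200):               # iterate to (near) convergence
    for s in (0, 1):
        # Bellman backup for SSP: minimize cost-to-go until the goal.
        V[s] = min(cost[a] + sum(p * V[s2] for s2, p in P[s][a])
                   for a in actions)

print({s: round(v, 3) for s, v in V.items()})
```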


Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach

arXiv.org Artificial Intelligence

With the widespread use of machine learning to support decision-making, it is increasingly important to verify and understand the reasons why a particular output is produced. Although post-training feature importance approaches assist this interpretation, there is an overall lack of consensus regarding how feature importance should be quantified, making explanations of model predictions unreliable. In addition, many of these explanations depend on the specific machine learning approach employed and on the subset of data used when calculating feature importance. A possible solution to improve the reliability of explanations is to combine results from multiple feature-importance quantifiers across different machine learning approaches, coupled with re-sampling. Current state-of-the-art ensemble feature importance fusion uses crisp techniques to fuse results from different approaches. There is, however, significant loss of information, as these approaches are not context-aware and reduce several quantifiers to a single crisp output. More importantly, their representation of 'importance' as coefficients is misleading and incomprehensible to end-users and decision makers. Here we show how the use of fuzzy data fusion methods can overcome some of the important limitations of crisp fusion methods.
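
A hedged sketch contrasting crisp fusion of feature-importance scores with a fuzzy-style fusion that preserves context. The scores, the linguistic terms (low/medium/high), and their membership functions are illustrative assumptions; the paper's actual fusion operators are not reproduced here.

```python
# Importance of one feature as reported by three quantifiers on three resamples.
scores = [0.15, 0.45, 0.80, 0.20, 0.50, 0.75, 0.10, 0.55, 0.85]

crisp_fusion = sum(scores) / len(scores)  # a single coefficient; context lost
print(f"crisp fused importance: {crisp_fusion:.2f}")

def memberships(x: float) -> dict:
    """Triangular memberships of an importance score in three linguistic terms."""
    low = max(0.0, 1.0 - x / 0.5)
    high = max(0.0, (x - 0.5) / 0.5)
    medium = max(0.0, 1.0 - abs(x - 0.5) / 0.3)
    return {"low": low, "medium": medium, "high": high}

# Fuzzy-style fusion: average the memberships, so disagreement between
# quantifiers survives as weight on several terms instead of one number.
fused = {t: 0.0 for t in ("low", "medium", "high")}
for s in scores:
    for t, mu in memberships(s).items():
        fused[t] += mu / len(scores)

print({t: round(mu, 2) for t, mu in fused.items()})
```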


On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game

arXiv.org Machine Learning

Achieving sample efficiency in reinforcement learning (RL) requires efficient exploration of the underlying environment. In the offline setting, addressing the exploration challenge amounts to collecting an offline dataset with sufficient coverage. Motivated by this challenge, we study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function. Then, given any extrinsic reward, the agent computes a policy via a planning algorithm using the offline data collected in the exploration phase. Moreover, we tackle this problem in the context of function approximation, leveraging powerful function approximators. Specifically, we propose to explore via an optimistic variant of the value-iteration algorithm incorporating kernel and neural function approximations, adopting the associated exploration bonus as the exploration reward. Moreover, we design exploration and planning algorithms for both single-agent MDPs and zero-sum Markov games and prove that our methods can achieve $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ sample complexity for generating an $\varepsilon$-suboptimal policy or an $\varepsilon$-approximate Nash equilibrium when given an arbitrary extrinsic reward. To the best of our knowledge, we establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
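
A toy sketch of the reward-free exploration idea: during the exploration phase the agent acts on the exploration bonus as if it were the reward, steering itself toward poorly covered states. A simple count-based bonus on a tabular environment stands in here for the paper's kernel/neural bonuses; the random-walk dynamics and the constant beta are invented for the demo.

```python
import math
import random

n_states, n_actions, beta = 4, 2, 1.0
counts = [[1] * n_actions for _ in range(n_states)]  # start at 1 to avoid /0

def bonus(s: int, a: int) -> float:
    """Optimistic exploration reward, shrinking with visitation."""
    return beta / math.sqrt(counts[s][a])

def step(s: int, a: int) -> int:
    """Hypothetical random-walk dynamics, only for this sketch."""
    move = (1 if a == 1 else -1) * random.choice([0, 1])
    return max(0, min(n_states - 1, s + move))

s = 0
for _ in range(500):  # exploration phase: no extrinsic reward is used
    # Greedy w.r.t. the bonus: pick the least-tried action in this state.
    a = max(range(n_actions), key=lambda act: bonus(s, act))
    counts[s][a] += 1
    s = step(s, a)

print("visit counts per (state, action):", counts)
```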


Do We Need Fuzzy Substrates?

#artificialintelligence

Computers are embedded in almost all of our devices, and most of them are digital: information at the low levels is stored as binary. Biology, in contrast, often makes use of analog systems. Take fuzzy logic, for example. Fuzzy logic techniques typically involve the concept of intermediate values between true and false. But you don't need a special computer for fuzzy logic; it's just a program running on a digital computer like any other program.
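
The point above, made concrete: fuzzy logic needs no special substrate. The standard min/max/complement operators over truth degrees in [0, 1] run on any digital computer. The operators below are Zadeh's classic definitions; the truth values are made up for the demo.

```python
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

warm, humid = 0.7, 0.4  # intermediate truth values, not just 0 or 1
print("warm AND humid:", fuzzy_and(warm, humid))  # 0.4
print("warm OR humid: ", fuzzy_or(warm, humid))   # 0.7
print("NOT warm:      ", fuzzy_not(warm))         # ~0.3
```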


Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization

arXiv.org Machine Learning

We consider the problem of black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments), where the goal is to approximate the true Pareto set of solutions while minimizing the total resource cost of experiments. For example, in hardware design optimization, we need to find designs that trade off performance, energy, and area overhead using expensive computational simulations. The key challenge is to select the sequence of experiments that uncovers high-quality solutions using minimal resources. In this paper, we propose a general framework for solving MOO problems based on the principle of output space entropy (OSE) search: select the experiment that maximizes the information gained per unit resource cost about the true Pareto front. We appropriately instantiate the principle of OSE search to derive efficient algorithms for the following four MOO problem settings: 1) the most basic single-fidelity setting, where experiments are expensive and accurate; 2) handling black-box constraints, which cannot be evaluated without performing experiments; 3) the discrete multi-fidelity setting, where experiments can vary in the amount of resources consumed and their evaluation accuracy; and 4) the continuous-fidelity setting, where continuous function approximations result in a huge space of experiments. Experiments on diverse synthetic and real-world benchmarks show that our OSE search based algorithms improve over state-of-the-art methods in terms of both computational efficiency and accuracy of MOO solutions.
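
A hedged sketch of the two ingredients the abstract names: identifying the Pareto set of observed points, and ranking candidate experiments by information gained per unit resource cost. The "information proxy" below is a placeholder stand-in, not the paper's output space entropy computation, and the candidate values are invented.

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_set(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

observed = [(1.0, 5.0), (2.0, 3.0), (4.0, 4.0), (3.0, 1.0)]
print("Pareto set:", pareto_set(observed))  # (4.0, 4.0) is dominated

# OSE-style selection rule: maximize information per unit resource cost.
candidates = [
    {"name": "x1", "info_proxy": 0.9, "cost": 3.0},  # hypothetical values
    {"name": "x2", "info_proxy": 0.5, "cost": 1.0},
]
best = max(candidates, key=lambda c: c["info_proxy"] / c["cost"])
print("next experiment:", best["name"])  # x2: 0.5 bits per unit cost
```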


The Sigma-Max System Induced from Randomness and Fuzziness

arXiv.org Artificial Intelligence

This paper induces probability theory (the sigma system) and possibility theory (the max system) from randomness and fuzziness, respectively, through which the still-immature theory of possibility is expected to be put on a sound foundation. This objective is achieved by addressing three open key issues: a) the lack of clear mathematical definitions of randomness and fuzziness; b) the lack of an intuitive mathematical definition of possibility; and c) the lack of an abstraction procedure leading from the intuitive definitions of probability/possibility to their axiomatic definitions. The last issue in particular involves the question of why the key axiom of "maxitivity" is adopted for the possibility measure. By taking advantage of the properties of the well-defined notions of randomness and fuzziness, we derive the important conclusion that "max" is the only, albeit non-strict, disjunctive operator applicable across the fuzzy event space, and that it is an exact operator for fuzzy feature extraction, ensuring that max inference is an exact mechanism. It is fair to claim that the long-standing lack of consensus on the foundation of possibility theory is thereby resolved, which should facilitate wider adoption of possibility theory in practice and promote the joint prosperity of the two uncertainty theories of probability and possibility.

Randomness and fuzziness are well recognized as two fundamental kinds of uncertainty in the world. How to correctly comprehend these uncertainties and effectively handle them in practice remains an open topic. For modeling random uncertainty, probability theory and its derivative subjects of statistics and stochastic processes are without doubt the classic tool set. Probability theory, which satisfies the key axiom of "additivity" [18, 23], has matured to the point that nearly the whole edifice of the information sciences rests on it, and its applications can be found across a great diversity of communities [22, 29, 41, 42, 52, 53].
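
A small sketch of the contrast the paper builds on: probability measures are additive over disjoint events, while possibility measures satisfy maxitivity, Pos(A or B) = max(Pos(A), Pos(B)). The possibility and probability values below are invented for illustration.

```python
prob = {"rain": 0.2, "snow": 0.1}  # probabilities of two disjoint events
pos = {"rain": 0.7, "snow": 0.4}   # possibility degrees of the same events

# Additivity (sigma system): the measure of a union of disjoint events
# is the sum of their measures.
print("P(rain or snow)   =", prob["rain"] + prob["snow"])    # 0.3

# Maxitivity (max system): the union takes the maximum, not the sum.
print("Pos(rain or snow) =", max(pos["rain"], pos["snow"]))  # 0.7
```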