An Efficient and Accurate Rough Set for Feature Selection, Classification and Knowledge Representation

arXiv.org Artificial Intelligence

This paper presents a strong data mining method based on rough sets that can perform feature selection, classification and knowledge representation at the same time. Rough sets have good interpretability and are a popular approach to feature selection, but low efficiency and low accuracy are the main drawbacks that limit their applicability. In this paper, with respect to accuracy, we first identify the ineffectiveness of rough sets caused by overfitting, especially when processing noisy attributes, and propose a robust measurement for an attribute, called relative importance. We also propose the concept of a "rough concept tree" for knowledge representation and classification. Experimental results on public benchmark data sets show that the proposed framework achieves higher accuracy than seven popular or state-of-the-art feature selection methods.
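
To make the rough-set notions in this abstract concrete: in the classical Pawlak setting, a set of attributes induces equivalence classes over the objects, and an attribute subset is scored by how many objects fall in classes that are pure with respect to the decision label (the dependency degree). The Python sketch below illustrates only that generic dependency computation on a toy table; it is not the authors' relative-importance measure or rough concept tree, and all names and data are hypothetical.

    # Generic Pawlak rough set dependency sketch (illustrative only).
    from collections import defaultdict

    def equivalence_classes(rows, attrs):
        """Group row indices by their values on the chosen attributes."""
        classes = defaultdict(list)
        for i, row in enumerate(rows):
            classes[tuple(row[a] for a in attrs)].append(i)
        return list(classes.values())

    def dependency(rows, attrs, decision):
        """Fraction of objects whose equivalence class is pure in the decision."""
        positive = 0
        for block in equivalence_classes(rows, attrs):
            if len({rows[i][decision] for i in block}) == 1:
                positive += len(block)
        return positive / len(rows)

    # Toy table: columns 0-2 are condition attributes, column 3 is the decision.
    data = [
        ("sunny", "hot",  "high", "no"),
        ("sunny", "hot",  "low",  "yes"),
        ("rainy", "mild", "high", "no"),
        ("rainy", "hot",  "high", "no"),
    ]
    print(dependency(data, attrs=[0, 2], decision=3))  # 1.0: attributes {0, 2} fully determine the label

A typical feature selection loop then greedily adds the attribute that most increases this dependency; the paper's relative-importance measure is proposed as a more robust alternative for scoring attributes in the presence of noise.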


GBRS: A Unified Model of Pawlak Rough Set and Neighborhood Rough Set

arXiv.org Artificial Intelligence

Pawlak rough sets and neighborhood rough sets are the two most common rough set theoretical models. Pawlak rough sets can use equivalence classes to represent knowledge, but they cannot process continuous data; neighborhood rough sets can process continuous data, but they lose the ability to use equivalence classes to represent knowledge. To this end, this paper presents a granular-ball rough set based on granular-ball computing. The granular-ball rough set can simultaneously represent both the Pawlak rough set and the neighborhood rough set, realizing a unified representation of the two. This means the granular-ball rough set can not only deal with continuous data but also use equivalence classes for knowledge representation. In addition, we propose an implementation algorithm for granular-ball rough sets. Experimental results on benchmark datasets demonstrate that, owing to the combination of the robustness and adaptability of granular-ball computing, the learning accuracy of the granular-ball rough set is greatly improved compared with the Pawlak rough set and the traditional neighborhood rough set. The granular-ball rough set also outperforms nine popular or state-of-the-art feature selection methods.
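
To illustrate the neighborhood side of this unification: a neighborhood rough set replaces equivalence classes with delta-neighborhoods in the feature space, so the lower approximation of a class contains the samples whose whole neighborhood carries that label. The sketch below shows only this generic computation; it is not the GBRS algorithm itself, and the threshold, data, and function name are illustrative assumptions.

    # Generic neighborhood rough set lower approximation (not the GBRS algorithm).
    import numpy as np

    def lower_approximation(X, y, target_class, delta=0.3):
        """Indices whose delta-neighborhood lies entirely in `target_class`."""
        lower = []
        for i in range(len(X)):
            dists = np.linalg.norm(X - X[i], axis=1)
            neighborhood = np.where(dists <= delta)[0]
            if np.all(y[neighborhood] == target_class):
                lower.append(i)
        return lower

    X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.75]])
    y = np.array([0, 0, 1, 1])
    print(lower_approximation(X, y, target_class=0))  # -> [0, 1]

In the granular-ball rough set described above, these fixed delta-neighborhoods are, roughly speaking, replaced by adaptively generated granular balls, which is what restores an equivalence-class style of knowledge representation on continuous data.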


Fuzzy Win-Win: A Novel Approach to Quantify Win-Win Using Fuzzy Logic

arXiv.org Artificial Intelligence

The classic win-win has a key flaw: it cannot offer the parties the right amounts of winning, because each party believes it is the winner. In reality, one party may win more than the other. This strategy is not limited to a single product or negotiation; it may be applied to a variety of situations in life. In this paper we present a novel way to measure the win-win situation. The proposed method employs fuzzy logic to create a mathematical model that aids negotiators in quantifying their winning percentages. The model is put to the test on real-life negotiation scenarios such as the Iranian uranium enrichment negotiations, the Iraqi-Jordanian oil deal, and the iron ore negotiations (2005-2009). The presented model has been shown to be a useful tool in practice and can easily be generalized for use in other domains as well.
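
As a toy illustration of how fuzzy logic can turn a party's share of a negotiated outcome into a graded "degree of winning", the sketch below uses simple triangular membership functions. These particular functions, labels, and numbers are assumptions for illustration, not the membership functions used in the paper's model.

    # Illustrative fuzzy "degree of winning"; the membership functions are assumed, not the paper's.
    def triangular(x, a, b, c):
        """Triangular membership function peaking at b on the interval [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def win_degree(share):
        """Map a party's share of the negotiated surplus (0..1) to fuzzy labels."""
        return {
            "low_win":  triangular(share, 0.00, 0.25, 0.50),
            "fair_win": triangular(share, 0.25, 0.50, 0.75),
            "high_win": triangular(share, 0.50, 0.75, 1.00),
        }

    print(win_degree(0.65))  # approximately: low_win 0.0, fair_win 0.4, high_win 0.6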


Lotfi Zadeh Word Search Puzzle - Fuzzy Logic Artificial Intelligence - Pioneers

#artificialintelligence

The story behind this product: Lotfi Aliasker Zadeh (February 4, 1921 – September 6, 2017) was a mathematician, computer scientist, electrical engineer, artificial intelligence researcher and professor emeritus of computer science at the University of California, Berkeley. Zadeh was best known for proposing fuzzy mathematics, consisting of these fuzzy-related concepts: fuzzy sets, fuzzy logic, fuzzy algorithms, fuzzy semantics, fuzzy languages, fuzzy control, fuzzy systems, fuzzy probabilities, fuzzy events, and fuzzy information. On November 30, 2021, Google celebrated the submission of "Fuzzy Sets," the groundbreaking paper that introduced the world to his innovative mathematical framework called "fuzzy logic," with a Google Doodle. This file contains 1 page of a Lotfi Zadeh word search puzzle with 30 Lotfi Zadeh-themed words and 1 page with its solution. The 30 words are hidden in all directions, making the word search challenging.


Lotfi Zadeh: Google doodle honors Azerbaijani-American computer scientist

USATODAY - Tech Top Stories

Google is paying tribute Tuesday to the computer scientist who created the mathematical framework of "fuzzy logic." On this day in 1964, Zadeh submitted the paper "Fuzzy Sets," which laid out the concept of "fuzzy logic." "The theory he presented offered an alternative to the rigid 'black and white' parameters of traditional logic and instead allowed for more ambiguous or 'fuzzy' boundaries that more closely mimic the way humans see the world," reads a biography of Zadeh by Google. The theory has been used in various tech applications, including anti-skid algorithms for cars.


Learning Stochastic Shortest Path with Linear Function Approximation

arXiv.org Machine Learning

The Stochastic Shortest Path (SSP) model refers to a type of reinforcement learning (RL) problem in which an agent repeatedly interacts with a stochastic environment and aims to reach a specific goal state while minimizing the cumulative cost. Compared with other popular RL settings such as episodic and infinite-horizon Markov Decision Processes (MDPs), the horizon length in SSP is random, varies across policies, and can potentially be infinite, because the interaction stops only upon arriving at the goal state. Therefore, the SSP model includes both episodic and infinite-horizon MDPs as special cases, and is comparatively more general and of broader applicability. In particular, many goal-oriented real-world problems fit the SSP model better, such as navigation and the game of Go (Andrychowicz et al., 2017; Nasiriany et al., 2019). In recent years, a line of work has emerged on developing efficient algorithms, and the corresponding analyses, for learning SSP. Most of these works consider the episodic setting, where the interaction between the agent and the environment proceeds in K episodes (Cohen et al., 2020; Tarbouriech et al., 2020a). For tabular SSP models, where the sizes of the action and state spaces are finite, Cohen et al. (2021) developed a finite-horizon reduction algorithm that achieves the minimax
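
For orientation, the SSP objective in its standard form (a generic formulation, not quoted from this paper) is $\min_{\pi} \mathbb{E}\big[\sum_{t=1}^{T_{\pi}} c(s_t, a_t) \mid s_1 = s_{\mathrm{init}}, \pi\big]$, where $c$ is the per-step cost and $T_{\pi}$ is the random, possibly infinite, time at which the goal state is first reached under policy $\pi$; it is this random hitting time, rather than a fixed horizon, that distinguishes SSP from the episodic setting discussed above.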


Do We Need Fuzzy Substrates?

#artificialintelligence

Computers are embedded in almost all of our devices, and most of them are digital. At the low levels, information is stored as binary. Biology, in contrast, often makes use of analog systems. Take fuzzy logic, for example. Fuzzy logic techniques typically involve the concept of intermediate values between true and false. But you don't need a special computer for fuzzy logic -- it's just a program running on a digital computer like any other program.
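
The point that fuzzy logic needs no special substrate is easy to make concrete: graded truth values in [0, 1] and the standard Zadeh connectives are a few lines of ordinary code on any digital machine. A minimal sketch (the example propositions and values are invented for illustration):

    # Standard fuzzy connectives over truth values in [0, 1] (Zadeh min/max semantics).
    def fuzzy_and(a, b): return min(a, b)
    def fuzzy_or(a, b):  return max(a, b)
    def fuzzy_not(a):    return 1.0 - a

    warm  = 0.7   # "the room is warm" is 70% true
    humid = 0.4   # "the room is humid" is 40% true
    print(fuzzy_and(warm, humid))            # 0.4
    print(fuzzy_or(warm, fuzzy_not(humid)))  # 0.7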


Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation

arXiv.org Machine Learning

We study model-based reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs). In this setting, the agent works in two phases. In the exploration phase, the agent interacts with the environment and collects samples without the reward. In the planning phase, the agent is given a specific reward function and uses the samples collected in the exploration phase to learn a good policy. We propose a new provably efficient algorithm, called UCRL-RFE, under the Linear Mixture MDP assumption, where the transition probability kernel of the MDP can be parameterized by a linear function over certain feature mappings defined on the triplet of state, action, and next state. We show that to obtain an $\epsilon$-optimal policy for an arbitrary reward function, UCRL-RFE needs to sample at most $\tilde O(H^5d^2\epsilon^{-2})$ episodes during the exploration phase. Here, $H$ is the length of an episode and $d$ is the dimension of the feature mapping. We also propose a variant of UCRL-RFE using a Bernstein-type bonus and show that it needs to sample at most $\tilde O(H^4d(H + d)\epsilon^{-2})$ episodes to achieve an $\epsilon$-optimal policy. By constructing a special class of linear mixture MDPs, we also prove that any reward-free algorithm needs to sample at least $\tilde \Omega(H^2d\epsilon^{-2})$ episodes to obtain an $\epsilon$-optimal policy. Our upper bound matches the lower bound in terms of the dependence on $\epsilon$, and in the dependence on $d$ if $H \ge d$.
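
For readers unfamiliar with the assumption, a linear mixture MDP is usually defined by requiring the transition kernel to satisfy $\mathbb{P}(s' \mid s, a) = \langle \phi(s' \mid s, a), \theta^* \rangle$ for a known feature map $\phi: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^d$ and an unknown parameter vector $\theta^* \in \mathbb{R}^d$; this is the standard formulation of the setting, though the notation here may differ slightly from the paper's.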


The application of artificial intelligence in software engineering: a review challenging conventional wisdom

arXiv.org Artificial Intelligence

The field of artificial intelligence (AI) is witnessing a recent upsurge in research, tool development, and deployment of applications. Multiple software companies are shifting their focus to developing intelligent systems, and many others are deploying AI paradigms in their existing processes. In parallel, the academic research community is injecting AI paradigms to provide solutions to traditional engineering problems. Similarly, AI has evidently proved useful to software engineering (SE). When one observes the SE phases (requirements, design, development, testing, release, and maintenance), it becomes clear that multiple AI paradigms (such as neural networks, machine learning, knowledge-based systems, natural language processing) could be applied to improve the process and eliminate many of the major challenges that the SE field has been facing. This survey chapter is a review of the most commonplace methods of AI applied to SE. The review covers methods published between 1975 and 2017: 46 major AI-driven methods are found for the requirements phase, 19 for design, 15 for development, 68 for testing, and 15 for release and maintenance. The purpose of this chapter is threefold: firstly, to answer the following questions: is there sufficient intelligence in the SE lifecycle? What does applying AI to SE entail? Secondly, to measure, formalize, and evaluate the overlap of SE phases and AI disciplines. Lastly, this chapter aims to raise serious questions challenging the current conventional wisdom (i.e., the status quo) of the state of the art, craft a call for action, and redefine the path forward.


Capturing the temporal constraints of gradual patterns

arXiv.org Artificial Intelligence

Gradual pattern mining allows for the extraction of attribute correlations through gradual rules such as "the more X, the more Y". Such correlations are useful in identifying and isolating relationships among attributes that may not be obvious from quick scans of a data set. For instance, a researcher may apply gradual pattern mining to determine which attributes of a data set exhibit unfamiliar correlations in order to isolate them for deeper exploration or analysis. In this work, we propose an ant colony optimization technique, which uses a popular probabilistic approach that mimics the behavior of biological ants searching for the shortest path to food in order to solve combinatorial problems. In our second contribution, we extend an existing gradual pattern mining technique to allow for the extraction of gradual patterns together with an approximated temporal lag between the affected gradual item sets. Such a pattern is referred to as a fuzzy-temporal gradual pattern, and it may take the form: "the more X, the more Y, almost 3 months later". In our third contribution, we propose a data crossing model that allows for the integration of most gradual pattern mining algorithm implementations into a Cloud platform. This contribution is motivated by the proliferation of IoT applications in almost every area of our society, which comes with the provision of large-scale time-series data from different sources.
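
To make the notion of a gradual rule concrete: "the more X, the more Y" is commonly scored by the fraction of object pairs that move in the same direction on both attributes. The sketch below shows only that generic pairwise support; it is not the ant colony optimization or fuzzy-temporal extension contributed by this work, and the data are invented for illustration.

    # Generic support of the gradual rule "the more X, the more Y":
    # the fraction of object pairs that are concordant on both attributes.
    from itertools import combinations

    def gradual_support(x, y):
        pairs = list(combinations(range(len(x)), 2))
        concordant = sum(
            1 for i, j in pairs
            if (x[i] - x[j]) * (y[i] - y[j]) > 0  # both increase or both decrease
        )
        return concordant / len(pairs)

    age    = [25, 32, 40, 51, 60]
    salary = [30, 38, 45, 44, 70]
    print(gradual_support(age, salary))  # 0.9: "the more age, the more salary" holds for 9 of 10 pairs

The temporal variant described above would additionally estimate a lag between the attribute series (e.g., "almost 3 months later"), with the fuzzy part expressing that lag as an approximate rather than exact offset.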