
A Brief Introduction to Fundamentals of Machine Learning

#artificialintelligence

The data adventure, which began with the concept of data mining, has developed continuously as new algorithms have been introduced. There are many applicable algorithms in AI, which is now actively used in marketing, health, agriculture, space, and autonomous vehicle production. Data mining is divided into different models according to the fields in which it is used. These models can be grouped under four main headings: the value estimation model, the database clustering model, link analysis, and deviation detection.
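As a rough illustration of one of these headings, the database clustering model, the sketch below segments a handful of made-up customer records with scikit-learn's KMeans; the feature names and data are hypothetical, not drawn from the article.

```python
# Minimal sketch of the database clustering (segmentation) model using
# scikit-learn's KMeans. The customer features are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [annual_spend, visits_per_month] for one hypothetical customer.
customers = np.array([
    [200.0, 1], [250.0, 2], [1800.0, 8],
    [2100.0, 9], [50.0, 1], [1950.0, 7],
])

# Group the customers into two segments without using any labels.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = model.fit_predict(customers)
print(segments)                 # segment id assigned to each customer
print(model.cluster_centers_)   # centroid of each segment
```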


5 Main Types of Machine Learning Systems

#artificialintelligence

Supervised learning is the most common type of machine learning, and most ML problems we encounter fall into this category. As the name implies, a supervised learning algorithm is trained on input data together with some form of guidance that we call labels. Labels, also known as targets, act as a description of the input data. That said, there are other advanced tasks that do not appear to fall under supervised learning at first glance but in fact do.
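As a minimal sketch of that input-plus-labels setup, the example below trains a classifier on a labeled dataset; the choice of dataset and model is an assumption for illustration, not something the article prescribes.

```python
# Minimal sketch of supervised learning: inputs paired with labels (targets).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)      # y holds the labels describing each input
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)  # the labels guide the fit
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```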


Game of GANs: Game Theoretical Models for Generative Adversarial Networks

arXiv.org Artificial Intelligence

Generative Adversarial Networks, a promising research direction in the AI community, have recently attracted considerable attention due to their ability to generate high-quality, realistic data. A GAN is a competitive game between two neural networks trained in an adversarial manner to reach a Nash equilibrium. Despite the improvements achieved in GANs in recent years, several issues remain to be solved, and how to tackle them continues to drive research interest. This paper reviews literature that leverages game theory in GANs and addresses how game models can relieve specific generative models' challenges and improve the GAN's performance. In particular, we first review some preliminaries, including the basic GAN model and some game-theoretic background. After that, we present our taxonomy to summarize the state-of-the-art solutions into three significant categories: modified game model, modified architecture, and modified learning method. The classification is based on the modifications made to the basic model by the proposed approaches from the game-theoretic perspective. We further classify each category into several subcategories. Following the proposed taxonomy, we explore the main objective of each class and review the recent work in each group. Finally, we discuss the remaining challenges in this field and present potential future research topics.
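For readers new to the two-player game referenced in this abstract, the sketch below shows the basic GAN setup as alternating gradient updates on the standard minimax objective min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]. The network sizes, learning rates, and toy data are assumptions for illustration, not taken from the survey.

```python
# Minimal sketch of the basic GAN game: generator G and discriminator D
# play a two-player adversarial game with alternating gradient updates.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0          # stand-in for a batch of real data
for _ in range(100):
    z = torch.randn(64, 16)
    fake = G(z)

    # Discriminator step: push real samples toward 1 and fake samples toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (non-saturating form): try to make D label fakes as real.
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At the idealized Nash equilibrium of this game, the generator's distribution matches the data and the discriminator can do no better than chance.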


Illuminating Mario Scenes in the Latent Space of a Generative Adversarial Network

arXiv.org Artificial Intelligence

Recent developments in machine learning techniques have allowed the automatic generation of video game levels that are stylistically similar to human-designed examples. While the output of machine learning models such as generative adversarial networks (GANs) is notoriously hard to control, the recently proposed latent variable evolution (LVE) technique searches the space of GAN parameters to generate outputs that optimize some objective performance metric, such as level playability. However, the question remains of how to automatically generate a diverse range of high-quality solutions based on a prespecified set of desired characteristics. We introduce a new method called latent space illumination (LSI), which uses state-of-the-art quality diversity algorithms designed to optimize in continuous spaces, i.e., MAP-Elites with a directional variation operator and Covariance Matrix Adaptation MAP-Elites, to effectively search the parameter space of the GAN along a set of multiple level mechanics. We show the performance of LSI algorithms in three experiments in Super Mario Bros., a benchmark domain for procedural content generation. Results suggest that LSI generates sets of Mario levels that are reliably mechanically diverse as well as playable.
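To make the idea of searching a GAN's latent space concrete, the sketch below evolves latent vectors toward a hypothetical playability() objective with a simple (1+lambda) strategy; the real LSI work uses MAP-Elites variants and a trained generator, both of which are only stubbed out here.

```python
# Minimal sketch of latent variable evolution: search a GAN's latent space for
# a vector whose decoded level scores well on some objective. The generator and
# playability metric below are placeholders, not the paper's models.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 32

def generator(z):
    # Stand-in for a trained GAN generator mapping a latent vector to level tiles.
    return (np.tanh(z) > 0).astype(int)

def playability(level):
    # Hypothetical objective; a real metric would run an agent through the level.
    return float(level.sum())

best_z = rng.normal(size=LATENT_DIM)
best_score = playability(generator(best_z))
for _ in range(200):                       # simple (1+lambda) search over latents
    candidates = best_z + 0.3 * rng.normal(size=(8, LATENT_DIM))
    scores = [playability(generator(z)) for z in candidates]
    i = int(np.argmax(scores))
    if scores[i] > best_score:
        best_z, best_score = candidates[i], scores[i]
print("best playability found:", best_score)
```

Quality diversity algorithms such as MAP-Elites extend this loop by keeping an archive of elites across behavioral dimensions (the level mechanics) rather than a single best solution.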


Unsupervisedly Learned Representations: Should the Quest be Over?

arXiv.org Artificial Intelligence

There exists a Classification accuracy gap of about 20% between our best methods of generating Unsupervisedly Learned Representations and the accuracy rates achieved by (naturally Unsupervisedly Learning) humans. We are at least four decades into the search for this class of paradigms, so it may well be that we are looking in the wrong direction. This paper presents a possible solution to this puzzle. We demonstrate that Reinforcement Learning schemes can learn representations, which may be used for Pattern Recognition tasks such as Classification, achieving practically the same accuracy as humans. Our main, modest contribution lies in two observations: (a) when applied to a real-world environment (e.g., nature itself), Reinforcement Learning does not require labels and may thus be considered a natural candidate for the long-sought, accuracy-competitive Unsupervised Learning method; and (b) in contrast, when Reinforcement Learning is applied in a simulated or symbolic-processing environment (e.g., a computer program), it does inherently require labels and should thus generally be classified, with some exceptions, as Supervised Learning. The corollary of these observations is that further search for competitive Unsupervised Learning paradigms that can be trained in simulated environments, like many of those found in research and applications, may be futile.
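The distinction the paper draws, learning signals that come from rewards in an environment rather than from labels, can be pictured with the toy interaction loop below; the environment, the encoder stand-in, and the random placeholder policy are all assumptions for illustration, not the authors' setup.

```python
# Minimal sketch of the paper's observation: in an environment, the learning
# signal is reward from interaction, not labels. The learned representation
# (here a stand-in encoder) could later be reused as features for a classifier.
import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")

def encoder(obs):
    # Placeholder for a representation learned during RL training.
    return np.tanh(obs)

obs, _ = env.reset(seed=0)
total_reward = 0.0
for _ in range(200):
    features = encoder(obs)                   # representation used by the policy
    action = env.action_space.sample()        # placeholder policy for illustration
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward                    # reward, not labels, drives learning
    if terminated or truncated:
        obs, _ = env.reset()
print("return collected without any labels:", total_reward)
```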


A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications

arXiv.org Machine Learning

Generative adversarial networks (GANs) have recently become a hot research topic. GANs have been widely studied since 2014, and a large number of algorithms have been proposed. However, there are few comprehensive studies explaining the connections among different GAN variants and how they have evolved. In this paper, we attempt to provide a review of various GAN methods from the perspectives of algorithms, theory, and applications. Firstly, the motivations, mathematical representations, and structure of most GAN algorithms are introduced in detail. Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning. This paper compares the commonalities and differences of these GAN methods. Secondly, theoretical issues related to GANs are investigated. Thirdly, typical applications of GANs in image processing and computer vision, natural language processing, music, speech and audio, the medical field, and data science are illustrated. Finally, future open research problems for GANs are pointed out.


Generative Adversarial Network Rooms in Generative Graph Grammar Dungeons for The Legend of Zelda

arXiv.org Artificial Intelligence

Generative Adversarial Networks (GANs) have demonstrated their ability to learn patterns in data and produce new exemplars similar to, but different from, their training set in several domains, including video games. However, GANs have a fixed output size, so creating levels of arbitrary size for a dungeon crawling game is difficult. GANs also have trouble encoding semantic requirements that make levels interesting and playable. This paper combines a GAN approach to generating individual rooms with a graph grammar approach to combining rooms into a dungeon. The GAN captures design principles of individual rooms, but the graph grammar organizes rooms into a global layout with a sequence of obstacles determined by a designer. Room data from The Legend of Zelda is used to train the GAN. This approach is validated by a user study, showing that GAN dungeons are as enjoyable to play as levels from the original game and as levels generated with a graph grammar alone. However, GAN dungeons have rooms that are considered more complex, while plain graph-grammar dungeons are considered the least complex and challenging. Only the GAN approach creates an extensive supply of both layouts and rooms, where rooms span the spectrum from those seen in the training set to new creations merging design principles from multiple rooms. Video game developers increase replayability and reduce costs using Procedural Content Generation (PCG [1]). Instead of experiencing the game once, players see new variations on every playthrough. This concept was introduced in Rogue (1980), which procedurally generates new dungeons on every play. PCG is also applied to modern games like Minecraft (2009), where users play on generated landscapes, and No Man's Sky (2016), where procedurally generated worlds contain procedurally generated animals. PCG encourages exploration and increases replayability. An emerging PCG technique is Generative Adversarial Networks (GANs [2]) used to search the latent design space of video game levels, as has been done in Super Mario Bros. [3], Doom [4], an educational game [5], and the General Video Game AI (GVG-AI [6]) adaptation of The Legend of Zelda [7].
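The division of labor in this abstract, grammar for global structure and GAN for local rooms, can be sketched roughly as below; the production rules, room symbols, and gan_generate_room() helper are hypothetical placeholders, not the paper's actual grammar or network.

```python
# Minimal sketch of the hybrid pipeline: a graph grammar decides the dungeon's
# global layout, and a (stubbed) GAN generator fills each node with a room.
import random

# Designer-authored productions: each non-terminal expands into a short
# sequence of obstacles, mirroring the designer-determined obstacle ordering.
RULES = {
    "DUNGEON": [["START", "OBSTACLE", "OBSTACLE", "BOSS"]],
    "OBSTACLE": [["KEY", "LOCK"], ["ENEMY", "PUZZLE"]],
}

def expand(symbol):
    if symbol not in RULES:                  # terminal symbol: one dungeon room
        return [symbol]
    rooms = []
    for child in random.choice(RULES[symbol]):
        rooms.extend(expand(child))
    return rooms

def gan_generate_room(room_type):
    # Placeholder for sampling a latent vector and decoding a Zelda-style room
    # with a trained GAN; here it only returns a tag.
    return f"<GAN room for {room_type}>"

layout = expand("DUNGEON")                        # global structure from the grammar
dungeon = [gan_generate_room(r) for r in layout]  # local room content from the GAN
print(layout)
print(dungeon)
```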


NVIDIA Blog: Supervised Vs. Unsupervised Learning

#artificialintelligence

There are a few different ways to build IKEA furniture. Each will, ideally, lead to a completed couch or chair. But depending on the details, one approach will make more sense than the others. Getting the hang of it? Toss the manual aside and go solo.


Machine Learning Algorithms: 4 Types You Should Know

#artificialintelligence

Machine learning has come a long way from science-fiction fancy to a reliable and versatile business tool that amplifies multiple elements of business operations. Its influence on business performance can be so significant that implementing machine learning algorithms is required to maintain competitiveness in many fields and industries. Implementing machine learning in business operations is a strategic step that requires substantial resources. Therefore, it's important to understand what you want ML to do for your particular business and what perks the different types of ML algorithms bring to the table. In this article, we'll cover the major types of machine learning algorithms, explain the purpose of each, and see what benefits they offer.


GANGs: Generative Adversarial Network Games

arXiv.org Machine Learning

Generative Adversarial Networks (GANs) have become one of the most successful frameworks for unsupervised generative modeling. As GANs are difficult to train, much research has focused on this problem. However, very little of this research has directly exploited game-theoretic techniques. We introduce Generative Adversarial Network Games (GANGs), which explicitly model a finite zero-sum game between a generator ($G$) and classifier ($C$) that use mixed strategies. The size of these games precludes exact solution methods; therefore, we define resource-bounded best responses (RBBRs), and a resource-bounded Nash equilibrium (RB-NE) as a pair of mixed strategies such that neither $G$ nor $C$ can find a better RBBR. The RB-NE solution concept is richer than the notion of 'local Nash equilibria' in that it not only captures failures to escape local optima of gradient descent, but also applies to any approximate best-response computation, including methods with random restarts. To validate our approach, we solve GANGs with the Parallel Nash Memory algorithm, which provably monotonically converges to an RB-NE. We compare our results to standard GAN setups, and demonstrate that our method deals well with typical GAN problems such as mode collapse, partial mode coverage and forgetting.
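To give an intuition for resource-bounded best responses, the sketch below plays a toy finite zero-sum matrix game in which each "oracle" may only inspect a random subset of pure strategies per iteration, growing the players' supports fictitious-play style; the payoff matrix, budget, and update rule are illustrative assumptions and not the paper's Parallel Nash Memory procedure over neural networks.

```python
# Minimal runnable sketch of resource-bounded best responses (RBBRs) in a toy
# finite zero-sum game. PAYOFF holds the generator's payoff; the classifier
# receives its negation. Everything here is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                    # pure strategies 0..N-1 for each player
PAYOFF = rng.normal(size=(N, N))

def rbbr_generator(c_mix, budget):
    # Best response restricted to `budget` randomly sampled candidate strategies.
    cands = rng.choice(N, size=budget, replace=False)
    return int(cands[np.argmax(PAYOFF[cands] @ c_mix)])

def rbbr_classifier(g_mix, budget):
    cands = rng.choice(N, size=budget, replace=False)
    return int(cands[np.argmin(g_mix @ PAYOFF[:, cands])])

# Grow each player's support from a single arbitrary strategy, fictitious-play style.
gens, clfs = [0], [0]
for _ in range(200):
    g_mix = np.bincount(gens, minlength=N) / len(gens)   # empirical mixed strategy
    c_mix = np.bincount(clfs, minlength=N) / len(clfs)
    gens.append(rbbr_generator(c_mix, budget=10))
    clfs.append(rbbr_classifier(g_mix, budget=10))

print("approximate game value under resource-bounded play:",
      float(g_mix @ PAYOFF @ c_mix))
```

When neither bounded oracle can improve on the current mixtures by more than a small margin, the pair of mixtures is, in the paper's terminology, a resource-bounded Nash equilibrium.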