Collaborating Authors

Evolutionary Systems

Top S&P 500 Stocks Based on Genetic Algorithms: Returns up to 75.82% in 3 Months


This top S&P 500 stocks forecast is designed for investors and analysts who need predictions for the whole S&P 500.

Package Name: Top S&P 500 Stocks
Recommended Positions: Long
Forecast Length: 3 Months (4/1/2020 – 7/1/2020)
I Know First Average: 32.03%

The greatest return came from ABMD, at 75.82%. NVDA and ETFC also performed well over this horizon, with returns of 44.61% and 42.98%, respectively. The overall average return of this Top S&P 500 Stocks package was 32.03%, giving investors an 11.47% premium over the S&P 500's return of 20.56% during the same period.

Stock Scanner Based on Genetic Algorithms: Returns up to 486.89% in 3 Months


This stock scanner is part of the Risk-Conscious Package, one of I Know First's equity research solutions. We determine our aggressive stock picks by screening our algorithm daily for higher-volatility stocks that present greater opportunities but are also riskier.

Package Name: Aggressive Stocks Forecast
Recommended Positions: Long
Forecast Length: 3 Months (4/1/2020 – 7/1/2020)
I Know First Average: 100.66%

The highest trade return came from NVAX, at 486.89%. NLS and DPW followed with returns of 259.77% and 213.26% over the 3-month period.

Future of AI Part 5: The Cutting Edge of AI


Edmond de Belamy is a Generative Adversarial Network portrait painting created in 2018 by the Paris-based arts collective Obvious; it sold for $432,500 at Sotheby's in October 2018.

Helping Novices Avoid the Hazards of Data: Leveraging Ontologies to Improve Model Generalization Automatically with Online Data Sources

AI Magazine

The infrastructure and tools necessary for large-scale data analytics, formerly the exclusive purview of experts, are increasingly available. Whereas a knowledgeable data-miner or domain expert can rightly be expected to exercise caution when required (for example, around fallacious conclusions supposedly supported by the data), the nonexpert may benefit from some judicious assistance. This article describes an end-to-end learning framework that allows a novice to create models from data easily by helping structure the model-building process and capturing extended aspects of domain knowledge. By treating the whole modeling process interactively and exploiting high-level knowledge in the form of an ontology, the framework is able to aid the user in a number of ways, including helping to avoid pitfalls such as data dredging. Prudence must be exercised to avoid these hazards, as certain conclusions may only be supported if, for example, there is extra knowledge that gives reason to trust a narrower set of hypotheses.

5 Top Genetic Algorithm Startups StartUs Insights Research Blog


Our Innovation Analysts recently looked into emerging technologies and up-and-coming startups working on artificial intelligence. As many startups are working on a wide variety of applications, we want to share our insights with you. Here, we take a look at 5 promising genetic algorithm startups. For our 5 top picks, we used a data-driven startup scouting approach to identify the most relevant solutions globally. The Global Startup Heat Map below highlights 5 interesting examples out of 111 relevant solutions.

BotHive Home - Platform For Bitcoin Algo Trading


All bots on BotHive are produced via Genetic Algorithms, which utilise the concepts of natural selection to find the best optimisation for trading commonly known market patterns.

Sliding-Window Thompson Sampling for Non-Stationary Settings

Journal of Artificial Intelligence Research

Multi-Armed Bandit (MAB) techniques have been successfully applied to many classes of sequential decision problems in the past decades. However, non-stationary settings -- very common in real-world applications -- have received little attention so far, and theoretical guarantees on the regret are known only for some frequentist algorithms. In this paper, we propose an algorithm, namely Sliding-Window Thompson Sampling (SW-TS), for non-stationary stochastic MAB settings. Our algorithm is based on Thompson Sampling and exploits a sliding-window approach to tackle, in a unified fashion, two different forms of non-stationarity studied separately so far: abruptly changing and smoothly changing. In the former, the reward distributions are constant during sequences of rounds, and their changes may be arbitrary and happen at unknown rounds; in the latter, the reward distributions smoothly evolve over rounds according to unknown dynamics. Under mild assumptions, we provide upper bounds on the dynamic pseudo-regret of SW-TS for the abruptly changing environment, for the smoothly changing one, and for the setting in which both forms of non-stationarity are present. Furthermore, we empirically show that SW-TS dramatically outperforms state-of-the-art algorithms even when the forms of non-stationarity are taken separately, as previously studied in the literature.
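The sliding-window idea described in the abstract can be illustrated with a short sketch. This is not the authors' implementation: it assumes Bernoulli rewards, uses a Beta(1, 1) prior, and builds each arm's posterior from the last `window` rounds only, so old observations are forgotten and the algorithm can track a changing environment. Arm means are passed as functions of the round index to allow non-stationarity.

```python
import random

def sw_thompson_sampling(arms, horizon, window):
    """Sketch of Sliding-Window Thompson Sampling for Bernoulli bandits:
    the Beta posterior for each arm is computed from the rewards
    observed in the last `window` rounds only."""
    history = []  # (arm_index, reward) pairs, oldest first
    total = 0.0
    n_arms = len(arms)
    for t in range(horizon):
        recent = history[-window:]  # sliding window over all rounds
        sampled = []
        for a in range(n_arms):
            wins = sum(r for arm, r in recent if arm == a)
            plays = sum(1 for arm, _ in recent if arm == a)
            # draw one sample from Beta(1 + successes, 1 + failures)
            sampled.append(random.betavariate(1 + wins, 1 + plays - wins))
        chosen = max(range(n_arms), key=sampled.__getitem__)
        reward = 1.0 if random.random() < arms[chosen](t) else 0.0
        history.append((chosen, reward))
        total += reward
    return total
```

An abruptly changing environment, for example, can be modeled by an arm such as `lambda t: 0.9 if t < 500 else 0.2`; the windowed posterior lets the algorithm re-explore after the change point, which a standard Thompson Sampler using the full history would do much more slowly.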

Genetic Algorithm in Python - Part B - Practical Genetic Algorithms Series


Genetic Algorithms (GAs) are members of a general class of optimization algorithms, known as Evolutionary Algorithms (EAs), which simulate a fictional environment based on the theory of evolution to deal with various types of mathematical problems, especially those related to optimization. Genetic Algorithms can also be categorized as a subset of Metaheuristics, which are general-purpose tools and algorithms for solving optimization and unsupervised learning problems. In this series of video tutorials, we are going to learn about Genetic Algorithms, from theory to implementation. After a brief review of the theory behind EAs and GAs, two main versions of genetic algorithms, namely the Binary Genetic Algorithm and the Real-coded Genetic Algorithm, are implemented from scratch and line-by-line, using both Python and MATLAB. This course is instructed by Dr. Mostapha Kalami Heris, who has years of practical work and active teaching in the field of computational intelligence.
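To give a flavour of what the Binary Genetic Algorithm covered in the course looks like, here is a minimal sketch (not the course's own code): tournament selection, one-point crossover, and bit-flip mutation, with the best individual tracked across generations. All parameter values are illustrative defaults.

```python
import random

def binary_ga(fitness, n_bits, pop_size=40, generations=100,
              crossover_rate=0.9, mutation_rate=0.02):
    """Minimal Binary Genetic Algorithm: tournament selection,
    one-point crossover, and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            # tournament selection: keep the fitter of two random picks
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            if random.random() < crossover_rate:
                cut = random.randint(1, n_bits - 1)  # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)  # elitist bookkeeping
    return best

# Example: maximize the number of ones (the classic OneMax problem)
# best = binary_ga(sum, 30)
```

The Real-coded variant follows the same loop but represents individuals as vectors of floats, with arithmetic crossover and Gaussian mutation in place of the bit operations.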

From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm

Artificial Intelligence

One of the key difficulties in using estimation-of-distribution algorithms is choosing the population size appropriately: values that are too small lead to genetic drift, which can cause enormous difficulties. In the regime with no genetic drift, however, the runtime is often roughly proportional to the population size, which renders large population sizes inefficient. Based on a recent quantitative analysis of which population sizes lead to genetic drift, we propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size without spending too much time in situations made unfavorable by genetic drift. We prove an easy mathematical runtime guarantee for this algorithm and conduct an extensive experimental analysis on four classic benchmark problems. The former shows that, under a natural assumption, our algorithm has a performance similar to that obtainable from the best population size. The latter confirms that missing the right population size can be highly detrimental and shows that our algorithm, as well as a previously proposed parameter-less one based on parallel runs, avoids such pitfalls. Comparing the two approaches, ours profits from its ability to abort runs that are likely to be stuck in a genetic drift situation.
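For readers unfamiliar with the compact genetic algorithm (cGA) that the paper builds on, the following is a hedged sketch of the base algorithm only; the smart-restart scheme proposed in the paper is not implemented here. The population is replaced by a frequency vector, and each comparison of two sampled individuals shifts the differing bits by 1/pop_size toward the winner. With a small pop_size this random walk of the frequencies is exactly the genetic drift the abstract describes.

```python
import random

def compact_ga(fitness, n_bits, pop_size=100, max_iters=20000):
    """Minimal compact Genetic Algorithm (cGA): a frequency vector
    replaces the population; each two-individual comparison moves
    differing bits by 1/pop_size toward the winner."""
    p = [0.5] * n_bits        # frequency of a 1 at each position
    step = 1.0 / pop_size
    for _ in range(max_iters):
        x = [1 if random.random() < f else 0 for f in p]
        y = [1 if random.random() < f else 0 for f in p]
        if fitness(y) > fitness(x):
            x, y = y, x       # x is now the winner
        for i in range(n_bits):
            if x[i] != y[i]:
                delta = step if x[i] == 1 else -step
                # keep frequencies inside the borders [1/K, 1 - 1/K]
                p[i] = min(1.0 - step, max(step, p[i] + delta))
        if all(f <= step or f >= 1.0 - step for f in p):
            break             # all frequencies have reached a border
    return [1 if f > 0.5 else 0 for f in p]
```

Running this on OneMax with a very small pop_size (say, 5) makes some frequencies drift to the wrong border before the fitness signal can act, illustrating why the paper's automatic choice of population size, paired with aborting drift-stuck runs, matters.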