An Empirical Game-Theoretic Analysis of Autonomous Cyber-Defence Agents

Palmer, Gregory, Swaby, Luke, Harrold, Daniel J. B., Stewart, Matthew, Hiles, Alex, Willis, Chris, Miles, Ian, Farmer, Sara

arXiv.org Artificial Intelligence

The recent rise in increasingly sophisticated cyber-attacks raises the need for robust and resilient autonomous cyber-defence (ACD) agents. Given the variety of cyber-attack tactics, techniques and procedures (TTPs) employed, learning approaches that can return generalisable policies are desirable. Meanwhile, the assurance of ACD agents remains an open challenge. We address both challenges via an empirical game-theoretic analysis of deep reinforcement learning (DRL) approaches for ACD using the principled double oracle (DO) algorithm. This algorithm relies on adversaries iteratively learning (approximate) best responses against each other's policies; a computationally expensive endeavour for autonomous cyber operations agents. In this work we introduce and evaluate a theoretically sound, potential-based reward shaping approach to expedite this process. In addition, given the increasing number of open-source ACD-DRL approaches, we extend the DO formulation to allow for multiple response oracles (MRO), providing a framework for a holistic evaluation of ACD approaches.


K-Nearest Neighbours - GeeksforGeeks

#artificialintelligence

K-Nearest Neighbours is one of the most basic yet essential classification algorithms in Machine Learning. It belongs to the supervised learning domain and finds intense application in pattern recognition, data mining and intrusion detection. It is widely applicable in real-life scenarios since it is non-parametric, meaning it does not make any underlying assumptions about the distribution of data (as opposed to other algorithms such as GMM, which assume a Gaussian distribution of the given data). We are given some prior data (also called training data), which classifies coordinates into groups identified by an attribute. Now, given another set of data points (also called testing data), the task is to assign each of these points to a group by analyzing the training set.
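The classification step described above can be sketched in a few lines of Python. This is a minimal illustration, not the article's code; the function name `knn_classify` and the toy points and labels are invented for the example.

```python
from collections import Counter
from math import dist

def knn_classify(train, query, k=3):
    """Assign `query` the majority label among its k nearest training points.

    `train` is a list of (point, label) pairs; distance is Euclidean.
    """
    neighbours = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    labels = [label for _, label in neighbours]
    return Counter(labels).most_common(1)[0][0]

# Toy training data: coordinates classified into groups by an attribute.
train = [((1.0, 1.0), "A"), ((1.5, 2.0), "A"), ((2.0, 1.5), "A"),
         ((6.0, 6.0), "B"), ((6.5, 7.0), "B"), ((7.0, 6.5), "B")]

print(knn_classify(train, (2.0, 2.0)))  # the 3 nearest neighbours are all "A"
```

Note that no model is fitted in advance; all computation happens at query time, which is exactly what makes the method non-parametric.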


Boosting Offline Reinforcement Learning via Data Rebalancing

Yue, Yang, Kang, Bingyi, Ma, Xiao, Xu, Zhongwen, Huang, Gao, Yan, Shuicheng

arXiv.org Artificial Intelligence

Offline reinforcement learning (RL) is challenged by the distributional shift between learning policies and datasets. To address this problem, existing works mainly focus on designing sophisticated algorithms to explicitly or implicitly constrain the learned policy to be close to the behavior policy. The constraint applies not only to well-performing actions but also to inferior ones, which limits the performance upper bound of the learned policy. Instead of aligning the densities of two distributions, aligning the supports gives a relaxed constraint while still being able to avoid out-of-distribution actions. Therefore, we propose a simple yet effective method to boost offline RL algorithms based on the observation that resampling a dataset keeps the distribution support unchanged. More specifically, we construct a better behavior policy by resampling each transition in an old dataset according to its episodic return. We dub our method ReD (Return-based Data Rebalance), which can be implemented with less than 10 lines of code change and adds negligible running time. Extensive experiments demonstrate that ReD is effective at boosting offline RL performance and orthogonal to decoupling strategies in long-tailed classification. New state-of-the-art results are achieved on the D4RL benchmark.
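The core resampling idea, drawing transitions with weights derived from their episode's return, can be sketched as follows. The function `red_resample` and its exact weighting scheme (returns shifted to be positive, sampling with replacement) are illustrative assumptions; the paper's precise weighting may differ.

```python
import random

def red_resample(episodes, n_samples, seed=0):
    """Return-based data rebalance (sketch): draw transitions with
    probability proportional to their episode's (shifted) return.

    `episodes` is a list of (episodic_return, transitions) pairs.
    """
    rng = random.Random(seed)
    transitions, weights = [], []
    for ep_return, ep_transitions in episodes:
        for t in ep_transitions:
            transitions.append(t)
            weights.append(ep_return)  # every transition inherits its episode's return
    # Shift so the lowest-return episode gets a tiny but nonzero weight
    # (an illustrative choice, not necessarily the paper's).
    lo = min(weights)
    weights = [w - lo + 1e-6 for w in weights]
    # Sampling with replacement keeps the support of the dataset unchanged.
    return rng.choices(transitions, weights=weights, k=n_samples)

# Toy dataset: a high-return episode and a low-return episode.
episodes = [(100.0, ["h1", "h2"]), (1.0, ["l1", "l2"])]
sample = red_resample(episodes, 1000)
```

Because resampling only re-weights existing transitions, no out-of-distribution action is ever introduced, which is the observation the method rests on.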


Democratizing Social Trading in Digital Banking using ML - K Nearest Neighbors

#artificialintelligence

Social trading is an alternative way of trading that involves observing what other traders are doing, then comparing and copying their techniques and strategies. It allows traders to trade online with the help of others, and some have claimed it shortens the learning curve from novice to experienced trader. By copying trades, traders can learn which strategies work and which do not. Social trading is often used for speculation; from a moral standpoint, speculative practices are viewed negatively, and individuals are instead encouraged to maintain a long-term horizon and avoid short-term speculation. For instance, consider eToro, one of the biggest social trading platforms.