Asynchronous stochastic approximations with asymptotically biased errors and deep multi-agent learning

arXiv.org Machine Learning

Asynchronous stochastic approximations are an important class of model-free algorithms that are readily applicable to multi-agent reinforcement learning (RL) and distributed control applications. When the system size is large, these algorithms are used in conjunction with function approximation. In this paper, we present a complete analysis, covering both stability (almost sure boundedness) and convergence, of asynchronous stochastic approximations with asymptotically bounded, possibly biased errors, under easily verifiable sufficient conditions. As an application, we analyze policy gradient algorithms and the more general value iteration-based algorithms with noise; these are popular reinforcement learning algorithms owing to their simplicity and effectiveness. Specifically, we analyze the asynchronous approximate counterparts of the policy gradient (A2PG) and value iteration (A2VI) schemes. We show that the stability of these algorithms is unaffected when the approximation errors are guaranteed to be asymptotically bounded, although possibly biased. Regarding convergence, A2VI is shown to converge to a fixed point of the perturbed Bellman operator when balanced step-sizes are used, and a relationship between these fixed points and the approximation errors is established. A similar analysis is presented for A2PG.
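The abstract does not include pseudocode, but the kind of update it studies can be illustrated with a minimal sketch. The code below is a hypothetical toy instance of asynchronous approximate value iteration on a small random MDP, not the paper's algorithm: at each step only a randomly selected subset of states is updated (the asynchrony), and the Bellman backup is perturbed by a bounded, possibly biased error term (the error model). The names `bellman_backup`, `a2vi_sketch`, and the specific step-size and bias constants are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy MDP: n states, m actions, random rewards and transitions.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9
R = rng.uniform(0, 1, size=(n_states, n_actions))                  # rewards R(s, a)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P(s' | s, a)

def bellman_backup(V):
    """Exact Bellman optimality operator T applied to V."""
    return np.max(R + gamma * (P @ V), axis=1)

def a2vi_sketch(n_iters=20000, bias=0.05, noise=0.05):
    """Asynchronous approximate value iteration with a bounded, biased error.

    Each iteration updates only a random subset of components (asynchrony),
    and the backup is corrupted by an asymptotically bounded, possibly biased
    perturbation, loosely mirroring the error model the abstract refers to.
    """
    V = np.zeros(n_states)
    for k in range(1, n_iters + 1):
        alpha = k ** -0.6                     # slowly diminishing step-size
        active = rng.random(n_states) < 0.5   # components updated this step
        err = bias + noise * rng.uniform(-1, 1, n_states)  # bounded, biased error
        target = bellman_backup(V) + err
        V[active] += alpha * (target[active] - V[active])
    return V

V_perturbed = a2vi_sketch()
V_unperturbed = a2vi_sketch(bias=0.0, noise=0.0)
print("max deviation from unperturbed fixed point:",
      np.abs(V_perturbed - V_unperturbed).max())
```

Comparing the perturbed and unperturbed runs gives a crude numerical feel for the abstract's claim that the iterates remain bounded and settle near a fixed point of a perturbed Bellman operator, with the offset governed by the size of the approximation errors.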


Key U.S. senators want answers on Equifax's massive cyberbreach

USATODAY - Tech Top Stories

Two key U.S. senators Monday sought detailed information from Equifax about the cyberbreach that potentially compromised the personal information of 143 million U.S. consumers, nearly half of all Americans, at one of the nation's three major credit-reporting agencies. Sen. Orrin Hatch, R-Utah, who chairs the Senate Committee on Finance, and Sen. Ron Wyden, D-Oregon, the panel's ranking minority member, asked the credit-reporting giant for a timeline of the breach, along with details of Equifax's efforts to quantify the scope of the intrusion and limit consumer harm. They also asked whether records related to the IRS, the Social Security Administration and the Centers for Medicare & Medicaid Services were compromised, and questioned Equifax about its cybersecurity protections and testing procedures.


Trump pullout from Iran deal could spark cyber threats -- as Bolton looks to scrap cybersecurity job

FOX News

Cybersecurity experts warn that Iran's government could retaliate against the U.S. with cyberattacks on critical infrastructure and businesses following President Donald Trump's decision to reimpose sanctions and pull out of an Obama-era nuclear deal. Iran is known for waging cyberattacks during international upheaval and has a history of deploying destructive attacks against its perceived enemies. But concerns about possible attacks from Iran come as the Trump administration's national security team is considering eliminating the top White House cybersecurity job. Prior to the nuclear agreement with Iran in 2015, state-run hackers were working to penetrate U.S. chemical, banking and transportation companies, though the efforts largely stopped after the accord was reached. "In the absence of the agreement, that [hacking] restraint could disappear," John Hultquist, director of intelligence analysis at FireEye, told McClatchy DC.


Is this the year 'weaponised' AI bots do battle?

BBC News

Technology of Business has garnered opinions from dozens of companies on what they think will be the dominant global tech trends in 2018. Artificial intelligence (AI) dominates the landscape, closely followed, as ever, by cyber-security. But is AI an enemy or an ally?


Bias in machine learning, and how to stop it - TechRepublic

#artificialintelligence

As AI becomes increasingly interwoven into our lives, fueling our experiences at home, at work, and even on the road, it is imperative that we question how and why our machines do what they do. Although most AI operates in a "black box" in which its decision-making process is hidden (think: why did my GPS re-route me?), transparency in AI is essential to building trust in our systems. But that transparency is not all we want: we also need to ensure that AI decision-making is unbiased in order to fully trust its abilities. The issue of bias in the tech industry is no secret, especially when it comes to the underrepresentation of and pay disparity for women. But bias can also seep into the very data that machine learning models train on, influencing the predictions they make.
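That last point, that bias in training data flows through to a model's predictions, can be made concrete with a small sketch. The example below is hypothetical and not from the article: labels are generated by a historically biased process, and a standard classifier trained on them reproduces the disparity. The variable names, the hiring scenario, and the use of scikit-learn are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical toy data: a group attribute and a skill score.
group = rng.integers(0, 2, n)          # group 0 or group 1
skill = rng.normal(0, 1, n)            # equally distributed across groups

# Historically biased labels: group 1 was hired less often at equal skill.
p_hire = 1 / (1 + np.exp(-(skill - 1.0 * group)))
hired = rng.random(n) < p_hire

# A model trained on those labels learns the historical bias.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Identical skill, different group -> noticeably different predicted outcomes.
same_skill = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(same_skill)[:, 1])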