
Appendix for Softmax Deep Double Deterministic Policy Gradients

Pan, Ling

Neural Information Processing Systems

We demonstrate the smoothing effect of SD3 on the optimization landscape in this section; the experimental setup is the same as in Section 4.1 of the main text for the comparative study of SD2, and experimental details can be found in Section B.2. The performance comparison of SD3 and TD3 is shown in Figure 1(a), where SD3 significantly outperforms TD3, demonstrating the smoothing effect of SD3 over TD3. Hyperparameters of DDPG and SD2 are summarized in Table 1. The analysis assumes that the actor is a local maximizer with respect to the critic.
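The smoothing effect referred to here comes from the softmax (Boltzmann) operator that SD3 applies to sampled action values in place of a hard max. A minimal sketch of that operator follows (an illustrative reimplementation, not the paper's code; the function name and interface are assumptions):

```python
import math

def softmax_value(q_values, beta=1.0):
    """Boltzmann ("softmax") operator over action values.

    Illustrative sketch of the operator SD3 uses to smooth the hard max
    in its Bellman backup: `q_values` holds Q(s, a_i) for a batch of
    sampled actions, and `beta` is the inverse temperature.  As beta ->
    infinity this recovers max(q_values); beta = 0 gives the plain
    average, so beta interpolates between averaging and maximization.
    """
    m = max(q_values)  # subtract the max before exponentiating, for stability
    weights = [math.exp(beta * (q - m)) for q in q_values]
    total = sum(weights)
    return sum(w * q for w, q in zip(weights, q_values)) / total
```

Because the weighted average varies smoothly with the Q-values, the induced value landscape is smoother than one built on a hard max, which is the intuition behind the comparison to TD3 above.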





How Market Volatility Shapes Algorithmic Collusion: A Comparative Analysis of Learning-Based Pricing Algorithms

Sravon, Aheer, Ibrahim, Md., Mazumder, Devdyuti, Aziz, Ridwan Al

arXiv.org Artificial Intelligence

The rapid diffusion of autonomous pricing algorithms has reshaped competitive dynamics in digital marketplaces, raising important economic and policy questions about their potential for collusive behavior. A substantial body of research demonstrates that reinforcement-learning (RL) agents can autonomously coordinate on supracompetitive outcomes even in the absence of explicit communication. Foundational contributions--including the work in [1]--show that algorithmic agents may systematically learn tacitly collusive strategies across multiple market structures, with Q-learning in particular generating prices above competitive levels in Logit, Hotelling, and linear demand environments. These concerns are reinforced by seminal work such as [2], which demonstrates that simple Q-learning agents reliably sustain collusion through structured punishment and reward cycles in repeated pricing games, as well as by [3], who document how algorithmic systems may generate sudden price spikes in response to high-impact, low-probability (HILP) events, unintentionally coordinating on elevated prices. The study of [4] establishes a robust empirical and computational foundation demonstrating that pricing algorithms may autonomously learn to collude. A complementary line of research focuses specifically on Q-learning's capacity to learn collusive equilibria, as documented in [2], [5], and [6]. These findings are consistent with the theoretical properties of Q-learning established by [7], who show that the algorithm incrementally learns long-run discounted value-maximizing strategies in sequential decision problems. More recent studies further reveal that deep reinforcement-learning (deep RL) algorithms--including DDQN and SAC--may also display collusive tendencies. For instance, [8] documents that modern RL systems can coordinate on higher-than-competitive prices under a variety of market configurations.
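The repeated pricing games studied in this literature can be sketched with a toy simulation (a minimal illustration under assumed parameters -- the price grid, cost, and demand split are invented here and do not reproduce any cited paper's environment): two epsilon-greedy Q-learning agents repeatedly set prices, each conditioning on the rival's last price.

```python
import random

PRICES = [1.0, 1.5, 2.0]  # hypothetical price grid (an assumption)
COST = 1.0                # marginal cost (an assumption)

def demand(p_own, p_rival):
    """Toy winner-takes-all demand split by relative price."""
    if p_own < p_rival:
        return 1.0
    if p_own > p_rival:
        return 0.0
    return 0.5

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

def train(episodes=20000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Two tabular Q-learners in a repeated duopoly pricing game.

    Q[i][s][a]: agent i's value of posting price PRICES[a] when the
    rival's last posted price was PRICES[s].
    """
    rng = random.Random(seed)
    n = len(PRICES)
    Q = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    state = [0, 0]  # each agent's last price index
    for _ in range(episodes):
        acts = []
        for i in range(2):
            s = state[1 - i]  # condition on the rival's last price
            if rng.random() < eps:
                a = rng.randrange(n)          # explore
            else:
                a = max(range(n), key=lambda k: Q[i][s][k])  # exploit
            acts.append(a)
        for i in range(2):
            s, a = state[1 - i], acts[i]
            r = profit(PRICES[acts[i]], PRICES[acts[1 - i]])
            s2 = acts[1 - i]  # next state: rival's new price
            Q[i][s][a] += alpha * (r + gamma * max(Q[i][s2]) - Q[i][s][a])
        state = acts
    return Q
```

In this setup, undercutting wins the whole market but shrinks margins, while matching a high price splits a larger profit -- the tension that, in the cited studies, lets Q-learners discover reward-and-punishment cycles sustaining supracompetitive prices.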