Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization
Neural Information Processing Systems
Stochastic compositional optimization (SCO) problems arise in many real-world applications, including risk management, reinforcement learning, and meta-learning. However, most previous methods for SCO require smoothness assumptions on both the outer and inner functions, which limits their applicability to a wider range of problems. In this paper, we study the SCO problem in which both the outer and inner functions are Lipschitz continuous but possibly nonconvex and nonsmooth. In particular, we propose gradient-free stochastic methods for finding the (δ, ϵ)-Goldstein stationary points of such problems with non-asymptotic convergence rates. Our results also yield an improved convergence rate for the convex nonsmooth SCO problem. Furthermore, we conduct numerical experiments to demonstrate the effectiveness of the proposed methods.
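For reference, here is a minimal sketch of the problem class and the stationarity notion the abstract refers to, using the standard definitions from the nonsmooth optimization literature; the paper's exact formulation may differ in detail.

```latex
% Standard SCO formulation and the (\delta,\epsilon)-Goldstein
% stationarity target mentioned in the abstract.
\[
  \min_{x \in \mathbb{R}^d} \; F(x) := f\bigl(g(x)\bigr),
  \qquad
  g(x) = \mathbb{E}_{\xi}\bigl[g_{\xi}(x)\bigr],
  \quad
  f(y) = \mathbb{E}_{\zeta}\bigl[f_{\zeta}(y)\bigr],
\]
where $f$ and $g$ are Lipschitz continuous but possibly nonconvex and nonsmooth.
A point $x$ is a $(\delta,\epsilon)$-Goldstein stationary point if
\[
  \operatorname{dist}\bigl(0, \partial_{\delta} F(x)\bigr) \le \epsilon,
  \qquad
  \partial_{\delta} F(x)
  := \operatorname{conv}\Bigl(\,\textstyle\bigcup_{y \in \mathbb{B}_{\delta}(x)} \partial F(y)\Bigr),
\]
with $\partial F$ the Clarke subdifferential and $\mathbb{B}_{\delta}(x)$ the $\delta$-ball around $x$.
```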
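To illustrate the gradient-free idea, the sketch below applies a generic two-point zeroth-order estimator to a toy nonsmooth compositional objective. This is a standard estimator from the zeroth-order literature, not the authors' algorithm; `g`, `f`, the step size, and `delta` are all placeholder choices for illustration.

```python
import numpy as np

def zo_gradient_estimate(F, x, delta, rng):
    """Two-point zeroth-order estimate of the gradient of the smoothed
    function F_delta(x) = E_u[F(x + delta * u)], u uniform on the ball.
    Standard gradient-free estimator; the paper's compositional
    estimator may differ in detail."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # u uniform on the unit sphere
    return (d / (2.0 * delta)) * (F(x + delta * u) - F(x - delta * u)) * u

# Toy compositional objective F(x) = f(g(x)) with nonsmooth pieces.
# In SCO, g and f would be accessed through stochastic samples.
g = lambda x: np.abs(x).sum(keepdims=True)      # inner map (nonsmooth)
f = lambda y: np.maximum(y - 1.0, 0.0).item()   # outer map (nonsmooth)
F = lambda x: f(g(x))

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
for _ in range(200):  # plain zeroth-order SGD loop, for illustration only
    x -= 0.05 * zo_gradient_estimate(F, x, delta=0.1, rng=rng)
print(F(x))
```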