A Generalized Alternating Method for Bilevel Optimization under the Polyak-Łojasiewicz Condition
Neural Information Processing Systems
Bilevel optimization has recently regained interest owing to its applications in emerging machine learning fields such as hyperparameter optimization, meta-learning, and reinforcement learning. Recent results have shown that simple alternating (implicit) gradient-based algorithms can match the convergence rate of single-level gradient descent (GD) when addressing bilevel problems with a strongly convex lower-level objective. However, it remains unclear whether this result can be generalized to bilevel problems beyond this basic setting. In this paper, we first introduce a stationarity metric for bilevel problems whose nonconvex lower-level objective satisfies the Polyak-Łojasiewicz (PL) condition, which generalizes the existing metric for the strongly convex case.
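For reference, the PL condition mentioned in the abstract is a standard relaxation of strong convexity. A sketch of its usual statement (the symbols $g$, $\mu$, and $g^\star$ are generic placeholders, not notation taken from the paper):

```latex
% A differentiable function g satisfies the PL condition with constant mu > 0 if
% the squared gradient norm dominates the suboptimality gap at every point:
\[
  \frac{1}{2}\,\|\nabla g(x)\|^2 \;\ge\; \mu\,\bigl(g(x) - g^\star\bigr)
  \quad \text{for all } x,
\]
% where g^* = min_x g(x). Strong convexity with parameter mu implies PL,
% but PL functions need not be convex, which is what lets the analysis
% cover nonconvex lower-level objectives.
```

Under this condition, gradient descent on $g$ still converges linearly to the optimal value even though minimizers need not be unique.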