scalarization


Mixture-Model Preference Learning for Many-Objective Bayesian Optimization

Dubey, Manisha, De Peuter, Sebastiaan, Wang, Wanrong, Kaski, Samuel

arXiv.org Machine Learning

Preference-based many-objective optimization faces two obstacles: an expanding space of trade-offs and heterogeneous, context-dependent human value structures. To address this, we propose a Bayesian framework that learns a small set of latent preference archetypes rather than assuming a single fixed utility function, modelling them as components of a Dirichlet-process mixture with uncertainty over both the archetypes and their weights. To query efficiently, we design hybrid queries that target information about (i) mode identity and (ii) within-mode trade-offs. Under mild assumptions, we provide a simple regret guarantee for the resulting mixture-aware Bayesian optimization procedure. Empirically, our method outperforms standard baselines on synthetic and real-world many-objective benchmarks, and mixture-aware diagnostics reveal structure that regret alone fails to capture.
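The core idea of a Dirichlet-process mixture over preference archetypes can be illustrated with a minimal sketch. This is not the authors' implementation: the truncation level, the stick-breaking prior, and the choice of Dirichlet-distributed archetype weight vectors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, num_components, rng):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights."""
    betas = rng.beta(1.0, alpha, size=num_components)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

# Each archetype is a hypothetical weight vector over objectives,
# i.e. one latent utility direction a user population might hold.
num_objectives, num_components = 4, 8
mix_weights = stick_breaking(alpha=1.0, num_components=num_components, rng=rng)
archetypes = rng.dirichlet(np.ones(num_objectives), size=num_components)

# Expected scalarized utility of a candidate's objective vector,
# averaging over uncertainty about which archetype applies.
objectives = rng.random(num_objectives)
utility = mix_weights @ (archetypes @ objectives)
```

Averaging the scalarization over mixture components, rather than committing to one utility function, is what lets an acquisition rule account for uncertainty over both archetype identity and trade-off weights.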







Pareto Multi-Task Learning

Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qing-Fu Zhang, Sam Kwong

Neural Information Processing Systems

The proposed algorithm first formulates a multi-task learning problem as a multi-objective optimization problem, and then decomposes the multi-objective optimization problem into a set of constrained subproblems with different trade-off preferences.
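The decomposition idea can be sketched on a toy problem. Pareto MTL itself solves constrained subproblems; the sketch below substitutes a plain weighted scalarization per preference vector and a grid search in place of gradient-based training, so the objectives, preference values, and solver are all illustrative assumptions.

```python
import numpy as np

def toy_losses(x):
    """Two conflicting toy objectives over a single 1-D parameter."""
    return np.array([x ** 2, (x - 2.0) ** 2])

# Decompose the multi-objective problem into subproblems, one per
# trade-off preference vector; each subproblem minimizes a weighted
# scalarization of the task losses.
preferences = [np.array([w, 1.0 - w]) for w in (0.1, 0.5, 0.9)]

solutions = []
for pref in preferences:
    xs = np.linspace(-1.0, 3.0, 401)  # grid search stands in for SGD
    scores = [pref @ toy_losses(x) for x in xs]
    solutions.append(xs[int(np.argmin(scores))])
```

Each preference vector steers its subproblem toward a different point on the Pareto front: weighting the second loss heavily pulls the solution toward x = 2, and weighting the first pulls it toward x = 0.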




In Defense of the Unitary Scalarization for Deep Multi-Task Learning

Neural Information Processing Systems

While some works show that multi-task networks trained via unitary scalarization exhibit superior performance to independent per-task models [29, 35], others suggest the opposite [30, 54, 58]. However, specialized multi-task optimizers (SMTOs) usually require access to per-task gradients, either with respect to the shared parameters or to the shared representation.
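The contrast between the two training regimes can be sketched with analytic quadratic losses; the task losses and the single shared parameter vector here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def task_grads(theta):
    """Per-task gradients w.r.t. the shared parameters.

    SMTOs need these individually in order to reweight or
    project the per-task gradients before combining them.
    """
    return np.stack([2.0 * (theta - 1.0),   # task 1: ||theta - 1||^2
                     2.0 * (theta + 1.0)])  # task 2: ||theta + 1||^2

theta = np.zeros(3)

# Unitary scalarization: optimize the plain sum of task losses,
# so a single combined gradient (one backward pass) suffices.
unitary_grad = task_grads(theta).sum(axis=0)
```

At this symmetric point the two task gradients cancel exactly, so the unitary-scalarization update is zero; an SMTO would instead inspect the two conflicting per-task gradients separately, which is the extra access requirement the abstract refers to.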