Review for NeurIPS paper: Co-Tuning for Transfer Learning
Neural Information Processing Systems
I am raising my score slightly.

General comments:
1) The process appears to be two-step (as opposed to end-to-end learning): first derive the relationship between source and target labels (a separate network is trained for this), and then, using this relationship, train a target model while requiring its outputs (target labels) to conform to the derived relationship. Are both steps performed on the same target dataset?
2) It is not clear whether the method works when the number of target classes is larger than the number of source classes.
3) The authors state that their setting assumes the source data is unavailable, but their calibration actually requires the source data. Alternatively, the neural network g should in theory be able to learn the calibration, provided it has enough capacity.

Experiments:
1) A reasonable baseline would simply be the (full) source model with one or several new layers added for the target task.
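To make the two-step concern concrete, here is a minimal sketch of my reading of the procedure, with assumed shapes and simulated data (the sizes, the Dirichlet-sampled source-model outputs, and the `co_tuning_loss` helper are all illustrative, not the authors' implementation): step 1 estimates a category relationship p(source label | target label) by averaging source-model predictions per target class, and step 2 adds a loss term pushing the source head toward the "translated" soft source labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 5 source classes, 3 target classes.
n_src, n_tgt, n_samples = 5, 3, 200

# Step 1: estimate the category relationship p(y_s | y_t) by averaging
# (here: simulated) source-model soft predictions over the target
# examples belonging to each target class.
src_probs = rng.dirichlet(np.ones(n_src), size=n_samples)  # source-model outputs
tgt_labels = rng.integers(0, n_tgt, size=n_samples)        # target labels

relationship = np.stack([
    src_probs[tgt_labels == t].mean(axis=0) for t in range(n_tgt)
])  # shape (n_tgt, n_src); each row sums to 1

# Step 2: combine the usual target cross-entropy with a cross-entropy
# that constrains the source head to match the translated soft labels
# relationship[y_t], weighted by lam.
def co_tuning_loss(tgt_logits, src_logits, y_t, lam=1.0):
    def log_softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        return z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce_tgt = -log_softmax(tgt_logits)[np.arange(len(y_t)), y_t].mean()
    soft_src = relationship[y_t]  # translated source labels per example
    ce_src = -(soft_src * log_softmax(src_logits)).sum(axis=1).mean()
    return ce_tgt + lam * ce_src
```

Note that both the relationship estimation and the constrained training above consume the same target dataset, which is exactly the point I would like the authors to clarify.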
Feb-5-2025, 22:06:50 GMT