
Auxiliary Task Reweighting for Minimum-data Learning

Neural Information Processing Systems

Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce. To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task. Assigning and optimizing the importance weights for different auxiliary tasks remains a crucial and largely understudied research question. In this work, we propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task. Specifically, we formulate the weighted likelihood function of auxiliary tasks as a surrogate prior for the main task. By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search.
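The core idea above — adjusting auxiliary task weights so that the weighted auxiliary signal aligns with the main task — can be illustrated with a minimal sketch. This is not the paper's algorithm: as a stand-in for minimizing the divergence between the surrogate and true priors, it simply fits non-negative, softmax-normalized weights so that the weighted combination of auxiliary-task gradients is close (in L2) to the main-task gradient. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def reweight_aux_tasks(grad_main, grads_aux, lr=0.5, steps=200):
    """Illustrative sketch (not the paper's exact method): choose task
    weights w (softmax of logits theta, so w >= 0 and sums to 1) that
    minimize 0.5 * || sum_i w_i * g_aux_i - g_main ||^2."""
    theta = np.zeros(len(grads_aux))        # softmax logits, one per task
    G = np.stack(grads_aux)                 # (k, d): one gradient per task
    for _ in range(steps):
        w = np.exp(theta) / np.exp(theta).sum()
        resid = w @ G - grad_main           # weighted aux grad minus main grad
        dw = G @ resid                      # dL/dw, shape (k,)
        dtheta = w * (dw - w @ dw)          # chain rule through the softmax
        theta -= lr * dtheta
    return np.exp(theta) / np.exp(theta).sum()
```

An auxiliary task whose gradient agrees with the main task receives most of the weight, while an orthogonal one is down-weighted — the same qualitative behavior the abstract describes, without a grid search over weights.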






Neural Information Processing Systems

Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice.




WT-MVSNet: Window-based Transformers for Multi-view Stereo

Neural Information Processing Systems

A recent effort to perform attention-based matching along the epipolar lines of source images [32] suffers instead from sensitivity to inaccurate camera pose and calibration, which can in turn result in erroneous matching. Another key step in contemporary learned MVS methods is the regularization of the cost volume, generated by stacking cost maps associated with respective depth hypotheses.
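The cost-volume construction mentioned above can be sketched as follows. This is a generic sketch, not WT-MVSNet's implementation: for each depth hypothesis, a cost map is computed from the reference features and the source features pre-warped to that hypothesis (variance across views is a common choice of cost), and the maps are stacked along a depth axis. The warping step is assumed to be done upstream; names and shapes are illustrative.

```python
import numpy as np

def build_cost_volume(ref_feat, warped_srcs):
    """Generic variance-based cost volume sketch (not a specific paper's code).
    ref_feat:    reference feature map, shape (C, H, W).
    warped_srcs: list with one entry per depth hypothesis; entry d holds the
                 source features warped at hypothesis d, shape (n_views, C, H, W).
    Returns a cost volume of shape (D, H, W)."""
    cost_maps = []
    for srcs_d in warped_srcs:                         # one iteration per depth
        feats = np.concatenate([ref_feat[None], srcs_d], axis=0)
        var = feats.var(axis=0)                        # per-pixel variance, (C, H, W)
        cost_maps.append(var.mean(axis=0))             # reduce channels -> (H, W)
    return np.stack(cost_maps)                         # stack over depth -> (D, H, W)
```

A hypothesis at which the warped source features agree with the reference yields near-zero cost, so low-cost slices of the volume indicate likely depths; the regularization step the abstract refers to then smooths this volume before depth estimation.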



A Provably Efficient Sample Collection Strategy for Reinforcement Learning

Neural Information Processing Systems

One of the challenges in online reinforcement learning (RL) is that the agent needs to trade off exploration of the environment against exploitation of the collected samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off.
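The exploration-exploitation trade-off described above can be made concrete with a classic baseline — this is UCB1 on a multi-armed bandit, a standard illustration and not the paper's sample collection strategy. Each arm's score adds an optimism bonus that shrinks as the arm is sampled, so under-explored arms keep getting tried until the data rules them out.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, c=2.0, rng=None):
    """UCB1 sketch of the exploration-exploitation trade-off (illustrative
    baseline, not a specific paper's method). `pull(a, rng)` returns a
    stochastic reward for arm a."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                              # pull each arm once first
        else:
            bonus = np.sqrt(c * np.log(t) / counts)
            a = int(np.argmax(means + bonus))      # optimistic arm choice
        r = pull(a, rng)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]     # incremental mean update
    return counts, means
```

Run on two Bernoulli arms, the better arm ends up pulled far more often while the worse one is only sampled logarithmically — exactly the kind of trade-off the abstract says must be struck differently depending on whether one targets regret, sample complexity, or coverage.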