Graph Few-shot Learning with Task-specific Structures

Neural Information Processing Systems

Graph few-shot learning is of great importance among various graph learning tasks. Under the few-shot scenario, models are often required to conduct classification given limited labeled samples. Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks. Nevertheless, these methods generally rely on the original graph (i.e., the graph that the meta-task is sampled from) to learn node representations.



Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure

Neural Information Processing Systems

With the disentangled representations, we synthesize the counterfactual unbiased training samples to further decorrelate causal and bias variables.


Adversarial Reweighting for Partial Domain Adaptation

Neural Information Processing Systems

The conventional closed-set DA methods generally assume that the source and target domains share the same label space. However, this assumption is often not realistic in practice.


Class-Incremental Learning via Dual Augmentation

Neural Information Processing Systems

Typically, DNNs suffer from drastic performance degradation on previously learned tasks after learning new knowledge, which is a well-documented phenomenon known as catastrophic forgetting [8,9,10].


Confident-Anchor-Induced Multi-Source-Free Domain Adaptation

Neural Information Processing Systems

Unsupervised domain adaptation has attracted considerable academic attention by transferring knowledge from a labeled source domain to an unlabeled target domain.