PlumX Metrics

Similar norm more transferable: Rethinking feature norms discrepancy in adversarial domain adaptation

Knowledge-Based Systems, ISSN: 0950-7051, Vol: 296, Page: 111908, 2024
  • Citations: 7
  • Usage: 0
  • Captures: 2
  • Mentions: 0
  • Social Media: 0


Article Description

Adversarial learning has become an effective paradigm for learning transferable features in domain adaptation. However, many previous adversarial domain adaptation methods inevitably damage the discriminative information contained in transferable features, which limits the potential of adversarial learning. In this paper, we explore the reason for this phenomenon and find that, during adversarial adaptation, the model pays more attention to the alignment of feature norms than to the learning of domain-invariant features. Moreover, we observe that the feature norms contain crucial category information, which is ignored in previous studies. To achieve better adversarial adaptation, we propose two novel feature norm alignment strategies: Histogram-guided Norms Alignment (HNA) and Transport-guided Norms Alignment (TNA). Both strategies model the feature norms from the distribution perspective, which not only facilitates the reduction of the norm discrepancy but also makes full use of the discriminative information contained in the norms. Extensive experiments demonstrate that progressively aligning the feature norm distributions of the two domains effectively promotes the capture of semantically rich shared features and significantly boosts the model's transfer performance. We hope our findings can shed some light on future research on adversarial domain adaptation.
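
The abstract does not spell out the HNA/TNA objectives, but the core idea of "modeling feature norms from the distribution perspective" can be sketched as follows. The snippet below is an illustrative stand-in, not the paper's exact formulation: it assumes PyTorch, per-sample L2 feature norms, and equal-sized source/target batches, and uses the closed-form 1-D Wasserstein distance between sorted norm samples as the alignment loss.

```python
# Hypothetical sketch of distribution-level feature-norm alignment.
# Not the paper's exact HNA/TNA losses; it only illustrates matching the
# distributions of source and target feature norms.
import torch


def norm_alignment_loss(source_feats: torch.Tensor,
                        target_feats: torch.Tensor) -> torch.Tensor:
    """Squared 1-D Wasserstein distance between the per-sample feature-norm
    distributions of a source batch and a target batch (equal batch sizes)."""
    src_norms = source_feats.norm(p=2, dim=1)   # [B] per-sample L2 norms
    tgt_norms = target_feats.norm(p=2, dim=1)   # [B]
    # Sorting both samples gives the optimal 1-D transport plan, so the loss
    # shrinks the gap between the whole norm distributions, not just the means.
    src_sorted, _ = torch.sort(src_norms)
    tgt_sorted, _ = torch.sort(tgt_norms)
    return ((src_sorted - tgt_sorted) ** 2).mean()


if __name__ == "__main__":
    # Toy usage: features as they might come from a backbone; in practice this
    # loss would be added to the usual adversarial adaptation objective.
    fs = torch.randn(32, 256)   # source features
    ft = torch.randn(32, 256)   # target features
    print(norm_alignment_loss(fs, ft).item())
```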

