Graph embedding-based heterogeneous domain adaptation with domain-invariant feature learning and distributional order preserving
Neural Networks, ISSN: 0893-6080, Vol: 170, Page: 427-440
2024
- 1 Citation
- 3 Captures
Article Description
Heterogeneous domain adaptation (HDA) methods leverage prior knowledge from the source domain to train models for the target domain and address the differences between their feature spaces. However, in most existing methods, unlabeled target samples can cause incorrect category alignment and disrupt the distribution structure during domain alignment, resulting in negative transfer. Additionally, previous works rarely address the robustness and interpretability of the model. To address these issues, we propose a novel Graph embedding-based Heterogeneous domain-Invariant feature learning and Distributional order preserving framework (GHID). Specifically, a bidirectional robust cross-domain alignment graph embedding structure is proposed to globally align the two domains, learning domain-invariant and discriminative features simultaneously. In addition, the interpretability of the proposed graph structures is demonstrated through two theoretical analyses, which elucidate the correlations between important samples from a global perspective in heterogeneous domain alignment scenarios. Then, a heterogeneous discriminative distributional order preserving graph embedding structure is designed to preserve the original distribution relationships within each domain and thereby prevent negative transfer. Moreover, a dynamic centroid strategy is incorporated into the graph structures to improve the robustness of the model. Comprehensive experimental results on four benchmarks demonstrate that the proposed method outperforms other state-of-the-art approaches in effectiveness.
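To make the graph-embedding alignment idea in the abstract concrete, the sketch below shows a generic cross-domain graph embedding objective: build an affinity graph between source and target features, then penalize distances between strongly connected cross-domain pairs. This is a minimal illustration of the general technique, not the authors' GHID formulation; the function names, the RBF affinity choice, and the `gamma` parameter are assumptions for the example.

```python
import numpy as np

def rbf_affinity(A, B, gamma=1.0):
    """Cross-domain affinity graph: W[i, j] = exp(-gamma * ||a_i - b_j||^2).

    Hypothetical choice of graph weights; GHID defines its own structures.
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def graph_alignment_loss(src, tgt, gamma=1.0):
    """Generic graph-embedding alignment objective:
    mean_ij W_ij * ||s_i - t_j||^2, pulling similar cross-domain pairs together.

    Assumes src and tgt were already projected into a common feature space,
    as HDA methods do before alignment.
    """
    W = rbf_affinity(src, tgt, gamma)
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return float((W * d2).mean())
```

Minimizing such an objective over the feature projections encourages domain-invariant features; the distributional-order-preserving structure described above would add a second graph term that keeps each domain's original neighborhood relationships intact.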
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0893608023006718
- https://dx.doi.org/10.1016/j.neunet.2023.11.048
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85178465000&origin=inward
- http://www.ncbi.nlm.nih.gov/pubmed/38035485
- https://linkinghub.elsevier.com/retrieve/pii/S0893608023006718
Elsevier BV