Meta-collaboration-based semantic contrast for inductive knowledge representation learning
Expert Systems with Applications, ISSN: 0957-4174, Vol. 261, 125421, 2025
Article Description
Inductive knowledge representation learning aims to effectively represent new entities in emerging knowledge graphs (KGs) based on existing ones, thereby supporting reasoning over facts that involve newly emerging entities. It requires a scheme that can derive instant knowledge to generate rational representations for new entities. Recently, several approaches have been developed for this task by capturing logical rules between entities, mining the structural information of KGs, or learning multiple structural patterns of entities. However, two main challenges remain unresolved. The first is structural bias-induced semantic ambiguity: new entities are typically represented in a manner akin to entities with analogous structures, a process that emphasizes identical structures while overlooking differences. We refer to this as the structural bias, which causes the generated representations of new entities to be semantically ambiguous. The second is the potential sparsity of new entities. New entities that occur with low frequency receive insufficient representations, even when a model is trained from scratch at substantial cost. To this end, we propose a Meta-Collaboration-based Semantic Contrast (MCSC) framework to address both issues. Following the current meta-task formulation scheme, we first acquire relation-specific knowledge to produce entity representations by modeling relations as feedback. Then, a multi-layer graph neural network module aggregates sufficient neighbor information to update the entity representations. Finally, a collaborative semantic contrast module is designed over both the support and query sets to overcome, respectively, the structural bias-induced semantic ambiguity and the potential sparsity of new entities.
Such a meta-collaboration-based semantic contrast procedure goes beyond current meta-learning paradigms for inductive knowledge representation learning, which fail to learn semantically discriminative entity representations. Experimental results on the inductive link prediction task across 12 benchmark datasets demonstrate the superiority of MCSC over state-of-the-art baselines.
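To make the two building blocks concrete, the following is a minimal sketch of (a) neighbor aggregation for updating an entity representation and (b) an InfoNCE-style contrastive objective of the kind a semantic-contrast module typically optimizes. All function names, the mixing weight `alpha`, the temperature `tau`, and the specific loss form are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def aggregate(entity_vec, neighbor_vecs, alpha=0.5):
    """Update an entity representation by mixing it with the mean of its
    neighbors' vectors (a single, simplified GNN aggregation step).
    `alpha` (assumed) balances self-information against neighbor information."""
    return alpha * entity_vec + (1.0 - alpha) * np.mean(neighbor_vecs, axis=0)

def info_nce_loss(anchor, positive, negatives, tau=0.1):
    """Illustrative InfoNCE contrastive loss: pull the anchor toward the
    positive sample and push it away from the negatives, using cosine
    similarity scaled by a temperature `tau` (assumed value)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

# Toy usage: an entity updated from its neighbors, then contrasted against
# one semantically similar (positive) and one dissimilar (negative) entity.
entity = aggregate(np.array([2.0, 0.0]),
                   np.array([[0.0, 2.0], [0.0, 0.0]]))
loss = info_nce_loss(entity,
                     positive=np.array([1.0, 0.4]),
                     negatives=[np.array([0.0, 1.0])])
```

A loss of this shape rewards representations in which structurally similar but semantically different entities are kept apart, which is the discriminative property the abstract argues prior meta-learning paradigms lack.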
Bibliographic Details
Elsevier BV