PlumX Metrics

Dual-Channel and Hierarchical Graph Convolutional Networks for document-level relation extraction

Expert Systems with Applications, ISSN: 0957-4174, Vol: 205, Page: 117678
2022
  • Citations: 17
  • Usage: 0
  • Captures: 12
  • Mentions: 0
  • Social Media: 0

Metrics Details

  • Citations: 17 (Citation Indexes: 17)
  • Captures: 12

Article Description

Document-level relation extraction aims to infer complex semantic relations among entities in an entire document. Compared with sentence-level relation extraction, document-level relational facts are expressed by multiple mentions scattered across sentences over long distances, which demands strong reasoning ability. In this paper, we propose Dual-Channel and Hierarchical Graph Convolutional Networks (DHGCN), which constructs three graphs at the token, mention, and entity levels to model complex interactions among different semantic representations across the document. On top of these multi-level graphs, we apply a Graph Convolutional Network (GCN) at each level to aggregate the relevant information scattered throughout the document, better inferring the implicit relations. Moreover, we propose a dual-channel encoder that captures structural and contextual information simultaneously and supplies the contextual representation to the higher layers to avoid losing low-dimensional information. Our DHGCN yields significant improvements over state-of-the-art methods of 2.75, 5.5, and 3.5 F1 on DocRED, CDR, and GDA, respectively, which are popular document-level relation extraction datasets. Furthermore, to demonstrate the effectiveness of our method, we evaluate DHGCN on a fine-grained clinical document-level dataset, Symptom-Acupoint Relation (SAR), which we propose and make available at https://github.com/QiSun123/SAR. The experimental results show that DHGCN is able to infer more valuable relations among entities in a document.
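The abstract describes applying a GCN at each graph level (token, mention, entity) to aggregate information over the document. As a rough illustration, the sketch below implements one layer of the standard GCN propagation rule (symmetric normalization with self-loops, linear projection, ReLU) on a toy token-level graph. The graph construction, feature sizes, and weights here are illustrative assumptions, not the paper's actual DHGCN architecture.

```python
import numpy as np

def normalize_adjacency(A):
    # Standard GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2},
    # where I adds self-loops and D is the degree matrix of (A + I).
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    # One propagation step: aggregate neighbor features, project, apply ReLU.
    return np.maximum(0.0, A_hat @ H @ W)

# Toy token-level graph: 4 token nodes, edges between adjacent tokens.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # initial node representations (hypothetical dims)
W = rng.normal(size=(8, 8))   # weight matrix (random stand-in for a learned one)

H_out = gcn_layer(normalize_adjacency(A), H, W)
print(H_out.shape)  # (4, 8): one updated representation per token node
```

In DHGCN the same kind of aggregation would run separately on the mention-level and entity-level graphs, with the dual-channel encoder supplying the input representations.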
