
Relationship Extraction

Relationship extraction is the task of extracting semantic relationships from a text. Extracted relationships usually occur between two or more entities of a certain type (e.g. Person, Organisation, Location) and fall into a number of semantic categories (e.g. married to, employed by, lives in).

Capturing discriminative attributes (SemEval 2018 Task 10)

Capturing discriminative attributes (SemEval 2018 Task 10) is a binary classification task where participants were asked to identify whether an attribute could help discriminate between two concepts. Unlike other word similarity prediction tasks, this task focuses on the semantic differences between words.

e.g. red (attribute) can be used to discriminate apple (concept1) from banana (concept2) -> label 1

More examples:

| concept1 | concept2 | attribute | label |
| --- | --- | --- | --- |
| bookcase | fridge | wood | 1 |
| bucket | mug | round | 0 |
| angle | curve | sharp | 1 |
| pelican | turtle | water | 0 |
| wire | coil | metal | 0 |

Task paper: https://www.aclweb.org/anthology/S18-1117

Task Codalab: https://competitions.codalab.org/competitions/17326
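
Most of the systems in the table below compare the attribute with each concept in some embedding or knowledge space. The following is a minimal sketch of that general idea (not any participant's official system); the GloVe file name and the decision threshold are illustrative assumptions.

```python
import numpy as np

def load_vectors(path):
    """Load whitespace-separated word vectors (e.g. GloVe) into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def is_discriminative(vectors, concept1, concept2, attribute, threshold=0.1):
    """Predict 1 if the attribute is closer to concept1 than to concept2 by a margin."""
    a = vectors[attribute]
    gap = cosine(vectors[concept1], a) - cosine(vectors[concept2], a)
    return int(gap > threshold)

# Illustrative usage (file name and threshold are assumptions, not tuned values):
# vectors = load_vectors("glove.6B.300d.txt")
# print(is_discriminative(vectors, "apple", "banana", "red"))  # expected label: 1
```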

| Model | Explainability | F1 Score | Paper / Source | Code |
| --- | --- | --- | --- | --- |
| SVM with GloVe | None | 0.76 | SUNNYNLP at SemEval-2018 Task 10: A Support-Vector-Machine-Based Method for Detecting Semantic Difference using Taxonomy and Word Embedding Features | Author’s |
| SVM with ConceptNet, Wikipedia articles and WordNet synonyms | None | 0.74 | Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge | Author’s |
| MLP combining information from various DSMs, PMI, and ConceptNet | None | 0.73 | THU NGN at SemEval-2018 Task 10: Capturing Discriminative Attributes with MLP-CNN model |  |
| Gradient boosting with co-occurrence count features and JoBimText features | None | 0.73 | BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes |  |
| LexVec, word co-occurrence, and ConceptNet data combined using maximum entropy classifier | None | 0.72 | UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions | Author’s |
| Composes explicit vector spaces from WordNet Definitions, ConceptNet and Visual Genome | Fully Explainable | 0.69 | Identifying and Explaining Discriminative Attributes | Author’s |
| Word2Vec cosine similarities of WordNet glosses | Transp. (No expl.) | 0.69 | Meaning space at SemEval-2018 Task 10: Combining explicitly encoded knowledge with information extracted from word embeddings | Author’s |
| Use of Wikipedia and ConceptNet | Transp. (No expl.) | 0.69 | ELiRF-UPV at SemEval-2018 Task 10: Capturing Discriminative Attributes with Knowledge Graphs and Wikipedia |  |

FewRel

The Few-Shot Relation Classification Dataset (FewRel) uses a different setting from the previous datasets. It consists of 70K sentences expressing 100 relations, annotated by crowdworkers over a Wikipedia corpus. The few-shot learning task follows the N-way K-shot meta-learning setting.
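
A minimal sketch of how an N-way K-shot episode is typically sampled in this setting; the data layout (a mapping from relation name to labelled sentences) and the toy relations are illustrative assumptions, not the official FewRel loader.

```python
import random

def sample_episode(data, n_way=5, k_shot=1, n_query=1):
    """Sample one N-way K-shot episode from {relation: [sentence, ...]}."""
    relations = random.sample(list(data), n_way)              # pick N relations
    support, query = [], []
    for label, rel in enumerate(relations):
        sentences = random.sample(data[rel], k_shot + n_query)
        support += [(s, label) for s in sentences[:k_shot]]   # K labelled examples per relation
        query += [(s, label) for s in sentences[k_shot:]]     # held-out instances to classify
    return support, query

# Toy data; real FewRel instances also mark the head/tail entity mentions.
toy = {rel: [f"sentence {i} for {rel}" for i in range(10)]
       for rel in ["founded_by", "capital_of", "spouse", "member_of", "located_in"]}
support, query = sample_episode(toy, n_way=5, k_shot=1, n_query=1)
```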

The public leaderboard is available on the FewRel website.

FewRel 2

FewRel 2 extends FewRel with two new challenges: (1) adaptability to a new domain with only a handful of instances, and (2) the ability to detect none-of-the-above relations. The paper is available at ACL Web.

The public leaderboard is available on the FewRel 2 website.

Multi-Way Classification of Semantic Relations Between Pairs of Nominals (SemEval 2010 Task 8)

SemEval-2010 introduced ‘Task 8 - Multi-Way Classification of Semantic Relations Between Pairs of Nominals’. The task is, given a sentence and two tagged nominals, to predict the relation between those nominals and the direction of the relation. The dataset contains nine general semantic relations together with a tenth ‘OTHER’ relation.

Example:

There were apples, pears and oranges in the bowl.

(content-container, pears, bowl)

The main evaluation metric used is macro-averaged F1, averaged across the nine proper relationships (i.e. excluding the OTHER relation), taking directionality of the relation into account.
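
A simplified sketch of this scoring scheme, assuming gold and predicted labels are directed strings such as Cause-Effect(e1,e2). It is not the official Perl scorer (which groups the two directions of each relation before macro-averaging), so numbers may differ slightly.

```python
from sklearn.metrics import f1_score

def semeval_macro_f1(gold, pred):
    """Macro-F1 over directed relation labels, excluding the OTHER class."""
    labels = sorted({label for label in gold + pred if label != "Other"})
    return f1_score(gold, pred, labels=labels, average="macro")

gold = ["Cause-Effect(e1,e2)", "Other", "Content-Container(e2,e1)"]
pred = ["Cause-Effect(e1,e2)", "Cause-Effect(e2,e1)", "Content-Container(e2,e1)"]
print(round(semeval_macro_f1(gold, pred), 3))
```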

Several papers have used additional data (e.g. pre-trained word embeddings, WordNet) to improve performance. The figures reported here are the highest achieved by the model using any external resources.

End-to-End Models

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| **BERT-based Models** |  |  |  |
| A-GCN (Tian et al., 2021) | 89.85 | Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks | Official |
| Matching-the-Blanks (Baldini Soares et al., 2019) | 89.5 | Matching the Blanks: Distributional Similarity for Relation Learning |  |
| R-BERT (Wu et al., 2019) | 89.25 | Enriching Pre-trained Language Model with Entity Information for Relation Classification | mickeystroller’s Reimplementation |
| **CNN-based Models** |  |  |  |
| Multi-Attention CNN (Wang et al., 2016) | 88.0 | Relation Classification via Multi-Level Attention CNNs | lawlietAi’s Reimplementation |
| Attention CNN (Huang and Y Shen, 2016) | 84.3 / 85.9* | Attention-Based Convolutional Neural Network for Semantic Relation Extraction |  |
| CR-CNN (dos Santos et al., 2015) | 84.1 | Classifying Relations by Ranking with Convolutional Neural Network | pratapbhanu’s Reimplementation |
| CNN (Zeng et al., 2014) | 82.7 | Relation Classification via Convolutional Deep Neural Network | roomylee’s Reimplementation |
| **RNN-based Models** |  |  |  |
| Entity Attention Bi-LSTM (Lee et al., 2019) | 85.2 | Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing | Official |
| Hierarchical Attention Bi-LSTM (Xiao and C Liu, 2016) | 84.3 | Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention |  |
| Attention Bi-LSTM (Zhou et al., 2016) | 84.0 | Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification | SeoSangwoo’s Reimplementation |
| Bi-LSTM (Zhang et al., 2015) | 82.7 / 84.3* | Bidirectional long short-term memory networks for relation classification |  |

*: It uses external lexical resources, such as WordNet, part-of-speech tags, dependency tags, and named entity tags.

Dependency Models

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| BRCNN (Cai et al., 2016) | 86.3 | Bidirectional Recurrent Convolutional Neural Network for Relation Classification |  |
| DRNNs (Xu et al., 2016) | 86.1 | Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation |  |
| depLCNN + NS (Xu et al., 2015a) | 85.6 | Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling |  |
| SDP-LSTM (Xu et al., 2015b) | 83.7 | Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path | Sshanu’s Reimplementation |
| DepNN (Liu et al., 2015) | 83.6 | A Dependency-Based Neural Network for Relation Classification |  |
| FCN (Yu et al., 2014) | 83.0 | Factor-based compositional embedding models |  |
| MVRNN (Socher et al., 2012) | 82.4 | Semantic Compositionality through Recursive Matrix-Vector Spaces | pratapbhanu’s Reimplementation |

New York Times Corpus

The standard corpus for distantly supervised relationship extraction is the New York Times (NYT) corpus, published in Riedel et al., 2010.

It contains text from the New York Times Annotated Corpus, with named entities extracted from the text using the Stanford NER system and automatically linked to entities in the Freebase knowledge base. Pairs of named entities are labelled with relationship types by aligning them against facts in Freebase. (The process of using a separate database to provide labels is known as ‘distant supervision’.)

Example:

Elevation Partners, the $1.9 billion private equity group that was founded by Roger McNamee

(founded_by, Elevation_Partners, Roger_McNamee)
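
A minimal sketch of the distant-supervision labelling step described above; the toy knowledge-base facts and the NA label for unaligned pairs are illustrative assumptions standing in for the actual Freebase alignment.

```python
# Toy knowledge base of (head, tail) -> relation facts standing in for Freebase.
KB = {
    ("Elevation_Partners", "Roger_McNamee"): "founded_by",
    ("The_New_York_Times", "New_York_City"): "headquartered_in",
}

def distant_label(sentence, entity_pairs, kb=KB):
    """Label each entity pair co-occurring in the sentence with its KB relation,
    or 'NA' when the pair has no known fact (the distant-supervision assumption)."""
    return [(kb.get((head, tail), "NA"), head, tail, sentence)
            for head, tail in entity_pairs]

sent = "Elevation Partners, the $1.9 billion private equity group that was founded by Roger McNamee"
print(distant_label(sent, [("Elevation_Partners", "Roger_McNamee")]))
```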

Different papers have reported various metrics since the release of the dataset, making it difficult to compare systems directly. The main metrics used are either precision at N results or precision-recall curves. The range of recall reported has increased over the years as systems have improved, with earlier systems having very low precision at 30% recall.
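
The P@10% and P@30% columns below are read off such a precision-recall curve: rank every extracted fact by model confidence and take the precision at the point where recall first reaches 10% or 30%. A minimal sketch, assuming predictions come as (confidence, is_correct) pairs:

```python
def precision_at_recall(scored, total_gold, target_recall=0.1):
    """Precision at the point where recall first reaches target_recall.
    scored: list of (confidence, is_correct) predictions; total_gold: number of true facts."""
    ranked = sorted(scored, key=lambda x: -x[0])       # rank by model confidence
    correct = 0
    for i, (_, is_correct) in enumerate(ranked, start=1):
        correct += int(is_correct)
        if correct / total_gold >= target_recall:
            return correct / i                         # precision at this cutoff
    return 0.0                                         # recall level never reached

preds = [(0.95, True), (0.90, True), (0.80, False), (0.70, True), (0.60, False)]
print(precision_at_recall(preds, total_gold=10, target_recall=0.1))  # 1.0
```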

| Model | P@10% | P@30% | Paper / Source | Code |
| --- | --- | --- | --- | --- |
| KGPOOL (Nadgeri et al., 2021) | 92.3 | 86.7 | KGPool: Dynamic Knowledge Graph Context Selection for Relation Extraction | KGPOOL |
| RECON (Bastos et al., 2021) | 87.5 | 74.1 | RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network | RECON |
| HRERE (Xu et al., 2019) | 84.9 | 72.8 | Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction | HRERE |
| PCNN+noise_convert+cond_opt (Wu et al., 2019) | 81.7 | 61.8 | Improving Distantly Supervised Relation Extraction with Neural Noise Converter and Conditional Optimal Selector |  |
| Intra- and Inter-Bag (Ye and Ling, 2019) | 78.9 | 62.4 | Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions | Code |
| RESIDE (Vashishth et al., 2018) | 73.6 | 59.5 | RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information | RESIDE |
| PCNN+ATT (Lin et al., 2016) | 69.4 | 51.8 | Neural Relation Extraction with Selective Attention over Instances | OpenNRE |
| MIML-RE (Surdeanu et al., 2012) | 60.7+ | - | Multi-instance Multi-label Learning for Relation Extraction | Mimlre |
| MultiR (Hoffmann et al., 2011) | 60.9+ | - | Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations | MultiR |
| (Mintz et al., 2009) | 39.9+ | - | Distant supervision for relation extraction without labeled data |  |

(+) Obtained from results in the paper “Neural Relation Extraction with Selective Attention over Instances”

WikiData dataset for Sentential Relation Extraction

Sentential relation extraction ignores any other occurrences of a given entity pair and makes the target relation prediction at the sentence level (Sorokin and Gurevych, 2017). The paper introduces a dataset built over the Wikidata knowledge graph containing 353 relations.

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| KGPOOL (Nadgeri et al., 2021) | 88.60 | KGPool: Dynamic Knowledge Graph Context Selection for Relation Extraction |  |
| RECON (Bastos et al., 2021) | 87.23 | RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network |  |
| GPGNN (Zhu et al., 2019) | 82.29 | Graph Neural Networks with Generated Parameters for Relation Extraction |  |
| ContextAware (Sorokin and Gurevych, 2017) | 72.07 | Context-Aware Representations for Knowledge Base Relation Extraction |  |

Joint Entity and Relation Extraction

In this task, binary relation tuples (two entities and a relation between them) are jointly extracted from sentences. The input to the models is just the sentence and a set of relations; the output is a set of relation tuples. Models should extract all relation tuples present in the sentence, with full entity names and including overlapping entities. F1 score is used to evaluate the models: an extracted tuple is considered correct if its two entities and its relation match a ground-truth tuple.
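
A minimal sketch of this exact-match tuple evaluation; how individual papers normalise entity strings or order the two entities varies, so treat this as one reasonable reading rather than a shared official scorer.

```python
def tuple_f1(gold_tuples, pred_tuples):
    """Micro F1 over (entity1, relation, entity2) tuples: a prediction counts as
    correct only if both full entity names and the relation match a gold tuple."""
    gold, pred = set(gold_tuples), set(pred_tuples)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = [("Elevation Partners", "founded_by", "Roger McNamee")]
pred = [("Elevation Partners", "founded_by", "Roger McNamee"),
        ("Elevation Partners", "headquartered_in", "Tampa")]
print(round(tuple_f1(gold, pred), 3))  # precision 0.5, recall 1.0 -> 0.667
```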

NYT29

This dataset is derived from the New York Times dataset of Riedel et al., 2010. It has 29 relations.

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| WDec (Nayak and Ng, 2020) | 0.682 | Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction | PtrNetDecoding4JERE |
| PNDec (Nayak and Ng, 2020) | 0.673 | Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction | PtrNetDecoding4JERE |
| HRLRE (Takanobu et al., 2019) | 0.643 | A Hierarchical Framework for Relation Extraction with Reinforcement Learning | HRLRE |

NYT24

This dataset is derived from the New York Times dataset of Hoffmann et al., 2011. It has 24 relations.

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| WDec (Nayak and Ng, 2020) | 0.817 | Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction | PtrNetDecoding4JERE |
| PNDec (Nayak and Ng, 2020) | 0.789 | Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction | PtrNetDecoding4JERE |
| HRLRE (Takanobu et al., 2019) | 0.776 | A Hierarchical Framework for Relation Extraction with Reinforcement Learning | HRLRE |

TACRED

TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended and org:members) or are labeled as no_relation if no defined relation holds. These examples were created by combining available human annotations from the TAC KBP challenges with crowdsourcing.

Example:

Billy Mays, the bearded, boisterous pitchman who, as the undisputed king of TV yell and sell, became an unlikely pop culture icon, died at his home in Tampa, Fla., on Sunday.

(per:city_of_death, Billy Mays, Tampa)

The main evaluation metric used is micro-averaged F1 over instances with proper relationships (i.e. excluding the no_relation type).
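
A minimal sketch of this metric under the commonly used TACRED convention: precision is computed over predicted positive (non-no_relation) instances and recall over gold positive instances.

```python
def tacred_micro_f1(gold, pred):
    """Micro-averaged F1 that ignores the no_relation class."""
    pairs = list(zip(gold, pred))
    pred_pos = [(g, p) for g, p in pairs if p != "no_relation"]   # predicted positives
    gold_pos = [(g, p) for g, p in pairs if g != "no_relation"]   # gold positives
    correct = sum(1 for g, p in pred_pos if g == p)
    precision = correct / len(pred_pos) if pred_pos else 0.0
    recall = correct / len(gold_pos) if gold_pos else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["per:city_of_death", "no_relation", "org:members"]
pred = ["per:city_of_death", "org:members", "no_relation"]
print(round(tacred_micro_f1(gold, pred), 3))  # P = R = 0.5 -> 0.5
```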

| Model | F1 | Paper / Source | Code |
| --- | --- | --- | --- |
| LUKE (Yamada et al., 2020) | 72.7 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | Official |
| Matching-the-Blanks (Baldini Soares et al., 2019) | 71.5 | Matching the Blanks: Distributional Similarity for Relation Learning |  |
| C-GCN + PA-LSTM (Zhang et al., 2018) | 68.2 | Graph Convolution over Pruned Dependency Trees Improves Relation Extraction | Official |
| PA-LSTM (Zhang et al., 2017) | 65.1 | Position-aware Attention and Supervised Data Improve Slot Filling | Official |

Go back to the README