TY  - CONF
T1  - MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories
T2 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021
AU - Choi, Minjin
AU - Lee, Sunkyung
AU - Choi, Eunseong
AU - Park, Heesoo
AU - Lee, Junhyuk
AU - Lee, Dongwon
AU - Lee, Jongwuk
N1  - Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) (NRF-2018R1A5A1060031). Also, this work was supported by the Institute of Information & communications Technology Planning & evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00421, AI Graduate School Support Program and No.2019-0-01590, High-Potential Individuals Global Training Program). The work of Dongwon Lee was in part supported by NSF awards #1742702, #1820609, #1909702, #1915801, and #1934782.
Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
N2  - Automated metaphor detection is the challenging task of identifying metaphorical expressions of words in a sentence. To tackle this problem, we adopt pre-trained contextualized models, e.g., BERT and RoBERTa. To this end, we propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT). Our model not only leverages contextualized word representations but also benefits from linguistic metaphor identification theories to detect whether a target word is metaphorical. Our empirical results demonstrate that MelBERT outperforms several strong baselines on four benchmark datasets, i.e., VUA-18, VUA-20, MOH-X, and TroFi.
AB  - Automated metaphor detection is the challenging task of identifying metaphorical expressions of words in a sentence. To tackle this problem, we adopt pre-trained contextualized models, e.g., BERT and RoBERTa. To this end, we propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT). Our model not only leverages contextualized word representations but also benefits from linguistic metaphor identification theories to detect whether a target word is metaphorical. Our empirical results demonstrate that MelBERT outperforms several strong baselines on four benchmark datasets, i.e., VUA-18, VUA-20, MOH-X, and TroFi.
UR - http://www.scopus.com/inward/record.url?scp=85137683065&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137683065&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85137683065
T3 - NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
SP - 1763
EP - 1773
BT - NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics
PB - Association for Computational Linguistics (ACL)
Y2 - 6 June 2021 through 11 June 2021
ER -