Unsupervised pre-training with sequence reconstruction loss for deep relation extraction models

Zhuang Li, Lizhen Qu, Qiongkai Xu, Mark Johnson

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research

10 Citations (Scopus)

Abstract

Relation extraction models based on deep learning have attracted a lot of attention recently, but little research has been carried out on reducing their need for labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half as much training data, or even less, to achieve performance equivalent to that of the same models without pre-training.
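The record does not reproduce the paper's implementation. As a rough, hypothetical sketch of the idea described in the abstract (pre-train a sentence encoder with a sequence-to-sequence reconstruction loss on unlabeled text, then reuse that encoder in a supervised relation classifier), the two stages might look like the following PyTorch code. The module structure, GRU architecture, dimensions, and relation count are illustrative assumptions, not the authors' setup.

    # Hypothetical sketch: seq2seq reconstruction pre-training, then
    # supervised relation classification with the same encoder.
    # All sizes and architectural choices are assumptions for illustration.
    import torch
    import torch.nn as nn

    VOCAB_SIZE, EMB_DIM, HID_DIM, NUM_RELATIONS = 10_000, 128, 256, 19


    class Encoder(nn.Module):
        """Sentence encoder shared by the autoencoder and the classifier."""

        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)
            self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)

        def forward(self, tokens):
            _, h = self.rnn(self.embed(tokens))   # h: (1, batch, HID_DIM)
            return h


    class Reconstructor(nn.Module):
        """Decoder that tries to reproduce the input sequence (pre-training only)."""

        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder
            self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
            self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

        def forward(self, tokens):
            h = self.encoder(tokens)
            # Feed the input embeddings back into the decoder conditioned on h.
            # (A full setup would shift targets by one step and use a BOS token.)
            dec_out, _ = self.rnn(self.encoder.embed(tokens), h)
            return self.out(dec_out)              # logits over the vocabulary


    class RelationClassifier(nn.Module):
        """Supervised model that reuses the pre-trained encoder."""

        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder
            self.cls = nn.Linear(HID_DIM, NUM_RELATIONS)

        def forward(self, tokens):
            return self.cls(self.encoder(tokens).squeeze(0))


    # Stage 1: unsupervised pre-training with a sequence reconstruction loss.
    encoder = Encoder()
    autoencoder = Reconstructor(encoder)
    opt = torch.optim.Adam(autoencoder.parameters())
    unlabeled = torch.randint(0, VOCAB_SIZE, (32, 20))    # stand-in for raw text
    logits = autoencoder(unlabeled)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB_SIZE), unlabeled.reshape(-1))
    loss.backward()
    opt.step()

    # Stage 2: supervised fine-tuning on (much less) labeled relation data.
    classifier = RelationClassifier(encoder)               # encoder weights carried over
    clf_opt = torch.optim.Adam(classifier.parameters())
    labeled_x = torch.randint(0, VOCAB_SIZE, (8, 20))
    labeled_y = torch.randint(0, NUM_RELATIONS, (8,))
    clf_loss = nn.CrossEntropyLoss()(classifier(labeled_x), labeled_y)
    clf_loss.backward()
    clf_opt.step()

In this sketch, only the encoder weights are transferred between the two stages; that reuse of representations learned from unlabeled text is what the abstract credits for matching the baseline's performance with far less labeled data.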
Original language: English
Title of host publication: Australasian Language Technology Association Workshop 2016 - Proceedings of the Workshop
Editors: Trevor Cohn
Place of Publication: Caulfield VIC, Australia
Publisher: Association for Computational Linguistics (ACL)
Pages: 54-64
Number of pages: 11
Volume: 14
Publication status: Published - 2016
Externally published: Yes
Event: Australasian Language Technology Association Workshop 2016 - Monash University, Melbourne, Australia
Duration: 5 Dec 2016 - 7 Dec 2016
Conference number: 14th
https://www.aclweb.org/anthology/events/alta-2016/ (Proceedings)

Conference

Conference: Australasian Language Technology Association Workshop 2016
Abbreviated title: ALTA 2016
Country/Territory: Australia
City: Melbourne
Period: 5/12/16 - 7/12/16
Other: ALTA 2016 was co-located with ADCS 2016
Internet address: https://www.aclweb.org/anthology/events/alta-2016/
