Unsupervised pre-training with Seq2Seq reconstruction loss for deep relation extraction models

Zhuang Li, Lizhen Qu, Qiongkai Xu, Mark Johnson

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research


Relation extraction models based on deep learning have attracted a lot of attention recently, but little research has been carried out on reducing their need for labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half, or even less, of the training data to achieve performance equivalent to that of the same models without pre-training.
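The general recipe described in the abstract — pre-train an encoder with a sequence-to-sequence reconstruction loss on unlabeled sentences, then reuse that encoder for supervised relation classification — can be sketched as follows. This is a minimal illustration assuming a PyTorch LSTM encoder-decoder; the architecture, hyperparameters, and toy data are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: seq2seq reconstruction pre-training of a sentence encoder,
# followed by reuse of that encoder for relation classification.
# All sizes and the toy data below are illustrative, not from the paper.
import torch
import torch.nn as nn

VOCAB, EMB, HID, N_REL = 100, 32, 64, 5

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
    def forward(self, x):
        _, (h, c) = self.rnn(self.emb(x))   # final hidden state summarizes the sentence
        return h, c

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)
    def forward(self, x, state):
        o, _ = self.rnn(self.emb(x), state)
        return self.out(o)                  # per-step vocabulary logits

# --- unsupervised pre-training: reconstruct the input sentence ---
enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randint(0, VOCAB, (8, 12))        # a batch of unlabeled "sentences"
state = enc(x)
logits = dec(x[:, :-1], state)              # teacher-forced next-token prediction
loss = loss_fn(logits.reshape(-1, VOCAB), x[:, 1:].reshape(-1))
loss.backward()
opt.step()

# --- supervised fine-tuning: reuse the pre-trained encoder ---
clf = nn.Linear(HID, N_REL)
h, _ = enc(x)
rel_logits = clf(h[-1])                     # sentence vector -> relation scores
print(rel_logits.shape)                     # torch.Size([8, 5])
```

In this setup the decoder is discarded after pre-training; only the encoder's weights carry over, which is what lets the downstream relation classifier start from a sentence representation learned on unlabeled text.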
Original language: English
Title of host publication: Australasian Language Technology Association Workshop 2016 - Proceedings of the Workshop
Editors: Trevor Cohn
Place of Publication: Caulfield VIC Australia
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 11
Publication status: Published - 2016
Externally published: Yes
Event: Australasian Language Technology Association Workshop 2016 - Monash University, Melbourne, Australia
Duration: 5 Dec 2016 - 7 Dec 2016
Conference number: 14th
https://www.aclweb.org/anthology/events/alta-2016/ (Proceedings)


Conference: Australasian Language Technology Association Workshop 2016
Abbreviated title: ALTA 2016
Other: ALTA 2016 was co-located with ADCS 2016
