Abstract
Relation extraction models based on deep learning have attracted a great deal of attention recently, yet little research has been carried out to reduce their need for labeled training data. In this work, we propose an unsupervised pre-training method based on the sequence-to-sequence model for deep relation extraction models. The pre-trained models need only half, or even less, of the training data to achieve performance equivalent to that of the same models without pre-training.
Original language | English |
---|---|
Title of host publication | Australasian Language Technology Association Workshop 2016 - Proceedings of the Workshop |
Editors | Trevor Cohn |
Place of Publication | Caulfield VIC Australia |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 54-64 |
Number of pages | 11 |
Volume | 14 |
Publication status | Published - 2016 |
Externally published | Yes |
Event | Australasian Language Technology Association Workshop 2016, Monash University, Melbourne, Australia. Duration: 5 Dec 2016 → 7 Dec 2016. Conference number: 14th. https://www.aclweb.org/anthology/events/alta-2016/ (Proceedings) |
Conference
Conference | Australasian Language Technology Association Workshop 2016 |
---|---|
Abbreviated title | ALTA 2016 |
Country/Territory | Australia |
City | Melbourne |
Period | 5/12/16 → 7/12/16 |
Other | ALTA 2016 was co-located with ADCS 2016 |
Internet address | https://www.aclweb.org/anthology/events/alta-2016/ |