This paper presents a deep linguistic attentional framework that incorporates word-level concept information into neural classification models. While learning neural classification models often requires a large amount of labelled data, linguistic concept information can be obtained from external knowledge sources, such as pre-trained word embeddings, WordNet for general text, and MetaMap for biomedical text. We explore two different ways of incorporating word-level concept annotations, and show that leveraging concept annotations can boost model performance and reduce the need for large amounts of labelled data. Experiments on various data sets validate the effectiveness of the proposed method.
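The abstract names WordNet as one external source of word-level concept annotations. As a rough illustration only (not the authors' pipeline), the following Python sketch attaches coarse WordNet concepts to tokens via NLTK; the helper name `concept_annotations` and the choice of lexicographer categories as the concept inventory are assumptions for this sketch.

    # Illustrative sketch: one plausible way to obtain word-level concept
    # annotations from WordNet, one of the external knowledge sources the
    # abstract names. Not the paper's actual method; `concept_annotations`
    # is a hypothetical helper name.
    from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

    def concept_annotations(tokens):
        """Map each token to the lexicographer categories (coarse WordNet
        concept labels such as 'noun.body') of its synsets."""
        return {tok: sorted({s.lexname() for s in wn.synsets(tok)})
                for tok in tokens}

    print(concept_annotations(["heart", "attack"]))
    # -> e.g. {'attack': ['noun.act', ...], 'heart': ['noun.body', ...]}

`wn.synsets` and `Synset.lexname` are standard NLTK calls; lexicographer categories give a small, fixed concept vocabulary, which keeps such annotations manageable as extra model input.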
Title of host publication: Australasian Language Technology Association Workshop, ALTA 2017
Subtitle of host publication: 6–8 December 2017, Brisbane, Australia, Proceedings
Editors: Jojo Sze-Meng Wong, Gholamreza Haffari
Place of publication: Melbourne, Victoria, Australia
Publisher: Australian Language Technology Association (ALTA)
Number of pages: 9
Publication status: Published - 2017
Event: Australasian Language Technology Association Workshop 2017 - Queensland University of Technology, Brisbane, Australia
Duration: 6 Dec 2017 → 8 Dec 2017
Conference number: 15th
Conference: Australasian Language Technology Association Workshop 2017
Abbreviated title: ALTAW 2017
Period: 6/12/17 → 8/12/17
Other: In 2017, the Australasian Language Technology Association Workshop (ALTA) will be held at the Queensland University of Technology in Brisbane.
The workshop is the key forum in Australia and New Zealand for sharing research results in natural language processing and computational linguistics, with featured keynote speakers, as well as presentations and posters from student, industry, and early-career researchers.
ALTA 2017 is co-located with ADCS 2017 (the Australasian Document Computing Symposium).