Preserving distributional information in dialogue act classification

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

    Abstract

    This paper introduces a novel training/decoding strategy for sequence labeling. Instead of greedily choosing a label at each time step and using it for the next prediction, we retain the probability distribution over the current label and pass this distribution to the next prediction. This approach allows us to avoid the effects of label bias and error propagation in sequence learning/decoding. Our experiments on dialogue act classification demonstrate the effectiveness of this approach. Even though our underlying neural network model is relatively simple, it outperforms more complex neural models, achieving state-of-the-art results on the MapTask and Switchboard corpora.
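
    The sketch below illustrates the core idea in PyTorch, under assumptions of mine rather than from the paper: at each step the tagger concatenates the utterance features with the full softmax distribution over the previous label instead of a one-hot argmax label. All names (SoftLabelTagger, utterance_feats, the layer sizes) are illustrative, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SoftLabelTagger(nn.Module):
        """Sequence labeller that feeds the previous label *distribution* forward."""

        def __init__(self, input_dim, hidden_dim, num_labels):
            super().__init__()
            self.num_labels = num_labels
            # Input at each step = utterance features + previous label distribution.
            self.rnn_cell = nn.GRUCell(input_dim + num_labels, hidden_dim)
            self.classifier = nn.Linear(hidden_dim, num_labels)

        def forward(self, utterance_feats):
            # utterance_feats: (seq_len, batch, input_dim), one vector per utterance.
            seq_len, batch, _ = utterance_feats.shape
            h = utterance_feats.new_zeros(batch, self.rnn_cell.hidden_size)
            # The first step sees a uniform distribution over dialogue-act labels.
            prev_dist = utterance_feats.new_full((batch, self.num_labels),
                                                 1.0 / self.num_labels)
            outputs = []
            for t in range(seq_len):
                # Pass the full softmax distribution forward instead of a
                # one-hot argmax label, so no early hard decision is made.
                x = torch.cat([utterance_feats[t], prev_dist], dim=-1)
                h = self.rnn_cell(x, h)
                logits = self.classifier(h)
                prev_dist = F.softmax(logits, dim=-1)
                outputs.append(logits)
            return torch.stack(outputs)  # (seq_len, batch, num_labels)

    # Illustrative usage on random features for a 10-utterance dialogue.
    model = SoftLabelTagger(input_dim=64, hidden_dim=128, num_labels=42)
    scores = model(torch.randn(10, 1, 64))
    print(scores.shape)  # torch.Size([10, 1, 42])

    Greedy decoding would instead embed argmax(prev_dist) at each step; retaining the distribution keeps the model's uncertainty available to later predictions, which is what the abstract credits for avoiding label bias and error propagation.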
    Original language: English
    Title of host publication: The Conference on Empirical Methods in Natural Language Processing
    Subtitle of host publication: Proceedings of the Conference - September 9-11, 2017, Copenhagen, Denmark
    Editors: Rebecca Hwa, Sebastian Riedel
    Place of publication: Stroudsburg, PA, USA
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 2151-2156
    Number of pages: 6
    ISBN (Print): 9781945626838
    Publication status: Published - 2017
    Event: Empirical Methods in Natural Language Processing 2017 - Copenhagen, Denmark
    Duration: 9 Sept 2017 - 11 Sept 2017
    URL: http://www.aclweb.org/anthology/D/D17/

    Conference

    Conference: Empirical Methods in Natural Language Processing 2017
    Abbreviated title: EMNLP 2017
    Country/Territory: Denmark
    City: Copenhagen
    Period: 9/09/17 - 11/09/17
