Preserving distributional information in dialogue act classification

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review


    This paper introduces a novel training/decoding strategy for sequence labeling.
    Instead of greedily choosing a label at each time step and feeding it into the
    next prediction, we retain the full probability distribution over the current label
    and pass this distribution to the next prediction. This approach mitigates label bias and error propagation in sequence learning/decoding. Our experiments on dialogue act classification demonstrate its effectiveness: even though our underlying neural network model is relatively simple, it outperforms more complex neural models, achieving state-of-the-art results on
    the MapTask and Switchboard corpora.
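    The core idea can be sketched in a toy left-to-right decoder: instead of feeding a one-hot vector of the argmax label into the next step, feed the full softmax distribution. The weights, dimensions, and scoring function below are hypothetical illustrations, not the paper's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    T, n_labels, n_feat = 5, 3, 4               # sequence length, label set size, feature dim
    X = rng.normal(size=(T, n_feat))            # toy input features, one vector per step
    W_x = rng.normal(size=(n_feat, n_labels))   # hypothetical feature weights
    W_y = rng.normal(size=(n_labels, n_labels)) # hypothetical previous-label weights

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def decode(X, soft=True):
        """Left-to-right decoding. With soft=True, the full label distribution
        is passed to the next step; otherwise a one-hot of the argmax is."""
        prev = np.zeros(n_labels)               # "no previous label" start state
        labels = []
        for x in X:
            p = softmax(x @ W_x + prev @ W_y)   # score depends on input and previous label info
            labels.append(int(p.argmax()))
            prev = p if soft else np.eye(n_labels)[p.argmax()]
        return labels

    greedy_labels = decode(X, soft=False)       # conventional greedy decoding
    soft_labels = decode(X, soft=True)          # distribution-preserving decoding
    ```

    Passing the distribution keeps information about competing labels alive: a step where the model was uncertain contributes a mixed vector downstream, rather than committing early to a possibly wrong hard decision.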
    Original language: English
    Title of host publication: The Conference on Empirical Methods in Natural Language Processing
    Subtitle of host publication: Proceedings of the Conference - September 9-11, 2017, Copenhagen, Denmark
    Editors: Rebecca Hwa, Sebastian Riedel
    Place of Publication: Stroudsburg PA USA
    Publisher: Association for Computational Linguistics (ACL)
    Number of pages: 6
    ISBN (Print): 9781945626838
    Publication status: Published - 2017
    Event: Empirical Methods in Natural Language Processing 2017 - Copenhagen, Denmark
    Duration: 9 Sep 2017 - 11 Sep 2017


    Conference: Empirical Methods in Natural Language Processing 2017
    Abbreviated title: EMNLP 2017
