A hierarchical neural model for learning sequences of dialogue acts

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

    Abstract

    We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.
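A minimal pure-Python sketch of the pipeline the abstract describes: a token-level RNN whose hidden states are attention-pooled into an utterance vector, which feeds a dialogue-level RNN that emits one DA label distribution per utterance. All dimensions, parameter names, and the choice of a vanilla RNN cell are illustrative assumptions, not the authors' implementation:

```python
import math
import random

random.seed(0)

# Hypothetical sizes, chosen small for illustration (not from the paper).
VOCAB, EMB, HID, N_LABELS = 20, 8, 8, 4

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

# Randomly initialised parameters (training is omitted in this sketch).
E = rand_mat(VOCAB, EMB)                            # token embeddings
Wx, Wh = rand_mat(HID, EMB), rand_mat(HID, HID)     # utterance-level RNN cell
q = [random.uniform(-0.1, 0.1) for _ in range(HID)] # attention query vector
Ux, Uh = rand_mat(HID, HID), rand_mat(HID, HID)     # dialogue-level RNN cell
Wo = rand_mat(N_LABELS, HID)                        # DA classifier

def encode_utterance(token_ids):
    """Run the token-level RNN, then attention-pool its hidden states."""
    h = [0.0] * HID
    states = []
    for t in token_ids:
        h = tanh_vec(add(matvec(Wx, E[t]), matvec(Wh, h)))
        states.append(h)
    # Attention: weight each token state by its dot-product with query q.
    weights = softmax([sum(qi * si for qi, si in zip(q, s)) for s in states])
    return [sum(w * s[i] for w, s in zip(weights, states)) for i in range(HID)]

def label_dialogue(dialogue):
    """Dialogue-level RNN over utterance vectors: one DA distribution each."""
    g = [0.0] * HID
    outputs = []
    for utterance in dialogue:
        u = encode_utterance(utterance)
        g = tanh_vec(add(matvec(Ux, u), matvec(Uh, g)))
        outputs.append(softmax(matvec(Wo, g)))
    return outputs

dialogue = [[1, 5, 3], [2, 2, 7, 4], [9]]  # three toy utterances (token ids)
preds = label_dialogue(dialogue)           # one DA distribution per utterance
```

The nesting mirrors the paper's framing: the inner RNN captures within-utterance dependencies, the outer RNN captures cross-utterance context, and attention selects salient tokens before the utterance vector is passed upward.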

    Original language: English
    Title of host publication: 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)
    Subtitle of host publication: Proceedings of Conference, Volume 1: Long Papers, April 3-7, 2017, Valencia, Spain
    Editors: Phil Blunsom, Alexander Koller
    Place of publication: Stroudsburg, PA
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 428-437
    Number of pages: 10
    Volume: 1
    ISBN (Electronic): 9781510838604
    ISBN (Print): 9781945626340
    Publication status: Published - 2017
    Event: 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017) - Valencia, Spain
    Duration: 3 Apr 2017 - 7 Apr 2017
    Conference number: 15th

    Conference

    Conference: 15th Conference of the European Chapter of the Association for Computational Linguistics 2017
    Abbreviated title: EACL 2017
    Country: Spain
    City: Valencia
    Period: 3/04/17 - 7/04/17

    Cite this

    Tran, H. Q., Zukerman, I., & Haffari, G. (2017). A hierarchical neural model for learning sequences of dialogue acts. In P. Blunsom, & A. Koller (Eds.), 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017): Proceedings of Conference, Volume 1: Long Papers, April 3-7, 2017, Valencia, Spain (Vol. 1, pp. 428-437). Stroudsburg, PA: Association for Computational Linguistics (ACL).

    Scopus record: http://www.scopus.com/inward/record.url?scp=85021652736&partnerID=8YFLogxK