Abstract
A task-oriented dialogue system (TOD) is one kind of artificial intelligence (AI) application. The response generation module is a key component of a TOD system, replying to users' questions and concerns in natural language. In the past few years, work on response generation has attracted increasing research attention and made much progress. However, existing works ignore the fact that not every turn of the dialogue history contributes to response generation, and they give little consideration to assigning different weights to the utterances in a dialogue history. In this paper, we propose a hierarchical memory network mechanism with two steps to filter out unnecessary information from the dialogue history. First, an utterance-level memory network assigns a different weight to each utterance (coarse-grained). Second, a token-level memory network assigns higher weights to keywords based on the utterance-level output (fine-grained). Furthermore, the output of the token-level memory network is employed to query the knowledge base (KB) and capture dialogue-related information. In the decoding stage, a gated mechanism generates the response word by word from the dialogue history, the vocabulary, or the KB. Experiments show that the proposed model achieves superior results compared with state-of-the-art models on several public datasets. Further analysis demonstrates the effectiveness of the proposed method and the robustness of the model when the training set is incomplete.
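To make the two-step filtering concrete, the sketch below illustrates how an utterance-level weighting can gate a token-level weighting whose output then serves as the KB query. It is a minimal illustration only, not the paper's implementation: the mean-pooled utterance encodings, dot-product scoring, and names such as `utt_memory` and `kb_query` are assumptions introduced here for clarity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory):
    """Dot-product attention: weights over memory rows and their weighted sum."""
    weights = softmax(memory @ query)    # (n,)
    context = weights @ memory           # (d,)
    return weights, context

# Toy dialogue history: 3 utterances, each a matrix of token embeddings (dim 4).
rng = np.random.default_rng(0)
d = 4
history = [rng.normal(size=(n_tokens, d)) for n_tokens in (5, 3, 6)]
query = rng.normal(size=d)               # e.g., an encoding of the current user turn

# Step 1 (coarse-grained): utterance-level memory, one vector per utterance
# (mean pooling stands in for a learned utterance encoder).
utt_memory = np.stack([u.mean(axis=0) for u in history])
utt_weights, _ = attend(query, utt_memory)

# Step 2 (fine-grained): token-level memory. Token scores are rescaled by the
# weight of the utterance they belong to, so tokens from low-weight turns
# contribute little to the final summary.
token_memory = np.concatenate(history)                       # (total_tokens, d)
token_utt_weight = np.concatenate(
    [np.full(len(u), w) for u, w in zip(history, utt_weights)]
)
token_weights = softmax((token_memory @ query) * token_utt_weight)
kb_query = token_weights @ token_memory   # fine-grained summary used to query the KB

print("utterance weights:", np.round(utt_weights, 3))
print("KB query vector:  ", np.round(kb_query, 3))
```

In the full model, the vector produced in the second step would be matched against KB entries, and a gate during decoding would choose whether each output word is copied from the dialogue history, drawn from the vocabulary, or retrieved from the KB.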
| Original language | English |
|---|---|
| Pages (from-to) | 1831-1858 |
| Number of pages | 28 |
| Journal | Computational Intelligence |
| Volume | 38 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 26 Jul 2022 |
Keywords
- task-oriented dialogue systems
- memory networks
- deep learning
- natural language processing (NLP)
- natural language generation