Improving a Sequence-To-Sequence NLP Model using a Reinforcement Learning Policy Algorithm
Authors
Jabri Ismail, Aboulbichr Ahmed and El ouaazizi Aziza, Sidi Mohamed Ben Abdellah University, Morocco
Abstract
Current neural network models of dialogue generation (chatbots) show great promise for generating responses for conversational agents. However, they are short-sighted: they predict utterances one at a time while disregarding their impact on future outcomes. Modelling the future direction of a dialogue is critical for generating coherent, interesting dialogues, a need that led traditional NLP dialogue models to draw on reinforcement learning.
In this article, we explain how to combine these objectives by applying deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, using policy gradient methods to reward sequences that exhibit three useful conversational properties: informativity, coherence, and ease of answering (related to the forward-looking function). We evaluate our model on diversity, dialogue length, and quality as judged by humans. Evaluations in dialogue simulation demonstrate that the proposed model generates more interactive responses and fosters a more sustained, successful conversation.
This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
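As a concrete illustration of the training procedure described above, the following is a minimal sketch of a REINFORCE-style policy-gradient update for a seq2seq dialogue policy, written in PyTorch. The `Seq2SeqPolicy` module, the model sizes, and the `toy_reward` function (a stand-in for the paper's three-part conversational reward) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: policy-gradient (REINFORCE) update for a seq2seq dialogue
# policy. All names, sizes, and the toy reward are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, HIDDEN, EOS = 1000, 64, 0

class Seq2SeqPolicy(nn.Module):
    """Tiny GRU encoder-decoder; the decoder acts as a stochastic policy."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, src, max_len=10):
        _, h = self.encoder(self.embed(src))   # encode the previous turn
        h = h.squeeze(0)
        tok = torch.full((src.size(0),), EOS, dtype=torch.long)  # start token
        log_probs, tokens = [], []
        for _ in range(max_len):
            h = self.decoder(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()                # sample the next token
            log_probs.append(dist.log_prob(tok))
            tokens.append(tok)
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

def toy_reward(response):
    """Stand-in for the composite reward (informativity, coherence, ease of
    answering): here, simply reward non-repetitive token choices."""
    uniq = torch.tensor([len(set(r.tolist())) for r in response],
                        dtype=torch.float)
    return uniq / response.size(1)

policy = Seq2SeqPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One REINFORCE step on a dummy batch of "previous dialogue turns".
src = torch.randint(1, VOCAB, (4, 8))
response, log_probs = policy(src)
reward = toy_reward(response)                   # shape: (batch,)
loss = -(log_probs.sum(1) * reward).mean()      # policy-gradient objective
opt.zero_grad(); loss.backward(); opt.step()
```

In the full method, the reward would combine all three conversational properties and be computed from simulated exchanges between two virtual agents, rather than from a single sampled response as in this toy example.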
Keywords
Reinforcement learning, SEQ2SEQ model, Chatbot, NLP, Conversational agent.