Disentangled Retrieval and Reasoning for Implicit Question Answering

IEEE Trans Neural Netw Learn Syst. 2022 Nov 15:PP. doi: 10.1109/TNNLS.2022.3220933. Online ahead of print.

Abstract

To date, most existing open-domain question answering (QA) methods focus on explicit questions, where the reasoning steps are mentioned explicitly in the question. In this article, we study implicit QA, where the reasoning steps are not evident in the question. Implicit QA is challenging in two respects. First, evidence retrieval is difficult because there is little overlap between a question and its required evidence. Second, answer inference is difficult because the reasoning strategy is latent in the question. To tackle implicit QA, we propose a systematic solution, denoted DisentangledQA, which disentangles the topic, attribute, and reasoning strategy from the implicit question to guide retrieval and reasoning. Specifically, we disentangle the topic and attribute information from the implicit question to guide evidence retrieval. For answer inference, we propose a disentangled reasoning model that predicts the answer from the retrieved evidence as well as a latent representation of the reasoning strategy. The disentangled framework lets each module focus on a specific latent element of the question and thus leads to effective representation learning for each of them. Experiments on the StrategyQA dataset demonstrate the effectiveness of our method in answering implicit questions, improving performance in evidence retrieval and answer inference by 31.7% and 4.5%, respectively, and achieving the best performance on the official leaderboard. In addition, our method achieves the best performance on the challenging EntityQuestions dataset, indicating its effectiveness on general open-domain QA tasks.
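
For a concrete picture of the architecture described above, the sketch below shows one way such a disentangled pipeline could be structured: a question representation is projected into separate topic, attribute, and strategy factors, and a reasoner combines the strategy factor with pooled retrieved evidence to predict a binary answer (as in StrategyQA). This is only an illustrative PyTorch sketch; the module names, dimensions, and classification head are assumptions for exposition, not the authors' implementation.

# Minimal structural sketch of a disentangled implicit-QA pipeline.
# All names and dimensions are illustrative, not the authors' code.
import torch
import torch.nn as nn


class DisentangledQuestionEncoder(nn.Module):
    """Project a question representation into three latent factors."""

    def __init__(self, hidden_dim: int = 768, latent_dim: int = 256):
        super().__init__()
        self.topic_head = nn.Linear(hidden_dim, latent_dim)      # guides evidence retrieval
        self.attribute_head = nn.Linear(hidden_dim, latent_dim)  # guides evidence retrieval
        self.strategy_head = nn.Linear(hidden_dim, latent_dim)   # guides answer reasoning

    def forward(self, question_repr: torch.Tensor):
        topic = self.topic_head(question_repr)
        attribute = self.attribute_head(question_repr)
        strategy = self.strategy_head(question_repr)
        return topic, attribute, strategy


class DisentangledReasoner(nn.Module):
    """Predict a yes/no answer from retrieved evidence plus the latent strategy."""

    def __init__(self, latent_dim: int = 256, evidence_dim: int = 768):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(latent_dim + evidence_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2),  # binary answer, matching StrategyQA's yes/no format
        )

    def forward(self, strategy: torch.Tensor, evidence_repr: torch.Tensor):
        return self.classifier(torch.cat([strategy, evidence_repr], dim=-1))


if __name__ == "__main__":
    encoder = DisentangledQuestionEncoder()
    reasoner = DisentangledReasoner()
    question_repr = torch.randn(1, 768)  # stand-in for a pretrained question encoder output
    evidence_repr = torch.randn(1, 768)  # stand-in for pooled retrieved-evidence representation
    topic, attribute, strategy = encoder(question_repr)
    logits = reasoner(strategy, evidence_repr)
    print(logits.shape)  # torch.Size([1, 2])

In a full system, the topic and attribute factors would parameterize the retriever's query, while only the strategy factor feeds the reasoner, which is what allows each module to specialize on its own latent element of the question.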