Research from the NLP & Speech Group Accepted to ACL 2022

CS researchers presented their work at the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022).

Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), He He (New York University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)

Abstract
Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness in the model outputs, since one naive way to improve faithfulness is to make summarization models more extractive. In this work, we present a framework for evaluating the effective faithfulness of summarization systems by generating a faithfulness-abstractiveness trade-off curve that serves as a control at different operating points on the abstractiveness spectrum. We then show that the baseline system, as well as recently proposed methods for improving faithfulness, fail to consistently improve over the control at the same level of abstractiveness.

Finally, we learn a selector to identify the most faithful and abstractive summary for a given document, and show that this system attains higher faithfulness scores in human evaluations while being more abstractive than the baseline system on two datasets. Moreover, our system achieves a better faithfulness-abstractiveness trade-off than the control at the same level of abstractiveness.
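
To make the selection step concrete, here is a minimal, illustrative sketch of choosing among candidate summaries under an abstractiveness floor. The novel-unigram proxy for abstractiveness, the faithfulness_score stand-in, and the fixed threshold are all assumptions for illustration; the paper learns a selector rather than applying a hand-set rule.

```python
# Illustrative sketch only: pick the most faithful candidate summary that is
# still sufficiently abstractive. The paper trains a selector; this fixed
# rule and the novel-unigram abstractiveness proxy are assumptions.
def abstractiveness(source: str, summary: str) -> float:
    """Fraction of summary tokens that never appear in the source."""
    src_tokens = set(source.lower().split())
    sum_tokens = summary.lower().split()
    if not sum_tokens:
        return 0.0
    return sum(t not in src_tokens for t in sum_tokens) / len(sum_tokens)

def select_summary(source, candidates, faithfulness_score, min_abstractiveness=0.2):
    """faithfulness_score(source, summary) -> float is a stand-in for any
    automatic faithfulness metric (e.g., an entailment-based scorer)."""
    eligible = [c for c in candidates
                if abstractiveness(source, c) >= min_abstractiveness]
    pool = eligible or candidates  # fall back if the floor filters everything
    return max(pool, key=lambda c: faithfulness_score(source, c))
```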

Towards Learning (Dis)-Similarity of Source Code from Program Contrasts
Yangruibo Ding (Columbia University), Luca Buratti (IBM Research), Saurabh Pujar (IBM Research), Alessandro Morari (IBM Research), Baishakhi Ray (Columbia University), Saikat Chakraborty (Columbia University)

Abstract
Understanding the functional (dis)-similarity of source code is significant for code modeling tasks such as software vulnerability detection and code clone detection. We present DISCO (DISsimilarity of COde), a novel self-supervised model focusing on identifying (dis)similar functionalities of source code. Unlike existing work, our approach does not require a huge amount of randomly collected data. Rather, we design structure-guided code transformation algorithms to generate synthetic code clones and inject real-world security bugs, augmenting the collected datasets in a targeted way.

We propose to pre-train the Transformer model with such automatically generated program contrasts to better identify similar code in the wild and differentiate vulnerable programs from benign ones. To better capture the structural features of source code, we propose a new cloze objective to encode the local tree-based context (e.g., parent or sibling nodes). We pre-train our model with a much smaller dataset, the size of which is only 5% of the state-of-the-art models' training datasets, to illustrate the effectiveness of our data augmentation and the pre-training approach. The evaluation shows that, even with much less data, DISCO can still outperform the state-of-the-art models in vulnerability and code clone detection tasks.
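
As a rough illustration of the contrastive idea, the sketch below pulls a function's embedding toward a semantics-preserving variant (a synthetic clone) and pushes it away from a behavior-changing variant (an injected bug). The encoder is elided and the InfoNCE-style loss is a generic stand-in, not DISCO's exact training objective.

```python
# Minimal sketch, assuming embeddings from some Transformer encoder over code.
# One (anchor, clone, buggy) triple per batch row; the clone is the positive.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negative, temperature=0.07):
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    pos_sim = (anchor * positive).sum(-1) / temperature  # similarity to clone
    neg_sim = (anchor * negative).sum(-1) / temperature  # similarity to bug
    logits = torch.stack([pos_sim, neg_sim], dim=-1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # clone = class 0
    return F.cross_entropy(logits, labels)

# Toy usage with random vectors standing in for encoder outputs.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
```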

Fantastic Questions and Where to Find Them: FairytaleQA — An Authentic Dataset for Narrative Comprehension
Ying Xu (University of California, Irvine), Dakuo Wang (IBM Research), Mo Yu (WeChat AI/Tencent), Daniel Ritchie (University of California, Irvine), Bingsheng Yao (Rensselaer Polytechnic Institute), Tongshuang Wu (University of Washington), Zheng Zhang (University of Notre Dame), Toby Jia-Jun Li (University of Notre Dame), Nora Bradford (University of California, Irvine), Branda Sun (University of California, Irvine), Tran Bao Hoang (University of California, Irvine), Yisi Sang (Syracuse University), Yufang Hou (IBM Research Ireland), Xiaojuan Ma (Hong Kong University of Science and Technology), Diyi Yang (Georgia Institute of Technology), Nanyun Peng (University of California, Los Angeles), Zhou Yu (Columbia University), Mark Warschauer (University of California, Irvine)

Abstract
Question answering (QA) is a fundamental means to facilitate the assessment and training of narrative comprehension skills for both machines and young children, yet there is a scarcity of high-quality QA datasets carefully designed to serve this purpose. In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements. Drawing on reading education research, we introduce FairytaleQA, a dataset focusing on the narrative comprehension of kindergarten to eighth-grade students.

Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 child-friendly stories, covering seven types of narrative elements or relations. Our dataset is valuable in two ways: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models' fine-grained learning skills. Second, the dataset supports the question generation (QG) task in the education domain. Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
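
To illustrate how the fine-grained annotation can be used, the sketch below groups QA outcomes by narrative element and reports per-category accuracy. The field names ("attribute", "ex_or_im") and example records here are hypothetical stand-ins and should be checked against the dataset's released schema.

```python
# Hypothetical sketch: per-narrative-element accuracy over FairytaleQA-style
# records. Field names and records are illustrative, not the released schema.
from collections import defaultdict

records = [
    {"question": "Why did the fox visit the well?", "attribute": "causal relationship",
     "ex_or_im": "implicit", "correct": True},
    {"question": "Where does the story take place?", "attribute": "setting",
     "ex_or_im": "explicit", "correct": False},
]

by_attribute = defaultdict(list)
for r in records:
    by_attribute[r["attribute"]].append(r["correct"])

for attribute, outcomes in sorted(by_attribute.items()):
    print(f"{attribute}: {sum(outcomes) / len(outcomes):.2f}")
```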

Effective Unsupervised Constrained Text Generation based on Perturbed Masking
Yingwen Fu (Guangdong University of Foreign Studies/NetEase Games AI Lab), Wenjie Ou (NetEase Games AI Lab), Zhou Yu (Columbia University), Yue Lin (NetEase Games AI Lab)

Abstract
Unsupervised constrained text generation aims to generate text under a given set of constraints without any supervised data. Current state-of-the-art methods stochastically sample edit positions and actions, which may cause unnecessary search steps.

In this paper, we propose PMCTG to improve effectiveness by searching for the best edit position and action at each step. Specifically, PMCTG extends the perturbed masking technique to effectively search for the most incongruent token to edit. It then introduces four multi-aspect scoring functions to select the edit action, further reducing search difficulty. Since PMCTG does not require supervised data, it can be applied to different generation tasks. We show that, under the unsupervised setting, PMCTG achieves new state-of-the-art results on two representative tasks, namely keywords-to-sentence generation and paraphrasing.
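
As a rough sketch of the edit-position search, the snippet below scores each token in a sentence by how unexpected a masked language model finds it, masking one position at a time. This negative log-probability scoring is a simplified stand-in for the paper's perturbed-masking formulation, not its exact method.

```python
# Minimal sketch: rank tokens by "incongruence" under a masked LM by masking
# each position and scoring the original token. A simplification of PMCTG's
# perturbed-masking search, not its exact formulation.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def token_incongruence(sentence: str):
    """Return (token, score) pairs, highest score (least expected) first."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    scores = []
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        token = tokenizer.convert_ids_to_tokens(input_ids[i].item())
        scores.append((token, -log_probs[input_ids[i]].item()))
    return sorted(scores, key=lambda x: x[1], reverse=True)

print(token_incongruence("The cat sat on the moon .")[:3])
```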