7 CS Papers Accepted to NAACL 2021


Seven research papers from the Natural Language Processing and Speech groups have been accepted to the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2021).

Adversarial Learning for Zero-Shot Stance Detection on Social Media
Emily Allaway Columbia University, Malavika Srikanth Columbia University, and Kathleen McKeown Columbia University

Abstract
Stance detection on social media can help to identify and understand slanted news or commentary in everyday life. In this work, we propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to generalize across topics. Our model achieves state-of-the-art performance on a number of unseen test topics with minimal computational costs. In addition, we extend zero-shot stance detection to new topics, highlighting future directions for zero-shot transfer.
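
The core idea, adversarial training for topic generalization, is commonly realized with a gradient reversal layer: a topic discriminator is trained on the shared representation while reversed gradients push the encoder toward topic-invariant features. The sketch below is a minimal illustration of that mechanism, not the authors' exact architecture; the dimensions, label counts, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of topic-adversarial training with a gradient reversal
# layer (GRL). Architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialStanceModel(nn.Module):
    def __init__(self, hidden=768, n_stances=3, n_topics=100, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.stance_head = nn.Linear(hidden, n_stances)  # task classifier
        self.topic_head = nn.Linear(hidden, n_topics)    # adversary

    def forward(self, text_repr):
        stance_logits = self.stance_head(text_repr)
        # Reversed gradients make the encoder unlearn topic-specific cues.
        reversed_repr = GradientReversal.apply(text_repr, self.lambd)
        topic_logits = self.topic_head(reversed_repr)
        return stance_logits, topic_logits

# Usage: sum the stance loss and the (gradient-reversed) topic loss.
model = AdversarialStanceModel()
text_repr = torch.randn(8, 768)  # stand-in for an encoder's output
stance_logits, topic_logits = model(text_repr)
loss = nn.CrossEntropyLoss()(stance_logits, torch.randint(0, 3, (8,))) \
     + nn.CrossEntropyLoss()(topic_logits, torch.randint(0, 100, (8,)))
loss.backward()
```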


Supporting Clustering with Contrastive Learning
Dejiao Zhang AWS AI, Feng Nan AWS AI, Xiaokai Wei AWS AI, Shang-Wen Li AWS AI, Henghui Zhu AWS AI, Kathleen McKeown Columbia University, Ramesh Nallapati AWS AI, Andrew O. Arnold AWS AI, and Bing Xiang AWS AI

Abstract
Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space. However, different categories often overlap with each other in the representation space at the beginning of the learning process, which poses a significant challenge for distance-based clustering in achieving good separation between different categories. To this end, we propose Supporting Clustering with Contrastive Learning (SCCL), a novel framework that leverages contrastive learning to promote better separation. We assess the performance of SCCL on short text clustering and show that SCCL significantly advances the state-of-the-art results on most benchmark datasets, with a 3%–11% improvement on Accuracy and a 4%–15% improvement on Normalized Mutual Information. Furthermore, our quantitative analysis demonstrates the effectiveness of SCCL in leveraging the strengths of both bottom-up instance discrimination and top-down clustering to achieve better intra-cluster and inter-cluster distances when evaluated with the ground-truth cluster labels.
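
The instance-discrimination side of this kind of framework is typically an NT-Xent (SimCLR-style) contrastive loss over two augmented views of each text: views of the same text are pulled together, views of different texts pushed apart. The sketch below shows that standard loss only; SCCL jointly optimizes it with a clustering head, which is omitted here, and the batch size, embedding size, and temperature are illustrative.

```python
# Minimal sketch of the instance-wise NT-Xent contrastive loss.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of two augmented views of the same texts."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N x d, unit norm
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    # The positive for view i is the other view of the same text: i +/- n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(16, 128), torch.randn(16, 128)  # two views of 16 texts
loss = nt_xent(z1, z2)
```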


Emotion-Infused Models for Explainable Psychological Stress Detection
Elsbeth Turcan Columbia University, Smaranda Muresan Columbia University, and Kathleen McKeown Columbia University

Abstract
The problem of detecting psychological stress in online posts, and more broadly, of detecting people in distress or in need of help, is a sensitive application for which the ability to interpret models is vital. Here, we present work exploring the use of a semantically related task, emotion detection, for equally competent but more explainable and human-like psychological stress detection as compared to a black-box model. In particular, we explore the use of multi-task learning as well as emotion-based language model fine-tuning. With our emotion-infused models, we see comparable results to state-of-the-art BERT. Our analysis of the words used for prediction shows that our emotion-infused models mirror the psychological components of stress.
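
The multi-task variant can be pictured as a shared encoder feeding two classification heads whose losses are summed. The sketch below assumes a BERT-style 768-dimensional representation, a binary stress label, and an illustrative emotion label set and loss weight; it is a simplified picture, not the authors' exact configuration.

```python
# Minimal multi-task sketch: shared representation, stress head + emotion head.
import torch
import torch.nn as nn

class EmotionInfusedStressModel(nn.Module):
    def __init__(self, hidden=768, n_emotions=8):
        super().__init__()
        self.stress_head = nn.Linear(hidden, 2)         # stressed / not stressed
        self.emotion_head = nn.Linear(hidden, n_emotions)

    def forward(self, shared_repr):
        return self.stress_head(shared_repr), self.emotion_head(shared_repr)

model = EmotionInfusedStressModel()
shared_repr = torch.randn(4, 768)  # stand-in for a BERT-style encoder output
stress_logits, emotion_logits = model(shared_repr)
# Weighted sum of task losses; the 0.5 weight is an illustrative choice.
loss = nn.CrossEntropyLoss()(stress_logits, torch.randint(0, 2, (4,))) \
     + 0.5 * nn.CrossEntropyLoss()(emotion_logits, torch.randint(0, 8, (4,)))
```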


ENTRUST: Argument Reframing with Language Models and Entailment
Tuhin Chakrabarty Columbia University, Christopher Hidey Columbia University, and Smaranda Muresan Columbia University

Abstract
Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman, 1983). Differences in lexical framing, the focus of our work, can have large effects on people's opinions and beliefs. To make progress towards reframing arguments for positive effects, we create a dataset and method for this task. We use a lexical resource for connotations to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a post-decoding entailment component (same denotation). Our results show that our method is effective compared to strong baselines along the dimensions of fluency, meaning, and trustworthiness/reduction of fear.
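
The post-decoding entailment component can be sketched as a filter over generated candidates: a reframing is kept only if it preserves the original meaning. In the sketch below, generate_candidates() and nli() are hypothetical stand-ins for a finetuned generator and an NLI model; checking entailment in both directions is one way to approximate "same denotation".

```python
# Minimal sketch of a post-decoding entailment filter; both helpers below
# are hypothetical placeholders, not the paper's actual components.
def nli(premise: str, hypothesis: str) -> str:
    """Placeholder NLI call; a real system would query an MNLI-style model."""
    return "entailment"

def generate_candidates(argument: str, n: int = 5) -> list[str]:
    """Placeholder for sampling n reframings from a controllable generator."""
    return [argument] * n

def reframe(argument: str) -> list[str]:
    # Keep candidates that entail, and are entailed by, the original argument.
    return [c for c in generate_candidates(argument)
            if nli(argument, c) == "entailment"
            and nli(c, argument) == "entailment"]

print(reframe("The new policy is a death tax."))
```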


MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding
Tuhin Chakrabarty Columbia University, Xurui Zhang Tsinghua University, Smaranda Muresan Columbia University, and Nanyun Peng University of California, Los Angeles

Abstract
Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterparts using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence-to-sequence model finetuned on our parallel data to generate high-quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
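
One simple way to picture discriminator-guided decoding is as reranking: candidate generations are rescored by a blend of fluency (a language-model score) and metaphoricity (a discriminator probability). The sketch below illustrates only that reranking idea; both scoring functions and the blend weight are hypothetical placeholders, and the paper's discriminator guides decoding rather than just rescoring final outputs.

```python
# Minimal sketch of discriminator-based reranking of decoder candidates.
def score_lm(sentence: str) -> float:
    """Placeholder log-likelihood; a real system would use the seq2seq model."""
    return -len(sentence)

def metaphor_prob(sentence: str) -> float:
    """Placeholder discriminator probability that the sentence is metaphoric."""
    return 0.5

def rerank(candidates: list[str], alpha: float = 0.7) -> str:
    # Blend fluency and metaphoricity; alpha is an illustrative trade-off.
    return max(candidates,
               key=lambda c: (1 - alpha) * score_lm(c) + alpha * metaphor_prob(c))

print(rerank(["the waves danced", "the water moved quickly"]))
```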


Leveraging Slot Descriptions for Zero-Shot Cross-Domain Dialogue State Tracking
Zhaojiang Lin The Hong Kong University of Science and Technology, Bing Liu Facebook, Seungwhan Moon Facebook, Paul Crook Facebook, Zhenpeng Zhou Facebook, Zhiguang Wang Facebook, Zhou Yu Columbia University, Andrea Madotto The Hong Kong University of Science and Technology, Eunjoon Cho Facebook, and Rajen Subba Facebook

Abstract
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot description enhanced generative approach for zero-shot cross-domain DST. Specifically, our model first encodes dialogue context and slots with a pre-trained self-attentive encoder and generates slot values in an auto-regressive manner. In addition, we incorporate Slot Type Informed Descriptions that capture the shared information across slots to facilitate cross-domain knowledge transfer. Experimental results on the MultiWOZ dataset show that our proposed method significantly improves existing state-of-the-art results in the zero-shot cross-domain setting.
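
In a generative formulation like this, predicting a slot value reduces to conditioning a seq2seq model on the dialogue context plus a natural-language slot description and decoding the value. The sketch below uses an off-the-shelf t5-small with an illustrative prompt format, not the paper's template; without the paper's finetuning, the decoded text is not expected to be a meaningful slot value.

```python
# Minimal sketch of description-conditioned generative slot-value decoding.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

dialogue = "user: I need a cheap hotel in the north. system: Sure, any stars?"
slot_description = "price range of the hotel"  # slot-type informed description

# Illustrative prompt: dialogue context followed by the slot description.
inputs = tokenizer(f"{dialogue} {slot_description}", return_tensors="pt")
value_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(value_ids[0], skip_special_tokens=True))
```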


Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems
Derek Chen ASAPP, Howard Chen ASAPP, Yi Yang ASAPP, Alexander Lin ASAPP, and Zhou Yu Columbia University

Abstract
Existing goal-oriented dialogue datasets focus mainly on identifying slots and values. However, customer support interactions in reality often involve agents following multi-step procedures derived from explicitly defined company policies as well. To study customer service dialogue systems in more realistic settings, we introduce the Action-Based Conversations Dataset (ABCD), a fully-labeled dataset with over 10K human-to-human dialogues containing 55 distinct user intents requiring unique sequences of actions constrained by policies to achieve task success. We propose two additional dialogue tasks, Action State Tracking and Cascading Dialogue Success, and establish a series of baselines involving large-scale, pre-trained language models on this dataset. Empirical results demonstrate that while more sophisticated networks outperform simpler models, a considerable gap (50.8% absolute accuracy) still exists to reach human-level performance on ABCD.
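
The Cascading Dialogue Success idea can be illustrated with a toy scorer: a predicted action sequence earns credit for each turn until its first mistake, so early errors are penalized more heavily than late ones. The function below is a simplified assumption about how such a metric behaves, not the official ABCD evaluation script, and the action names are invented for illustration.

```python
# Toy sketch of cascading scoring: credit accrues until the first error.
def cascading_success(predicted: list[str], gold: list[str]) -> float:
    correct = 0
    for p, g in zip(predicted, gold):
        if p != g:
            break
        correct += 1
    return correct / len(gold) if gold else 0.0

print(cascading_success(
    ["pull-up-account", "verify-identity", "offer-refund"],
    ["pull-up-account", "verify-identity", "update-order"]))  # ~0.67
```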


Self-Training with Weak Supervision
Giannis Karamanolakis Columbia University, Subhabrata Mukherjee Microsoft Research, Guoqing Zheng Microsoft Research, and Ahmed Hassan Awadallah Microsoft Research

Abstract
State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or not available for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to automatically generate weakly labeled training data. However, learning with weak rules is challenging due to their inherent heuristic and noisy nature. An additional challenge is rule coverage and overlap, where prior work on weak supervision only considers instances that are covered by weak rules, thus leaving valuable unlabeled data behind. In this work, we develop a weak supervision framework (ASTRA) that leverages all the available data for a given task. To this end, we leverage task-specific unlabeled data through self-training with a model (student) that considers contextualized representations and predicts pseudo-labels for instances that may not be covered by weak rules. We further develop a rule attention network (teacher) that learns how to aggregate student pseudo-labels with weak rule labels, conditioned on their fidelity and the underlying context of an instance. Finally, we construct a semi-supervised learning objective for end-to-end training with unlabeled data, domain-specific rules, and a small amount of labeled data. Extensive experiments on six benchmark datasets for text classification demonstrate the effectiveness of our approach with significant improvements over state-of-the-art baselines.
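
The teacher's aggregation step can be pictured as a softmax-weighted vote over the weak-rule labels and the student's pseudo-label, with weights conditioned on the instance representation. The sketch below is a simplified rendering of that idea; the shapes, the attention parameterization, and the zero-vector encoding of abstaining rules are illustrative assumptions rather than the ASTRA implementation.

```python
# Minimal sketch of attention-weighted aggregation of rule and student votes.
import torch
import torch.nn as nn

class RuleAttentionTeacher(nn.Module):
    def __init__(self, hidden=768, n_rules=5, n_classes=2):
        super().__init__()
        # One attention score per source (each rule + the student),
        # conditioned on the instance's contextual representation.
        self.attn = nn.Linear(hidden, n_rules + 1)

    def forward(self, context, rule_votes, student_probs):
        # rule_votes: batch x n_rules x n_classes, one-hot per firing rule
        #             (all zeros where a rule abstains -- a simplification).
        # student_probs: batch x n_classes soft pseudo-labels from the student.
        votes = torch.cat([rule_votes, student_probs.unsqueeze(1)], dim=1)
        weights = torch.softmax(self.attn(context), dim=-1)  # batch x (rules+1)
        return (weights.unsqueeze(-1) * votes).sum(dim=1)    # soft labels

teacher = RuleAttentionTeacher()
context = torch.randn(4, 768)                    # stand-in encoder output
rule_votes = torch.zeros(4, 5, 2)
rule_votes[:, 0, 1] = 1.0                        # one rule fires for class 1
student_probs = torch.softmax(torch.randn(4, 2), dim=-1)
soft_labels = teacher(context, rule_votes, student_probs)  # 4 x 2
```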
