The annual conference of the North American Chapter of the Association for Computational Linguistics (NAACL) is the preeminent event in the field of natural language processing. CS researchers in Professor Julia Hirschberg’s group won a Best Paper award for a novel resource, SpatialNet, which provides a formal representation of how a language expresses spatial relations. Other accepted papers are detailed below.
Linguistic Analysis of Schizophrenia in Reddit Posts
Jonathan Zomick (Hofstra University), Sarah Ita Levitan (Columbia University), and Mark Serper (Hofstra University, Mount Sinai School of Medicine)
The paper was presented at the Sixth Annual Workshop on Computational Linguistics and Clinical Psychology, at NAACL.
The researchers identified and analyzed distinctive linguistic characteristics of Reddit posts written by users who report having been diagnosed with schizophrenia. The findings were interpreted in the context of established schizophrenia symptoms and compared with the results of previous research on schizophrenia and language use on social media platforms.
The results showed several differences in language use between users with schizophrenia and a control group. For example, users with schizophrenia used less punctuation in their Reddit posts, a pattern consistent with disorganized language use, a prominent and common symptom of the disorder.
Using these linguistic cues, the researchers trained a machine learning classifier to automatically identify Reddit users who self-identify as having schizophrenia.
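To make the setup concrete, here is a minimal sketch of such a classifier in scikit-learn. The feature set is only loosely inspired by the cues the paper reports (e.g., punctuation rate), and the data loading is assumed rather than taken from the authors' pipeline.

```python
# Hypothetical sketch, not the paper's pipeline: separate posts by
# self-identified schizophrenia users from controls with shallow linguistic cues.
import string

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def linguistic_features(post: str):
    """Surface cues; punctuation rate echoes the reduced-punctuation finding."""
    tokens = post.split()
    n_tokens = max(len(tokens), 1)
    n_punct = sum(post.count(c) for c in string.punctuation)
    first_person = sum(t.lower().strip(string.punctuation) in {"i", "me", "my"}
                       for t in tokens)
    return [n_punct / n_tokens, float(n_tokens), first_person / n_tokens]

def train_classifier(posts, labels):
    """posts: Reddit post strings; labels: 1 = self-reported diagnosis, 0 = control."""
    X = np.array([linguistic_features(p) for p in posts])
    y = np.array(labels)
    clf = LogisticRegression(max_iter=1000)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)
```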
“We hope that this work contributes toward the ultimate goal of identifying high-risk individuals,” said Sarah Ita Levitan, a postdoctoral research scientist with the Spoken Language Processing Group. “Early intervention and diagnosis are important to improve overall treatment outcomes for schizophrenia.”
Fixed That for You: Generating Contrastive Claims with Semantic Edits
Christopher Hidey and Kathy McKeown (Columbia University)
For many people, social media is a primary source of information and a key venue for opinionated discussion. Evaluating and analyzing these discussions requires an understanding of contrast, or differences in opinion.
As a step toward a better understanding of arguments, the researchers developed a method to automatically generate responses to internet comments that take a different stance. They created a corpus of over one million contrastive claims mined from the social media site Reddit. To obtain training data for their models, they extracted pairs of comments in which the reply contains the acronym FTFY (“fixed that for you”).
For example, in a discussion over who should be the next President of the United States, one participant might state “Bernie Sanders for president” and another might reply “Hillary Clinton for president. FTFY.”
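A minimal sketch of how such pairs might be mined, assuming comments are available as a dictionary keyed by comment ID; the data model and field names are hypothetical.

```python
# Illustrative mining of (original claim, contrastive claim) pairs from Reddit.
# The comment structure {'body': ..., 'parent_id': ...} is an assumption.
import re

FTFY = re.compile(r"\bFTFY\b", re.IGNORECASE)

def mine_ftfy_pairs(comments_by_id):
    pairs = []
    for comment in comments_by_id.values():
        parent_id = comment.get("parent_id")
        if FTFY.search(comment["body"]) and parent_id in comments_by_id:
            original = comments_by_id[parent_id]["body"]
            edited = FTFY.sub("", comment["body"]).strip()  # drop the marker
            pairs.append((original, edited))
    return pairs

comments = {
    "c1": {"body": "Bernie Sanders for president", "parent_id": None},
    "c2": {"body": "Hillary Clinton for president. FTFY", "parent_id": "c1"},
}
print(mine_ftfy_pairs(comments))
# [('Bernie Sanders for president', 'Hillary Clinton for president.')]
```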
A neural network model was trained on the pairs to edit the original claim and produce a new claim with a different view.
Claim: Bernie Sanders for president
New claim: Hillary Clinton for president.
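To make the training setup concrete, here is a minimal sketch of fine-tuning an off-the-shelf encoder-decoder on such claim pairs. BART is an assumption chosen for illustration; this resembles the generic sequence-to-sequence setup discussed below rather than the authors' custom model.

```python
# Illustrative only: a generic text-to-text setup for (claim, contrastive claim)
# pairs. The authors built a custom model; BART here is a stand-in.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

claim = "Bernie Sanders for president"
target = "Hillary Clinton for president."  # the contrastive edit to learn

inputs = tokenizer(claim, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # cross-entropy against the target
loss.backward()  # in practice, loop over the full corpus with an optimizer
```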
“One aspect of this problem that was surprising was that the standard ‘sequence-to-sequence with attention’ baseline performed poorly, often just copying the output or selecting generic responses,” said Christopher Hidey, a fourth-year PhD student. While generic response generation is a known problem in neural models, their custom model significantly outperformed this baseline on several metrics, including novelty and overlap with human-generated responses.
A Robust Abstractive System for Cross-Lingual Summarization
Jessica Ouyang, Boya Song, and Kathy McKeown (Columbia University)
The researchers developed an automatic summarization system that specializes in producing English summaries for documents originally written in three low-resource languages – Somali, Swahili, and Tagalog.
Little natural language processing work has been done on low-resource languages, and machine translation systems for those languages are of lower quality than those for high-resource languages like French or German.
As a result, the translations are often disfluent and contain errors that make them difficult for a human to understand, much less for a summarization system to process.
An example of a machine-translated document originally written in Swahili:

Mange Kimambi ‘I pray for the parliamentary seat for Kinondoni constituency for ticket of CCM. Not special seats’ Kinondoni without drugs is possible I pray for the parliamentary seat for Kinondoni constituency on the ticket of CCM. Yes, it’s not a special seats, Khuini Kinondoni, what will I do for Kinondoni? Tension is many I get but we must remember no good that is available easily. Kinondoni without drugs is possible. As a friend, fan or patriotism I urge you to grant your contribution to the situation and propert. You can use Western Union or money to go to Mange John Kimambi. Account of CRDB Bank is on blog. Reduce my profile in my blog understand why I have decided to vie for Kinondoni constituency. you will understand more.
A standard summarization system’s output on the document:

Mange Kimambi, who pray for parliamentary seat for Kinondoni constituency for ticket of CCM, is on blog, and not special seats’ Kinondoni without drugs.
The robust summarization system’s output on the document:

Mange Kimambi, who pray for parliamentary seat for Kinondoni constituency for ticket of CCM, comments on his plans to vie for ‘Kinondoni’ without drugs.
“We addressed this challenge by creating large collections of synthetic, errorful ‘translations’ that mimic the output of low-quality machine translations,” said Jessica Ouyang, a seventh-year PhD student. The researchers paired the problematic text with high-quality, human-written summaries. Their experiments showed that a neural network summarizer trained on this synthetic data was able to correct or elide translation errors and produce fluent English summaries. The system’s error-correcting ability extends even to Arabic, a language it had not seen before.
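A minimal sketch of the noise-injection idea, assuming a simple corruption model of dropped words and local reorderings; the paper's actual procedure for generating errorful text may differ.

```python
# Illustrative corruption of clean English text to mimic low-quality MT output.
# drop_p and swap_p are assumed knobs, not parameters from the paper.
import random

def corrupt(sentence: str, drop_p: float = 0.1, swap_p: float = 0.1) -> str:
    tokens = sentence.split()
    # Drop words to mimic omitted or untranslated content.
    tokens = [t for t in tokens if random.random() > drop_p]
    # Swap adjacent words to mimic word-order errors.
    i = 0
    while i < len(tokens) - 1:
        if random.random() < swap_p:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 1  # skip past the swapped pair
        i += 1
    return " ".join(tokens)

# Each corrupted document is paired with its original human-written summary
# to teach the summarizer to correct or elide such errors.
print(corrupt("I pray for the parliamentary seat for Kinondoni constituency."))
```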
IMHO Fine-Tuning Improves Claim Detection
Tuhin Chakrabarty, Christopher Hidey, and Kathy McKeown (Columbia University)
Argument mining, or argumentation mining, is a research area within natural language processing. It is applied in many different genres, including the qualitative assessment of social media content (e.g., Twitter, Facebook), where it provides a powerful tool for policy-makers and researchers in the social and political sciences, as well as legal documents, product reviews, scientific articles, online debates, newspaper articles, and dialogical domains. One of the main tasks of argument mining is claim detection.
Claims are the central component of an argument. Detecting claims across different domains or datasets can be challenging because of their varying conceptualizations. The researchers set out to alleviate this problem by fine-tuning a language model. They created a corpus of 5.5 million opinionated claims mined from Reddit; these claims are self-labeled by their authors with the internet acronyms IMO and IMHO (“in my opinion” and “in my humble opinion”).
By fine-tuning the language model on this IMHO dataset, they obtained a significant improvement in claim detection on existing benchmark datasets. Because these datasets span diverse domains such as social media and student essays, the improvement demonstrates the robustness of fine-tuning on this novel corpus.
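A minimal sketch of mining such self-labeled claims; stripping the IMO/IMHO marker so only the claim text remains is an assumption here, not a detail taken from the paper.

```python
# Illustrative extraction of self-labeled claims from Reddit comment bodies.
import re

MARKER = re.compile(r"\b(?:imo|imho)\b[,:]?\s*", re.IGNORECASE)

def extract_claims(comment_bodies):
    """Keep comments containing IMO/IMHO and strip the marker itself."""
    return [MARKER.sub("", body).strip()
            for body in comment_bodies if MARKER.search(body)]

print(extract_claims(["IMHO, student essays need stronger thesis statements.",
                      "No opinion markers in this one."]))
# -> ['student essays need stronger thesis statements.']
```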
The Answer is Language Model Fine-tuning
Tuhin Chakrabarty and Smaranda Muresan (Columbia University)
Community Question Answering forums such as Yahoo! Answers and Quora are popular, as they are an effective means for communities to share information around particular topics. But the information shared on these forums can be incorrect or misleading.
The paper presents the ColumbiaNLP submission for the SemEval-2019 Task 8: Fact-Checking in Community Question Answering Forums. The researchers show how fine-tuning a language model on a large unannotated corpus of old threads from the Qatar Living forum helps to classify question types (factual, opinion, socializing) and to judge the factuality of answers on the shared task labeled data from the same forum. Their system finished 4th and 2nd on Subtask A (question type classification) and B (answer factuality prediction), respectively, based on the official metric of accuracy.
Factual: The question is asking for factual information, which can be answered by checking various information sources, and it is not ambiguous. E.g., “What is Ooredoo customer service number?”

Opinion: The question asks for an opinion or advice, not for a fact. E.g., “Can anyone recommend a good Vet in Doha?”

Socializing: Not a real question, but intended for socializing or chatting. This can also mean expressing an opinion or sharing some information, without really asking anything of general interest. E.g., “What was your first car?”
Factual – TRUE: The answer is true and can be proven with an external resource.
Q: “I wanted to know if there were any specific shots and vaccinations I should get before coming over [to Doha].”
A: “Yes there are; though it varies depending on which country you come from. In the UK; the doctor has a list of all countries and the vaccinations needed for each.”

Factual – FALSE: The answer gives a factual response, but it is false, partially false, or the responder is unsure about it.
Q: “Can I bring my pitbulls to Qatar?”
A: “Yes, you can bring it but be careful this kind of dog is …”

Non-Factual: The answer does not provide factual information for the question; it can be an opinion or advice that cannot be verified. E.g., “It’s better to buy a new one.”
“We show that fine-tuning a language model on a large unsupervised corpus from the same community forum helps us achieve better accuracy for question classification,” said Tuhin Chakrabarty, lead researcher of the paper. Most community question-answering forums have such unlabeled data, which can be used in the absence of large labeled training data.
For answer classification, the researchers show how to leverage information from previously answered questions in the thread through language model fine-tuning. Their experiments also show that modeling an answer in isolation is not ideal for fact verification; results improve when the context of the question is taken into account.
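A sketch of that design choice with a generic text-pair encoder: the question and answer are encoded together rather than the answer alone. The model name and three-way label set below are placeholders, not the authors' system.

```python
# Illustrative text-pair encoding for answer factuality classification.
# "bert-base-uncased" and the label set are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # Factual-TRUE / Factual-FALSE / Non-Factual
)

question = "Can I bring my pitbulls to Qatar?"
answer = "Yes, you can bring it but be careful ..."

# Encoding the pair gives the model the question as context for the answer.
inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class (untrained head, so random here)
```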
“Determining factuality of answers requires modeling of world knowledge or external evidence – the questions asked are often very noisy and require reformulation,” shared Chakrabarty. “As a future step we would want to incorporate external evidence from the internet in the factual answer classification problem.”