Language for a Civil Forum

Continuing the discussion from Examining "Signature in the Cell":

And finally, of some relevance to this blog:

An algorithm for detecting when online conversations are likely to get ugly

A team of researchers at Cornell University working with the Wikimedia Foundation has come up with a digital framework for detecting when an online discussion is likely to get ugly. In a paper uploaded to the arXiv preprint server, the team describes their approach and how well their algorithm worked during testing…

To solve this problem, the researchers examined over 1,200 online conversations from Wikipedia Talk pages, looking for linguistic cues. In this context, cues were words that signaled demeanor and level of politeness. They found that when people used cues such as “please” and “thanks,” there was less chance of things getting ugly. Positive phrases such as “I think” or “I believe,” which suggest an attempt to keep things civil, also tended to keep things on an even keel. On the other hand, they found less helpful cues: conversations that started with direct questions or with the word “you” tended to degrade in civility at some point and, the researchers suggest, such openings often strike a reader as hostile and contentious.
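
Just to make the idea concrete, a toy version of that cue-spotting might look like the sketch below. The cue lists and the scoring rule here are my own guesses for illustration, not the researchers' actual features (their politeness-strategy lexicons are much richer than this):

```python
import re

# Illustrative cue list; purely an assumption for this sketch, not the
# study's actual lexicon of politeness strategies.
POSITIVE_CUES = ("please", "thanks", "thank you", "i think", "i believe")

def cue_score(opening_comment: str) -> int:
    """Crude civility score for the first comment of a conversation:
    +1 per positive cue, -1 for each of the risky openers the article
    mentions (starting with "you", or leading with a direct question)."""
    text = opening_comment.lower().strip()
    score = sum(text.count(cue) for cue in POSITIVE_CUES)
    if re.match(r"you\b", text):                 # opens with "you"
        score -= 1
    first_sentence = re.split(r"[.!]", text, maxsplit=1)[0]
    if first_sentence.rstrip().endswith("?"):    # opens with a direct question
        score -= 1
    return score

# Example usage on made-up comments
print(cue_score("Thanks! I think the citation needs a fix; please take a look."))  # 3
print(cue_score("You reverted my edit without any explanation?"))                  # -2
```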

Hmm. Food for thought. I think we could all profit by taking these findings to heart (myself included).


The arXiv paper is here: https://arxiv.org/abs/1805.05345

Conversations Gone Awry: Detecting Early Signs of Conversational Failure

Justine Zhang, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil, Lucas Dixon, Yiqing Hua, Nithum Thain, Dario Taraborelli

One of the main challenges online social systems face is the prevalence of antisocial behavior, such as harassment and personal attacks. In this work, we introduce the task of predicting from the very start of a conversation whether it will get out of hand. As opposed to detecting undesirable behavior after the fact, this task aims to enable early, actionable prediction at a time when the conversation might still be salvaged.
To this end, we develop a framework for capturing pragmatic devices—such as politeness strategies and rhetorical prompts—used to start a conversation, and analyze their relation to its future trajectory. Applying this framework in a controlled setting, we demonstrate the feasibility of detecting early warning signs of antisocial behavior in online discussions.
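
Reading the abstract, the overall shape seems to be: turn the pragmatic devices in a conversation's opening into features, then train a classifier to predict derailment. Here is a minimal sketch of that shape, with invented toy data, and with a generic bag-of-words model standing in for the paper's handcrafted politeness-strategy and rhetorical-prompt features:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data, invented for illustration: the opening comment of a
# conversation, labeled 1 if it later derailed into a personal attack.
openings = [
    "Thanks for catching that. I think the second source is stronger.",
    "Please see the talk-page archive; I believe this was settled.",
    "You clearly didn't read the policy before reverting.",
    "Why would you remove sourced content?",
]
derailed = [0, 0, 1, 1]

# Unigram/bigram counts are a crude stand-in for the paper's pragmatic
# features; the classifier choice here is also just an assumption.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(openings, derailed)

# Predictions on such tiny toy data are illustrative only.
print(model.predict(["I think you make a fair point, thanks."]))
print(model.predict(["You have no idea what you're talking about?"]))
```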

Hmm, I tend to do that a bit to figure out what people personally think. Does that really cause problems?