Trolls can undermine the trust and integrity of an online community and prevent many from voicing their opinions. In the search for a balance between free speech and safety, people are looking to tech companies for solutions. While AI seems like the ideal quick fix, human moderation still appears to be the most effective way of combatting online harassment.
Recently, we published an article about how anonymous brainstorming sessions can help spark innovation in a business. We argued that the main reason employees should mask their identities when proposing creative ideas is that they are uninhibited by biases (both their own and those of others) so they’re more open to expressing and receiving new ideas.
Of course, the immediate question clients ask after learning that our platform is anonymous is “do you control for trolls?”
Yes. Yes we do.
What are Internet Trolls?
For all its upsides (fostering community, supporting conversation, dispersing knowledge), we know the internet can be an unfriendly place. Snug under the cloak of anonymity, people often experience what psychologists call the “online disinhibition effect”. This is the feeling of being unburdened by the restraints of “acceptable” social behaviour, normally kept in check by the fear of risking one’s reputation… and it can be both a good and a bad thing. The online disinhibition effect can encourage expression in those who are shy to voice their thoughts, often resulting in more innovative ideas. The flip side, however, is that it can also invoke an impulse to poke the bear.
In internet slang, a person who exhibits this kind of provocative behaviour is called a “troll”. Trolls disrupt productive discussions, dole out bad advice and encourage arguments by posting inflammatory, untrue or off-topic comments with the aim of getting a rise out of people.
Whether the word is rooted in a fishing technique (trolling for fish is the act of slowly dragging a lure or baited hook from a moving boat) or Scandinavian folklore (where trolls are petulant, dim-witted creatures that make life difficult for travellers), or both is uncertain. What’s undeniable is that internet trolls have a massive platform and can cause major destruction.
Who is Affected by Trolls?
According to a Pew Research Center survey published last year, 41% of Americans have been personally subjected to online harassment, from name-calling to stalking to full-on privacy violations and threats of violence. 66% of the population have witnessed these behaviours directed toward others. The result? Self-censorship. Approximately one-quarter of Americans say they’ve decided not to post something online after observing the harassment of others. More than one in ten say they stopped using an online service completely after witnessing intimidation.
Tech critics interpret these results as evidence that not everyone’s voices are equal online. In fact, many of those who are harassed online say that they were targeted due to their gender, race, ethnicity or sexual orientation. This implies that those who are marginalized are “being systematically dissuaded from participating”. But where some people’s freedom of speech is being obstructed through fear of those who abuse the right, others believe outright censorship of offensive comments is unconstitutional.
Preventing Online Harassment
Still, the overwhelming consensus (according to 79% of Americans) is that it is the duty of online platforms to step in when they see harassment on their sites. In an article for the New York Times, author Brian X. Chen claims that “you as an internet user have little power over content you find offensive or harmful online. It’s the tech companies that hold the cards”.
So how do tech companies combat online harassment? Well, 35% of the population believe that better policies and tools could help online platforms address trolls more effectively. Already, some heavy-hitting internet-based businesses are attempting to curb the presence of trolls on their sites. Last year, Twitter enacted new policies to retain more user data in order to prevent known harassers from deleting their accounts and returning under a new username. Additionally, it modified its algorithms to flag potentially abusive or unsavoury tweets. Google is also climbing aboard the anti-harassment train with its artificial intelligence tool, Perspective, which scans online content and evaluates its “toxicity” based on ratings by thousands of people.
Yet AI tools like Perspective are not entirely reliable methods of combatting trolls. One study conducted by the University of Washington, Seattle, demonstrated that with a few typos, trolls can circumvent the software. The authors typed offensive sentences into Perspective’s demonstration model, such as “if they voted for Hilary they are idiots” (which came back as 90% toxic) and “anyone who voted for Trump is a moron” (80% toxic). By merely adding some typos, the authors were able to dramatically alter the toxicity score: “if they voted for Hilary they are id.iots” (which scored 12%) and “anyone who voted for Trump is a mo.ron” (13%).
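To see why the typo trick works, consider a deliberately simplified sketch (this is not how Perspective itself is built; it is a toy keyword-based filter we made up for illustration). Any system that relies on recognising known word forms can be defeated by a single inserted character, because “mo.ron” no longer matches anything on the blocklist:

```python
import string

# Toy blocklist for demonstration only (a real classifier is far more
# sophisticated, but the evasion principle is the same).
TOXIC_WORDS = {"idiot", "idiots", "moron", "morons"}

def is_toxic(comment: str) -> bool:
    """Flag a comment if any token matches the blocklist exactly."""
    # Lowercase, split on whitespace, strip surrounding punctuation so
    # "idiots," still matches "idiots".
    tokens = [t.strip(string.punctuation) for t in comment.lower().split()]
    return any(t in TOXIC_WORDS for t in tokens)

print(is_toxic("anyone who voted that way is a moron"))   # True
print(is_toxic("anyone who voted that way is a mo.ron"))  # False: the internal
# dot survives the strip, so "mo.ron" never matches the blocklist entry
```

The same blind spot appears, in subtler form, in learned models: a perturbation that is trivial for a human reader to decode can push the input outside anything the system was trained to recognise.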
The Troll-Free Solution
It would be unfeasible to expect huge websites to manually sift through messages, weeding out offensive comments. Still, human intervention looks to be the most effective way of preventing online harassment. That’s why at Innodirect, we employ individual moderators who validate each message before it’s published on our platform, ensuring contributions are constructive and productive. After all, the purpose of our anonymity policy is to provide our users with a safe space that’s conducive to creativity, collaboration and co-creation. And this is how we’re able to grant our users what other platforms are struggling to offer: “troll-free” anonymity.
Keen to see what an anonymous, troll-free platform can do for your business discussions? Contact us for more information.
By Kirsten Sokolovski and the Innodirect Team