Online discussion platforms have become an important part of social life. However, they may be abused, for instance, to deceive others and create an illusion of public consensus, to evade bans, or to vandalize content on platforms like Wikipedia, using "sockpuppets": multiple accounts controlled by a single user on a single online discussion platform.
The authors use anonymized data from nine discussion platforms, covering about two million discussions and about 62 million posts, and identify 3656 sockpuppet accounts belonging to 1623 puppetmasters (the owners of such accounts). They study the sockpuppets using linguistic trait analysis, activity and interaction characteristics, and communication network structure features. They classify sockpuppet pairs as pretenders or non-pretenders (depending on the effort made to maintain the appearance of distinct accounts) and as supporters versus dissenters (depending on whether the accounts back each other up or argue with each other).
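To make the pair-classification idea concrete, here is a minimal, purely illustrative sketch. The features (vocabulary overlap, shared discussions, posting-volume ratio) and the threshold rule are hypothetical stand-ins, not the paper's actual trained model or feature set.

```python
# Hypothetical sketch: combining linguistic and activity features to
# decide whether two accounts look like a sockpuppet pair.
# All feature names and thresholds below are illustrative assumptions.

def jaccard(words_a, words_b):
    """Vocabulary overlap between two accounts' word sets."""
    a, b = set(words_a), set(words_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def pair_features(acct1, acct2):
    """Compute simple pairwise features from per-account summaries."""
    return {
        "vocab_overlap": jaccard(acct1["words"], acct2["words"]),
        "shared_discussions": len(set(acct1["threads"]) & set(acct2["threads"])),
        "posts_ratio": min(acct1["n_posts"], acct2["n_posts"])
                       / max(acct1["n_posts"], acct2["n_posts"]),
    }

def is_sockpuppet_pair(f, overlap_min=0.5, shared_min=3):
    # Toy decision rule standing in for a trained classifier.
    return f["vocab_overlap"] >= overlap_min and f["shared_discussions"] >= shared_min

# Two toy accounts that write similarly and post in the same threads.
a = {"words": ["vote", "agree", "obviously"], "threads": [1, 2, 3, 4], "n_posts": 40}
b = {"words": ["vote", "agree", "clearly"], "threads": [2, 3, 4, 9], "n_posts": 25}
f = pair_features(a, b)
print(is_sockpuppet_pair(f))  # prints True: high overlap and shared threads
```

In the paper itself, such signals are far richer and fed into learned models; this sketch only shows the shape of the pairwise-feature approach.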
Using the above dataset, they develop approaches to answer two questions: Is an account a sockpuppet, and do two sockpuppet accounts form a pair controlled by the same puppetmaster? First, some of the results seem directly linked to the objective of the puppetmaster, with one account endorsing the other or otherwise adding to its credibility. Second, linguistic analysis as a means of identifying sockpuppets reveals characteristics that are distinctive yet similar to those of other deceptive writing.
This large-scale examination shows that sockpuppets are a real threat. In the cyberworld, they are harder to expose than their real-life equivalents. In a world of fake news and deepfakes, research that helps to detect cheaters in online discussion fora is welcome. The data in this paper can serve as a wake-up call for those who minimize the risk, as well as for those who want a better understanding of the prevalence of the issue. It may encourage researchers to investigate this topic further.