by Fenwick McKelvey and Elizabeth Dubois (with thanks to John Gray at MentionMapp)
We found out about this Reddit thread through a mention on Twitter. We are both interested in the changing role of digital campaigning. You can read about our work, especially on bots, in the Oxford Internet Institute Report on Global Computational Propaganda.
Given the interests of the users in that thread, we thought there might be a public benefit in walking through some of our own methods to investigate social media bots. While this isn’t an exact science, we have learned a few things about what we refer to as social media manipulation, and we hope this post helps to share our knowledge.
Four important caveats:
- We believe trolling and bots are cross-partisan issues. Without more evidence, we do not believe that any particular political side is more inclined to use bots or media manipulation than any other in Canada. We’re looking into this matter because it’s the first time someone from the public has asked us to. (We would also like to acknowledge that we could be getting played here too—giving too much attention to a pretty ineffective political tactic—but research is lonely enough as is without total suspicion).
- Twitter is a marginal social media platform in Canada. Discovering strange campaign practices on Twitter should, we hope, encourage further investigations of more popular platforms, especially Facebook and Instagram. These platforms have been notoriously difficult for scholars to study.
- Research ethics limit us from investigating beyond these public tweets. We have not and cannot interview anyone at this time. We cannot know for sure who is behind this or why. But we still see value in talking through our Twitter analysis.
- We can’t guarantee we will be able to respond to every request in the future, but we hope this helps. Now to the research.
In reviewing the suspected bots and other data, we found a few suspicious accounts promoting the #crookedchristine hashtag. These accounts, we believe, may have been trying to amplify the hashtag. The tactics were not sophisticated, especially (as we discuss below) because of the low uptake of the #crookedchristine campaign. There were only 488 unique tweets containing the hashtag #crookedchristine (pretty minimal). In fact, #crookedchristine only appears five times in the 19,627 tweet sample of #PCPO or #PCPLdr that we collected from March 2 to 12, 2018.
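Counting hashtag uptake like this is straightforward to reproduce. A minimal Python sketch of the tally (the toy tweets below are invented for illustration, not drawn from our sample, and the simple substring check stands in for proper tokenization):

```python
from collections import Counter

def hashtag_counts(tweets, hashtags):
    """Count how many tweets in a sample mention each hashtag (case-insensitive)."""
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for tag in hashtags:
            # naive substring match; good enough for a first pass
            if tag.lower() in lowered:
                counts[tag] += 1
    return counts

# Toy stand-in for the 19,627-tweet #PCPO / #PCPLdr collection.
sample = [
    "Vote #PCPO! #CrookedChristine is trending",
    "Leadership update #PCPLdr",
    "RT @someone: #crookedchristine again",
]
print(hashtag_counts(sample, ["#crookedchristine"]))  # → Counter({'#crookedchristine': 2})
```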
We think these amplifiers could have been:
- Campaign volunteers, activists, average citizens, and/or trolls affiliated or not with a political campaign or party;
- Campaign staff creating and maintaining multiple accounts or what has been called “astroturfing,” a play on fake grassroots support;
- Or bots.
We also want to know who might be coordinating this. These are the options we have considered:
- Everyday citizens engaged in politics (which we normally like in a democracy);
- Astroturfed amplification by a party or candidate’s staff to boost their presence online (a campaign saying “hey, don’t we look great”);
- Hostile negative campaigning by a party or candidate’s staff (a campaign saying, “hey don’t those guys look bad”);
- Kremlin-backed foreign interference (we have no evidence of this and think it is unlikely based on past work).
So, why do we think this?
We can learn a lot through digital methods to study online campaigning. To begin, do we know these accounts are bots? As a first step, we checked the 28 suspected bots using Botometer, a project to detect bots led by the Observatory on Social Media.
The results are pretty inconclusive. The average score for our 28 accounts was 0.42 (ranging from 0.2 to 0.64). This score, the English-specific score to be precise, refers to the likelihood that an account is a bot based on a few metrics, such as sentiment and the friend network. None of the accounts have high scores, so we are not confident that these are bot accounts. Botometer requires lots of data to make its judgments, so the accounts’ newness could be a factor.
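For readers who want to reproduce this step, the summary statistics are simple once per-account scores have been fetched from Botometer's API (key setup and API calls omitted here). The scores below are illustrative, not our actual batch of 28:

```python
def summarize_scores(scores):
    """Mean and range of bot-likelihood scores (0 = likely human, 1 = likely bot)."""
    return {
        "mean": round(sum(scores) / len(scores), 2),
        "min": min(scores),
        "max": max(scores),
    }

# Illustrative English-specific Botometer scores; our real 28 accounts
# averaged 0.42, ranging from 0.20 to 0.64.
print(summarize_scores([0.20, 0.35, 0.48, 0.64]))
```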
Inconclusive results do not mean that we can’t use other methods to find bots. In fact, Botometer performs particularly poorly with new accounts since there is less data available. We also found this list to be a helpful guide to identifying bots. The username and copied account images are often tells that you’re dealing with a fake Twitter account. In the past, we have also used frequency (more than 50 tweets per day) to find suspicious or bot accounts, but these accounts tweeted so little we did not use that metric.
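The frequency heuristic mentioned above reduces to a quick calculation. A minimal sketch (the threshold of 50 tweets per day is the one from our past work; the dates are illustrative):

```python
from datetime import datetime

def tweets_per_day(total_tweets, created_at, now):
    """Average daily tweet rate since the account was created."""
    age_days = max((now - created_at).days, 1)  # avoid dividing by zero for day-old accounts
    return total_tweets / age_days

def is_high_frequency(total_tweets, created_at, now, threshold=50):
    """Heuristic from past work: more than ~50 tweets per day is suspicious."""
    return tweets_per_day(total_tweets, created_at, now) > threshold

# These accounts tweeted far below the threshold, so the metric wasn't useful here.
created = datetime(2018, 3, 3)
now = datetime(2018, 3, 12)
print(is_high_frequency(70, created, now))  # → False
```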
Using these methods, a few accounts look particularly suspicious. We found more suspicious accounts than these four, but we are confident that something strange is going on with the four below.
| Time & Date Joined |
| --- |
| 4:42 pm 3 March 2018 |
| 4:40 pm 3 March 2018 |
| 4:39 pm 3 March 2018 |
| 4:44 pm 3 March 2018 |
Research from our colleague John Gray at MentionMapp further confirms our suspicions. He noticed that these accounts were all registered at the same time. The four profiles were created on the same day, within a few minutes of each other, and three used avatar pictures easily found elsewhere on the Internet, including the now infamous MikeHockey1234. All four accounts have since been deleted.
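The timing check itself is simple. A minimal sketch using the registration times reported above (the account keys are placeholders, since we are not mapping specific handles to specific timestamps here):

```python
from datetime import datetime, timedelta

# Reported creation times for the four suspicious profiles, all 3 March 2018;
# account_a..d are placeholder names, not a claimed handle-to-time mapping.
joined = {
    "account_a": datetime(2018, 3, 3, 16, 39),
    "account_b": datetime(2018, 3, 3, 16, 40),
    "account_c": datetime(2018, 3, 3, 16, 42),
    "account_d": datetime(2018, 3, 3, 16, 44),
}

def created_within(times, window):
    """True if every account in the batch was registered inside one time window."""
    times = list(times)
    return max(times) - min(times) <= window

print(created_within(joined.values(), timedelta(minutes=5)))  # → True
```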
Did these accounts behave strangely? Using our sample, we extracted the number of tweets that mentioned a few hashtags circulating during the leadership campaign.
| | @MikeHockey1234 | @DavidHo9678 | @EmmaPCParty | @Alyssa87801952 |
| --- | --- | --- | --- | --- |
| Tweets about #CrookedChristine | 40 | 49 | 49 | 44 |
| % of tweets about #CrookedChristine | 57.14% | 69.01% | 53.26% | 57.14% |
| Tweets about Christine Elliott | 59 | 62 | 69 | 58 |
| % of tweets about Christine Elliott | 84.28% | 87.32% | 75.00% | 75.32% |
| Tweets about fordnation | 19 | 28 | 26 | 28 |
| % of tweets about fordnation | 27.14% | 39.44% | 28.26% | 36.36% |
| Total tweets | 70 | 71 | 92 | 77 |
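The percentages above reduce to a one-line calculation. A minimal sketch using the #CrookedChristine counts from our sample:

```python
def mention_share(mention_count, total_tweets):
    """Percentage of an account's tweets that mention a given topic."""
    return round(100 * mention_count / total_tweets, 2)

# (tweets about #CrookedChristine, total tweets) per account, from our sample.
accounts = {
    "@MikeHockey1234": (40, 70),
    "@DavidHo9678": (49, 71),
    "@EmmaPCParty": (49, 92),
    "@Alyssa87801952": (44, 77),
}
for name, (mentions, total) in accounts.items():
    # e.g. mention_share(40, 70) → 57.14, matching the table
    print(name, mention_share(mentions, total))
```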
Though unsuccessful, the accounts attempted to amplify certain messages, potentially trying to influence what's trending or popular online. Though we couldn't track who retweeted whom across the full sample, we did notice some evidence that these accounts retweeted each other. While this is little more than a guess, we would be surprised if the activity was guided by foreign interference, given its lack of sophistication. The Kremlin-backed Internet Research Agency seems far more adept. It's much more reminiscent of Pierre Poutine and the robocalling scandal.
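Checking for reciprocal retweeting is one simple coordination signal. A minimal sketch, with invented (retweeter, original author) pairs since real attribution wasn't recoverable from our data:

```python
def mutual_retweets(retweet_edges):
    """Find pairs of accounts that retweeted each other (a possible coordination signal)."""
    edges = set(retweet_edges)
    # keep each reciprocal pair once, in sorted order
    return sorted((a, b) for a, b in edges if (b, a) in edges and a < b)

# Hypothetical edges for illustration; not observed data.
edges = [
    ("account_a", "account_b"),
    ("account_b", "account_a"),
    ("account_c", "account_a"),
]
print(mutual_retweets(edges))  # → [('account_a', 'account_b')]
```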
Regardless of whether these accounts are bots, we suspect they are engaged in what is called media manipulation. According to Data & Society, the term refers to "the spread of false or misleading information" that influences the news agenda or social media analytics. Similar practices have also been called tactical media, where "journalists/artists/activists take seriously Marshall McLuhan's insight that the medium is the message and have turned, instead, to manipulation of information production, processing and delivery seriously," according to communications policy scholar Sandra Braman. Tactical media and media manipulation can be both good and bad; both are a fact of political campaigning today. We are interested in exposing these tactics because they have become important and influential in today's news cycle and because, at their worst, they erode trust in the political system and in public opinion.
Our chief concern is that political operatives in Canada would launch or support a campaign like this, especially as we are beginning to understand the adverse effects of political polarization. This media manipulation could amplify political division in what looks to be a very contentious election. As we have called for in the past, we would ask that all parties and candidates in the Ontario election abide by a code of conduct and report the use of social media bots. At the very least, all candidates in the Ontario election should publicly state their relationship to this campaign.