Social media engagement pods: Uncovering the new threat to algorithms
Milcah Kerubo | December 18, 2020 | Algorithms, Artificial Intelligence
[Image credit: Kindpng]
Introduction
Over 3.6 billion people use social media, with user-generated content ranking as the most popular digital activity. User-generated content is any content created, published, or submitted by the users of a brand rather than the brand itself, and it usually takes the form of images, videos, social media posts, reviews, or testimonials. This growth can be linked to the rise of platforms such as Facebook, Instagram, Snapchat, Twitter, YouTube, and others that facilitate interactions between like-minded people. With the rise of user engagement on these platforms, social media companies now rely on algorithms to enhance the user experience. An algorithm is a set of rules that specifies how data is processed. On social media platforms, algorithms rank search results to prioritize the placement of content, advertisements, and accounts in users' feeds.
However, the ranking of search results varies from platform to platform. For instance, Facebook prioritizes local and familial posts and posts from friends over business posts; Instagram ranks content by engagement and popularity; Twitter focuses on when content was posted. This prioritization of search results has had far-reaching consequences. On the positive side, digital marketing companies have been able to understand their customers' needs, which has improved the efficacy of their product placement. On the negative side, which is the focus of this paper, it has created a deceptive and manipulative culture of artificially boosting exposure in follower feeds. Online exposure enables influencers to make money from sponsoring brands, since greater online reach commands higher valuations. As a result, there is unhealthy competition among influencers to monetize their social media accounts, and this monetization can be achieved through manipulative means such as engagement pods.
Engagement pods are online groups that trade likes and comments on social media posts to boost their visibility. These groups are publicly accessible through messaging groups created for the sole purpose of boosting posts, and the exchange is only meaningful to the extent that members engage with other members' posts in return for engagement on their own. This behavior is commonly classified as reciprocity abuse because it circumvents the algorithms by coordinating likes and comments across millions of social media posts. The result is accounts that artificially inflate post popularity to gain more reach in follower feeds: engagement pods use artificial likes and comments to deceive social media algorithms into promoting their pages.
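To make reciprocity abuse concrete, the sketch below is a hypothetical illustration (not any platform's actual detection logic) of what coordinated engagement looks like in data: it builds a set of "who engaged with whose post" edges and measures how often engagement is reciprocated, a share that tends to be unusually high among pod members.

```python
# Hypothetical engagement log of (engaging_user, post_author) pairs.
# In a real system these would come from platform interaction data.
engagements = [
    ("alice", "bob"), ("bob", "alice"),      # mutual pair (pod-like)
    ("alice", "carol"), ("carol", "alice"),  # mutual pair (pod-like)
    ("dave", "alice"),                       # one-way (organic-looking)
]

def reciprocity_share(pairs):
    """Return the fraction of engagement edges that are reciprocated."""
    edges = set(pairs)
    mutual = sum(1 for (a, b) in edges if (b, a) in edges)
    return mutual / len(edges) if edges else 0.0

# Pod members engage with each other in both directions, so their
# reciprocity share is far higher than typical organic engagement.
print(f"Reciprocated engagement share: {reciprocity_share(engagements):.2f}")
```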
This paper aims to explain how engagement pods work and some of the cybersecurity threats, such as spamming and hacking, that emerge from their use. It outlines the difficulty of regulating engagement pods and details how social media companies can use automated tools that collect data from their platforms to detect such illicit activity.
Cybersecurity issues and the difficulty in regulating engagement pods
The need to regulate social media engagement pods emerges from the risks that users are exposed to on these platforms. First, pods can be filled with spam and unsafe content, increasing the likelihood of being hacked. Spamming is the sending of unsolicited messages in bulk to recipients who do not want them. It can drown out the messages that recipients do want and can also be used to impersonate well-known brands with a landing page[1] that mimics a trusted site[2]. Spamming also gives hackers access to potential victims by luring them, via a link, to a server controlled by the attacker. Beyond that, pods stifle meaningful and genuine interactions because they facilitate the solicitation or trade of fake and misleading user reviews and ratings, and the fraud and deception that arise from this must also be regulated.
Despite these risks, regulating this behavior is considered a grey area, mainly because detecting these groups has proven difficult. In most cases, there is a very thin line between real and fake engagement. The two can be differentiated using a range of metrics, including shares or retweets, likes, comments, followers and audience growth, click-throughs, and mentions. Before classifying engagement as real or fake, one can look for inflated sponsorship rates, disproportionate likes and comments, or a mismatched target audience on an account. In many ways, fake engagement mirrors the interaction of friends engaging with each other's content, precisely because people are circumventing the design of the algorithms. Engagement pods are silent security threats to the algorithmic system that can only be detected by carefully observing patterns of engagement with other automated tools.
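The sketch below shows how such metrics might be combined into a simple heuristic. The account fields and thresholds are assumptions for illustration only; real classification would need thresholds calibrated against platform data.

```python
# A minimal heuristic sketch for flagging suspicious engagement, assuming
# hypothetical per-account metrics (followers, likes, comments, growth rate).

def looks_inflated(account):
    signals = []
    followers = max(account["followers"], 1)
    engagement_rate = (account["likes"] + account["comments"]) / followers
    if engagement_rate > 0.30:  # assumed threshold: far above typical organic rates
        signals.append("engagement rate far above typical organic levels")
    if account["comments"] > account["likes"]:
        signals.append("comments outnumber likes (common in comment pods)")
    if account["audience_growth_rate"] < 0.01 and engagement_rate > 0.10:
        signals.append("high engagement without matching audience growth")
    return signals

# Example account with pod-like numbers.
account = {"followers": 5_000, "likes": 1_800, "comments": 2_100,
           "audience_growth_rate": 0.004}
for reason in looks_inflated(account):
    print("flag:", reason)
```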
One way of identifying engagement pods is to use security solutions built on Application Programming Interfaces (APIs). An API is a software intermediary that allows two applications to communicate with each other, for instance when one piece of software needs to access another's data. Detection software can connect a social media platform's graph API to a database and analyze the explicit actions that show how a user grows their account. Platform owners can use this data to detect an engagement pod's activity, for example by identifying the repetitive, generic language that pod members leave on each other's feeds. This ensures that even as algorithmic systems are used, companies make continuous efforts to track the fairness, accountability, and transparency of users' actions. It also helps ensure that companies or individuals carrying out digital marketing do not enter into contractual obligations with fraudulent accounts.
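As a rough illustration of that idea, the sketch below flags accounts whose recent comments are dominated by repetitive, generic phrases. The function fetch_recent_comments is a hypothetical placeholder for whatever graph-API-to-database pipeline a platform actually runs, and the phrase list and threshold are assumptions, not real detection rules.

```python
from collections import Counter

# Assumed list of low-effort phrases commonly traded in comment pods.
GENERIC_PHRASES = {"nice post", "great pic", "love this", "amazing", "so true"}

def fetch_recent_comments(account_id):
    """Placeholder for a graph-API query feeding a detection database.
    A real implementation would call the platform's API and persist results."""
    return ["Nice post!", "Love this", "Great pic", "Amazing", "Nice post!"]

def generic_comment_ratio(account_id):
    """Share of an account's recent comments that match generic pod phrases."""
    comments = [c.lower().strip("!. ") for c in fetch_recent_comments(account_id)]
    counts = Counter(comments)
    generic = sum(n for text, n in counts.items() if text in GENERIC_PHRASES)
    return generic / len(comments) if comments else 0.0

ratio = generic_comment_ratio("example_account")
if ratio > 0.6:  # assumed threshold
    print(f"Repetitive, generic comment language detected ({ratio:.0%})")
```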
Conclusion
Automated tools like algorithms are prone to cyberattacks such as spamming and hacking, which can cause more harm than good. Yet engagement pods are still not treated as a major threat, and little is being done to develop policies and tools that address them. As long as we focus on prioritization practices and ignore engagement pods, cybersecurity issues will continue to be amplified. Social media companies should therefore use their access to platform data to tighten their policies and develop tools that regulate the activity of engagement pods.
[1] The first page a user is directed to after clicking a link.
[2] Websites that are safe and will not damage your computer.