In a recent article with the headline “Gone in 24 hours,” we wrote about how and why a single day is more than enough time for cybercriminals to set up and pull off an attack, even if it involves buying a fake online domain, setting up a bogus website at that domain, and spamming out thousands or even millions of users to lure them in.
In a typical phishing attack, the goal is to trick you into thinking you are logging into a legitimate site so that you enter your username, password, and perhaps your multi-factor authentication (MFA) code into a look-alike page.
For this purpose, the criminals generally set up their own server, so they don’t need accounts on anyone else’s service in order to interact with you and steal your data.
Their ultimate goal is to get into and take over an existing account of yours, perhaps so they can sell on access to your computer or your company’s network to other criminals, for example to install data-stealing malware or to pull off a ransomware attack.
As you can imagine, passwords for stolen social media accounts aren’t as valuable to cybercriminals as access codes for remote access to corporate networks, or to banking apps and cryptocurrency wallets.
But almost all purloined accounts have resale value in the cyber-underworld of so-called Initial Access Brokers, or IABs.
Notably, access to your social media or instant messaging accounts can give scammers a direct line into your own closed groups of friends and family, for example to promote investment scams, conduct cryptocurrency fraud, or trick them into installing rogue software.
A direct message that actually comes from your account, even if you didn’t send it yourself, is much more likely to be believed and acted upon by someone who knows and trusts you than a spam email message from someone they’ve never heard of before.
And some cybercriminals go after established social media accounts simply because they have sufficient account history to evade informal detection as obvious sockpuppets, catphish, stalkers, or other purveyors of fake news and scams.
Established accounts not only have a creation date more than a few days old, but generally also have a genuine-looking profile picture, a realistic posting history, a reasonable number of followers, and so on.
Sockpuppet, if you aren’t familiar with the term, is a metaphor (think of how the Muppets work) for an account that is deliberately operated under a fake name, typically to make a scammer’s posts look more believable and more interesting (or controversial, or important) than they really are. Fake reviews of low-quality or outright fraudulent products are an obvious example.

And a catphish is a very particular form of bogus account – one that passes itself off as belonging to a likable and trustworthy individual, for example by lying about gender, age, appearance, education, location and more. Typically, catphish accounts are used to lure victims into one-to-one online relationships that end in financial fraud, online abuse, physical stalking, or worse.
But some cyber-scammers and criminals are more interested in brand new accounts in huge numbers, notably on social networks, that they can operate and control directly, right from the outset.
Many of these accounts may never reach a sufficient level of longevity and believability to make credible sockpuppets or catphish, but:
This, of course, raises the question, “But what’s the chance of any cybercriminal gang accumulating a million fake accounts this month just to land up with 1000 accounts left at the end?”
Answering that question is surprisingly tricky, not least because different people, and different services, define fake accounts and bogus users in different ways.
Some services are happy with pseudonyms, or even with deliberately fake names created for humor or parody, as long as you provide a working email address; some may consider you “identified” as long as you supply an active mobile phone number; others may insist on real names and a stronger verification of identity and location up front.
But the scale of fake account creation is certainly implied, if not proved, by this data from statistics-gathering outfit Statista.
We already published the company’s LinkedIn fake account data in the “Gone in 24 hours” article we mentioned above:
Perhaps the most intriguing thing about this graph is the legend, describing the black squares in an upbeat way as accounts that were removed “proactively,” even though they were created, used by their creators for an unspecified time, and then removed reactively, albeit before any human complained.
And, as we mentioned before:
Figures for LinkedIn’s true growth, in other words the increase in non-fraudulent user accounts whether active or not, suggest that about 60 million to 70 million new accounts are created every year.
There would therefore seem to be roughly twice that many attempts to create fake accounts, of which close to 40 million succeed at least for a time. (And those, of course, are just the fake accounts that were spotted and removed.)
For Facebook, the number of fake accounts recorded in Statista’s data was significantly higher than on LinkedIn, and these numbers explicitly include personal accounts used for business or “non-human” purposes:
Again, these figures don’t tell us how many fake accounts went unreported or undetected, or how long those known-fake accounts lasted before being taken down.
Lastly, here is Statista data for TikTok, but this time we have chosen numbers that illuminate a different angle on the problem, showing the number of fake endorsements that were taken down, regardless of whether the accounts behind those endorsements were removed or allowed to continue operating:
Those 2023 fake interaction figures, representing online endorsements that happened but were later deemed dishonest and removed, add up to nearly 5 billion.
There are approximately 10π million seconds per year (that’s a handy shortcut to remember, accurate to within 0.5%), so the rate of TikTok “endorsement chicanery” works out at more than 150 bogus likes and follows every second, once again excluding the ones that weren’t noticed, that no one reported, or that were reported but ultimately not removed.
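If you’d like to check that arithmetic for yourself, here’s a minimal Python sketch. It uses only the figures quoted above (the 5-billion number is simply the rounded 2023 total of removed fake interactions):

```python
import math

# The "10π million seconds per year" shortcut versus a Julian year.
SECONDS_PER_YEAR_EXACT = 365.25 * 24 * 60 * 60        # 31,557,600 seconds
SECONDS_PER_YEAR_APPROX = 10 * math.pi * 1_000_000    # ≈ 31,415,927 seconds

# How good is the shortcut? (Answer: within 0.5%, as claimed.)
error = abs(SECONDS_PER_YEAR_APPROX - SECONDS_PER_YEAR_EXACT) / SECONDS_PER_YEAR_EXACT
print(f"Shortcut error: {error:.2%}")                 # prints: Shortcut error: 0.45%

# "Nearly 5 billion" removed fake interactions in 2023, per the figures above.
fake_interactions = 5_000_000_000
rate = fake_interactions / SECONDS_PER_YEAR_APPROX
print(f"Bogus likes/follows per second: {rate:.0f}")  # prints: ... 159
```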
Clearly, automated tools alone are woefully inadequate at preventing fraudulent social media behavior, though they can certainly help in mopping up the damage after the fact.
According to the LinkedIn figures above, 88.8 million fake accounts were stopped during registration in 2023, but 32.2 million were only detected after activation, meaning that Microsoft’s automated tools let through more than one in every four unwanted account registrations.
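That ratio is easy to verify with a quick calculation from the two figures just quoted:

```python
# Sanity-check the "more than one in four" claim, using the 2023 LinkedIn
# figures cited above (both in millions of fake accounts).
blocked_at_registration = 88.8    # stopped during sign-up
removed_after_activation = 32.2   # only caught after the accounts went live

total_attempts = blocked_at_registration + removed_after_activation   # 121.0
missed_fraction = removed_after_activation / total_attempts
print(f"Got past registration checks: {missed_fraction:.1%}")  # prints: 26.6%
```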
The value of fake accounts, both as original promoters of fraudulent schemes and as sockpuppets used to provide an aura of legitimacy for those frauds, is clear from the sort of online scams that you may already have encountered yourself (whether you were sucked into them or not), or have read about in scam warnings.
Examples include:
With thousands, millions, even billions of fake accounts vying to defraud you on social networks, automated tools and technological solutions simply aren’t enough, though they can help significantly.
So, here are four simple tips:
Why not ask how SolCyber can help you do cybersecurity in the most human-friendly way? Don’t get stuck behind an ever-expanding convoy of security tools that leave you at the whim of policies and procedures that are dictated by the tools, even though they don’t suit your IT team, your colleagues, or your customers!
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image of masked person by Anna Deli via Unsplash.