Journalist Kevin Townsend asked my opinion on a report from De Montfort University (in Leicester, in the UK) offering an analysis of what the report calls the 'splash pages' of various examples of ransomware, and claiming to show that 'whilst there was a wide variation in the construction of ransomware splash screens, there was a good degree of commonality, particularly in terms of the structure and use of key aspects of social engineering used to elicit payment from the victims.' Kevin was dubious about the value of the research, since 'at this stage, the only social engineering they can do is to persuade the victim to pay up.'
And he has a point (in fact, several points): after all, it's a binary decision that doesn't really call for sophisticated social engineering. Do you value your data enough to pay the ransom, or do you decline to provide comfort to the criminal and refuse to pay up? While a range of social engineering techniques can be seen at work in this context, is it possible to assess how successful they are, or even whether they're particularly relevant?
I agree that the practical application of this research, as represented in the ‘key points’ at the end of the paper, does seem limited, certainly in terms of hard-core security. And I agree that there are no obvious metrics for establishing how ‘successful’ social engineering is in this context. If your data is unavailable, maybe all you really want to know is how to get it back. However, it does raise some interesting questions, both directly and implicitly. Why do (some of) the criminals apparently think the social engineering is important in this context?
Of course, social engineering may play a vital part in persuading a victim to open a malicious executable or visit a malicious website that allows ransomware to get a foothold on the victim's system in the first place, but that isn't the issue that the De Montfort report is looking at.
John Leyden, for The Register, also looked at this topic (but didn't ask my opinion). ☺ He summarised the techniques noted by Dr. Hadlington in the report commissioned by SentinelOne, but didn't really explore the issue of how relevant these techniques might be to the success of the malware.
As I see it, the importance of social engineering in notifications lies mostly in these areas:
- Pressuring the victim into taking the action the criminal wants – paying up more or less immediately – rather than exploring other options, especially if there's a risk that grey-hat or white-hat researchers will come up with a way of recovering the data without paying. The criminal doesn't want to give the victim time to find such help on (for instance) a Bleeping Computer forum, a vendor website, or No More Ransom.
- Pressuring the victim into paying for recovery of data that are actually easily recoverable, or not even genuinely compromised.
- Pressuring the victim into paying for recovery of data for which the criminals don’t actually have a recovery mechanism, before some interfering security researcher points out that paying up doesn’t achieve anything.
Ensuring that it’s as easy as possible for the victim to pay up is clearly important to the criminal, but I’m not sure it counts as social engineering as the term is usually employed in security. At any rate, it’s no different in principle to communication processes in legitimate business. But then, you could certainly describe legitimate techniques of advertising pressure and persuasion as social engineering in a broader, more sociological sense.
Perhaps the most interesting issues relate less to the differences in social engineering techniques that the author notes than to the reasons for those differences. Presumably, some criminals have simply thought more carefully about their intended victims and the information those victims need in order to use Bitcoin, for example, while others have expended less effort.
But what should we make of those notifications – I wouldn’t describe them all as splash screens, some of them being little more than a semi-literate text file sitting on the desktop waiting to be discovered – that appeal to the ‘authority’ afforded by impersonating a law enforcement agency, for example? What value does such impersonation add, if the victim is already aware that his or her data are being held to ransom? Is it just that such impersonation has worked successfully in other forms of extortion? (Hadlington does mention ‘imitation’ in his report.)
Is it an attempt to make victims feel that they deserve to be extortion victims? (Crooks do tend to stereotype their victims, and may believe that more of those victims themselves indulge in criminal behaviour than is necessarily the case.) Is it a way of transferring guilt from the perp to the victim? Is it a means of self-justification, like a ransomware gang claiming to be donating ransoms to charity?
And when a notification uses humour, is that really in the hope of being liked, as Hadlington suggests (‘yes, he encrypted my family photographs, but at least he was nice about it…’), or is it actually a way of conveying to the victims that their pain doesn’t matter?
As someone with a background in the social sciences, I find these questions rather interesting, but from an academic point of view, without subjective data to draw on – data that aren’t present in this study – they’re just conjecture, which is no doubt why Hadlington doesn’t explore them. On the other hand, they might give us some ideas on how to help (or at least educate) victims. Teaching remorse to criminals (let alone teaching them not to offend in the first place) might be a tougher job …
Kevin's article can be found here: Researcher Analyzes Psychology of Ransomware Splash Screens.