New malware, hackers and data breaches emerge every day, and most successful attacks owe their success to basic human error. Despite this, people continue to ignore security warnings – something researchers suggest comes down to how our brains work.
From opening phishing emails and using weak passwords to running outdated software, people have long been compromising their own – or their employers’ – security. Most of this is accidental, of course – a laptop lost on a train, or a click on a link from a ‘trusted’ contact – although malicious insiders also exist.
All of this has led to a recognition that people are ’90 percent of the problem’, as Verizon put it in its DBIR report released this week. Cyber-criminals are all too willing to exploit this flaw, with some reports claiming that 95 percent of information security incidents are down to human error.
There have been numerous attempts to rectify this: businesses have simplified their security awareness training schemes, while governments have introduced various campaigns and online courses. Media headlines have made cybersecurity a topic of conversation, while technology companies have tried to make security easier and cheaper.
But with the same old security failings happening time after time, questions have been asked as to whether this is simply down to the human psyche – a state of mind.
Earlier this year, a study from Utah’s Brigham Young University, in collaboration with the University of Pittsburgh and Google, used MRI scans to show that the visual processing part of the brain stops analysing security warnings after just the first viewing. Many of these warnings were also flawed, the researchers found, because they merely flagged the problem without explaining the potential repercussions if the warning went ignored.
“Users’ habituation to security warnings is pervasive, and is often attributed to users’ carelessness and inattention. However, we demonstrate that habituation is largely obligatory as a result of how the brain processes familiar visual stimuli,” said the researchers.
Instead, they said, warnings about risks need to be concise and to offer end users meaningful choices about how to proceed.
Technology companies have also got in on the act; Google conducted its own research in late 2013 and, after finding that some 25 million Chrome warnings were ignored 70.2 percent of the time, it began to remove technical terms and simplify text. Most users, for example, didn’t understand what an SSL certificate was or what it was for.
Andreas Gal, chief technology officer at Mozilla, the company behind the Firefox browser, told The Guardian: “Even though we prefer that the user decides things, in some cases, it simply doesn’t make sense. It’s simply impossible to explain something as complex as cryptography to many users,” he said. “You start making specific recommendations or judgements for the user.”
Developers also added illustrations to suggest danger, and began using background colours to suggest different levels of threats.
Users are particularly fatigued by SSL warnings; the same researchers from Brigham Young University are due to present their work at the Association for Computing Machinery's CHI 2015 conference in Seoul, Korea, later this month – where they will reveal that users clicked through half of all SSL warnings in less than two seconds.
This research isn’t an outlier – in its "Crying Wolf: An Empirical Study of SSL Warning Effectiveness” report back in 2009, researchers from Carnegie Mellon University warned that certificate warnings were ineffectual, with the majority of the 409 respondents saying they would ignore warnings about SSL – something that could be exploited in a man-in-the-middle attack.
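Clicking through a certificate warning is, in effect, the same decision a developer makes when disabling certificate verification in code: the connection is still encrypted, but there is no longer any proof of who is on the other end. A minimal, illustrative sketch using Python's standard library shows the difference between the two settings:

```python
import ssl

# Default context: the certificate chain is validated against trusted
# CAs and the hostname is checked -- a man-in-the-middle attacker
# cannot impersonate the server without triggering an error.
safe_ctx = ssl.create_default_context()

# The in-code equivalent of "ignoring the warning": no hostname check,
# no chain validation. Any attacker on the network path could now
# present their own certificate and read or alter the traffic.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False
unsafe_ctx.verify_mode = ssl.CERT_NONE
```

The asymmetry is the point: a single click (or two lines of code) silently discards the guarantee the warning exists to protect.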
Despite these ignored warnings, the good news is that there is a response: developers are simplifying text and design, enterprises are ‘gamifying’ security education, and technology firms are forging ahead with the likes of password managers, two-factor authentication and biometrics – all of which make security easier and more accessible.
Professor Angela Sasse, director of the research institute for the science of cybersecurity at University College London, said last year that security has placed too much of a burden on end users.
“People are willing and able to spend a certain amount of time on a non-productive task, like security, but they have a built-in meter [for tolerance],” she said at an academic conference.
Lance Cottrell, chief scientist at Ntrepid and the founder of Anonymizer.com, which predates The Onion Router (Tor) network, also said in a statement: “Software will always have flaws and humans will always click on things they should not. Security requires that we start designing our systems and networks to be robust even in the face of this kind of inevitable vulnerability.”