Whilst there is increasing recognition that we need to tackle the human aspect of cyber security through training our people, we still see some key mistakes being made in the way many organisations approach this.
Mistakes organisations can't afford to be making, given the oft-quoted statistic that
“85% of cyber attacks start with the human user”.
As practitioners, it may help to understand some of these flawed perceptions so we can be prepared to tackle them.
While there are elements of truth in all of them a multifaceted approach is needed, far beyond that which the standard "go to" solution of eLearning can deliver.
Let’s explore these a bit more...
1. It’s just common sense
“It’s just common sense” is one of the most common mistakes being made in the way organisations address the human factor.
In essence, the argument is either that knowing how to act securely is just common sense and therefore expected behaviour, or that changing behaviours itself is common sense, so practitioners should just get on with it.
- The former can stem from a lack of realisation that the mental models held by security practitioners are not the same as those of most corporate staff. What seems like common sense to one might not to another.
- As for the latter: if changing behaviour were easy, we’d all be mega fit, have amazing diets, and have given up smoking and drinking...
Change is difficult!
And human behaviour is the result of the complex interplay between habit, context (social, physical and virtual!), automatic responses and conscious choices. That’s not a counsel of despair, but rather a call for recognition that behaviour change requires a more sophisticated approach.
2. The issue is one of knowledge and understanding
The second mistake is assuming that “The issue is one of knowledge and understanding”; the argument being that once we’ve told people what to do, the problem is solved!
There’s no denying it’s a starting point: people may need education on threats and how to deal with them. But how many organisations run annual mandatory awareness training and still have incidents?
Just because we impart awareness does not guarantee understanding. And even with full knowledge and understanding, we cannot guarantee behaviour change. Insecure decisions could be deliberate but non-malicious, made to get the job done. Or they could be accidental, knowledge and understanding forgotten in the heat of the moment.
Three key challenges here then are:
- increasing motivation - approaches can include making it personal, using reciprocity and social proof. But that might not be enough.
- making it easy - offering simple actionable advice, not deep technical guidance.
- in-context support - helping people at the point of risk, with guidance as people go about their work, in the heat of the moment, supporting secure decision making.
3. It’s just about getting the message across more effectively
The third mistake is the view that “It’s just about getting the message across more effectively”.
This one risks suggesting that we aren’t speaking slowly or loudly enough!
There have been some great efforts to deliver awareness messages in a more engaging way - videos, games, cartoons, puppets; and people do, and will, come away saying they were great and they learned a lot. And they likely did…briefly.
Awareness requires a many-layered approach, so such events are an excellent addition to the armoury.
But engagement is only half the battle.
The issue with the messaging is more fundamental; the messages are out of context and often one-off.
No matter how effective the delivery of a workshop, escape room or hard-hitting video is, 3 or 6 months later (or even back at the desk that afternoon) will they impact behaviour?
To change behaviour, delivery has to be more timely, more in context, and ongoing throughout the year.
4. People act rationally
It seems logical to hope that people act rationally, that they do what they know to be sensible and logical after critical and rational appraisal of the evidence. So then isn't the job of security practitioners to provide evidence and guidance for secure behaviours?
This thinking aligns with economic utility theory (the idea that we all seek to maximise gains and minimise losses), and it is responsible for the usual focus on pains, losses and FUD to encourage secure behaviours.
The problem is that knowledge, and its rational assessment alone, do not drive behaviour. Most insecure behaviours are embedded in context and this, rather than information, is the stronger driver of behaviour.
Whilst some of our behaviours are rational and calculating, some are governed by our automatic system which responds to environmental and social cues in a way that requires very little conscious engagement.
This is why the bad guys tap into fear, desire, urgency, authority, need to belong and social proof to trick us into clicking links. And it’s also why we should use these same psychological tools in the fight to protect our people.
And finally…
5. People act irrationally!
Yes, I know! I have just said “assuming people act rationally” is a mistake!
But assuming people are always acting irrationally when they act insecurely isn’t right either.
Dig deeper into context, and their reasoning may be completely rational. “We had to get this to the client, the only way I could do it was...”.
Sometimes business systems, processes and priorities aren’t well aligned with security, and we put our people in positions where they feel they “have to” work around security measures, and in doing so they act quite rationally.
Data is key to helping here: spotting risky behaviours, highlighting secure alternatives, and following up where behaviours persist to understand the context.
Conclusion
So, there you have it! 5 common mistakes being made in the way organisations address the human factor in Cyber Security, and some ideas on how you might combat them, when, not if, you encounter them.
The simple truth is that this is a human behaviour challenge and human behaviour is not a simple thing to understand or change!
If you’d like to know more about how Redflags™ addresses these challenges, visit thinkcyber.co.uk or get in touch.