In cyber security, we come up with exquisite solutions to incredibly hard problems, manage risks that would make most people’s toes curl and get computers to do things nobody thought was possible (or in some cases desirable). But we also continue to place ridiculous demands on users (deep breath: must not mention clicking links in emails or password policies), implicitly expect arbitrarily complex implementations of technology to be perfect and vulnerability-free in the long term, and then berate those who build the stuff we use when they fail to properly defend themselves from everything a hostile state can throw at them.
I’ve always found this cognitive dissonance interesting, but I haven’t found a way to reliably and easily work out when we’re asking for daft things. The nearest I’ve got is to try to determine where the security burden lies and whether it’s reasonable to ask that party to shoulder it.
Is it reasonable to ask a ten-person software company to defend themselves from the Russians?
Nope.
Is it reasonable to ask a critical infrastructure company to not have their management systems connected to the internet with a password of ‘Super5ecre7P@ssw0rd!’?
Bloody right it is!
The trick to making cyber security scalable in the long term is to get the various security burdens in the right place and properly incentivise the entities that need to manage them. Sometimes the obvious person to manage a burden won't be the optimal one, so we may want to shift things around so they scale better. And let's be honest, we do suffer a lot from groupthink in cyber security. So, here's my plea. The cyber security community should:
- talk to people who aren’t like us and actually listen to them
- stop blaming people who don’t have our l33t skillz when something goes wrong
- build stuff that works for most people, most of the time (rather than the easy or shiny thing)
- put ourselves in the shoes of our users and ask if we’re really being sensible in our expectations
We haven’t got that right yet…