Tags: disk encryption, human behaviour, risk, security, security risk management
In a recent blog post, Bruce Schneier highlighted how a commercially available, low-cost (around £200) forensics tool can crack passwords for common commercial whole disk encryption products.
As I mentioned in a previous post, use of PGP Desktop to encrypt all laptop disks is compulsory at IBM and is enforced through our end-user computing standards.
The default power management configuration for laptops often just suspends the machine when the lid is closed or the ‘sleep’ button is pressed. Suspend leaves the encryption keys in memory, so unless the user selects ‘hibernate’ the encrypted disks remain effectively unprotected. Standards dictate that the laptop configuration should be changed to hibernate in these circumstances, but how many users actually make the necessary changes?
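For what it’s worth, the change is easy enough to script on a Windows build (where PGP Desktop typically sits). Below is a minimal sketch, assuming an elevated (administrator) prompt and Windows’ standard powercfg aliases – SUB_BUTTONS/LIDACTION, where value 2 means ‘Hibernate’ – wrapped in Python only to make the steps explicit:

```python
import subprocess

# Force 'hibernate on lid close' so encryption keys are flushed from RAM
# whenever the lid shuts. Windows-only, and must be run elevated.
COMMANDS = [
    ["powercfg", "/hibernate", "on"],                # ensure hibernation is enabled at all
    ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
     "SUB_BUTTONS", "LIDACTION", "2"],               # lid close -> hibernate (mains power)
    ["powercfg", "/setdcvalueindex", "SCHEME_CURRENT",
     "SUB_BUTTONS", "LIDACTION", "2"],               # lid close -> hibernate (battery)
    ["powercfg", "/setactive", "SCHEME_CURRENT"],    # re-apply the scheme so the change takes effect
]

for cmd in COMMANDS:
    subprocess.run(cmd, check=True)                  # fail loudly if any step is rejected
```

Run once as part of the standard build image, and the user never has to find the right control panel.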
The comprehensive help documents provided by IBM for configuring the whole disk encryption software step the user through making a ‘rescue disk’ to allow recovery in the event of a lost encryption password. So, how many users take any precautions to protect that?
Going back to the potential attack against whole disk encryption: it relies on the attacker recovering the encryption key from memory dumps or hibernation files captured while the encrypted disk is unlocked. Of course, if the laptop is always left in a safe state (i.e. powered down, or at least fully hibernated) then that attack vector isn’t available. However, how many users leave their laptop unattended and logged in when they believe the environment is ‘safe’? And how many walk away before the hibernation process has completed?
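To make that attack vector concrete: most AES implementations keep the full expanded key schedule in memory alongside the key itself, and the published cold-boot research (the ‘aeskeyfind’ technique) exploits that redundancy to pick keys out of a raw image. Here is a toy Python sketch of the idea – not what any particular commercial tool does – assuming an AES-128 key whose schedule sits contiguous and error-free in the dump; real tools are far faster and tolerate bit decay:

```python
import sys

def _rotl8(x, n):
    """Rotate an 8-bit value left by n bits."""
    return ((x << n) | (x >> (8 - n))) & 0xFF

def _make_sbox():
    """Build the AES S-box from its standard GF(2^8) construction."""
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3 in GF(2^8)
        q = (q ^ (q << 1)) & 0xFF                              # q /= 3 (multiply by 0xF6)
        q = (q ^ (q << 2)) & 0xFF
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        x = q ^ _rotl8(q, 1) ^ _rotl8(q, 2) ^ _rotl8(q, 3) ^ _rotl8(q, 4)
        sbox[p] = x ^ 0x63                                     # affine transform
        if p == 1:
            break
    sbox[0] = 0x63                                             # 0 has no inverse; special case
    return sbox

SBOX = _make_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key):
    """AES-128 key expansion: 16-byte key -> full 176-byte key schedule."""
    w = [list(key[i:i + 4]) for i in range(0, 16, 4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord then SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def find_aes128_keys(image):
    """Yield offsets where 176 bytes of the image form a valid key schedule."""
    for off in range(len(image) - 175):
        window = image[off:off + 16]
        if window.count(window[0]) == 16:          # skip constant runs (e.g. zeroed pages)
            continue
        # Brute but clear: expand every candidate and compare. Real tools
        # prefilter far more aggressively and cope with flipped bits.
        if expand_key(window) == image[off:off + 176]:
            yield off

if __name__ == "__main__":
    dump = open(sys.argv[1], "rb").read()          # e.g. a RAM image or hibernation file
    for off in find_aes128_keys(dump):
        print(f"candidate AES-128 key at {off:#x}: {dump[off:off + 16].hex()}")
```

The point is not the code: it’s that key material has a recognisable structure, so once an unencrypted memory image or hibernation file exists, recovering the key is routine.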
The common thread through all of this is that careless users can inadvertently cancel out any benefit from technical countermeasures. It’s simple enough to describe the exact behaviour that will prevent this; in public-sector security, we call such descriptions Security Operating Procedures, or SyOPs for short.
It’s usual to define the IT security risk management process as starting with risk assessment to select the right security controls, followed by incident management to deal with residual risk, with crisis management and business continuity planning (BCP) invoked to recover from the most severe incidents. I strongly believe that SyOP production and security awareness training for end users must form part of that risk management process, and must be in place before a service goes live, to ensure that the security controls operate as designed and to defend against the sort of attack described here.
As I said in the title, users are the one part of the system that can’t be patched to remove vulnerabilities. It’s vitally important to explain the importance of what we ask them to do and then to reinforce that through adherence to mandatory written instructions, in order to establish the ‘habit’.
Tags: human behaviour, Twitter
Not my normal security-related subject matter, but I had to pull together some highlights (wrong word?) of the appalling events in London over the past few days. The sequence below, taken from Twitter and Flickr and assembled in Storify (http://www.storify.com), shows clearly that the vast majority of people in the UK are sickened by the mindless violence and sheer greed of the criminals who did this. The story also shows (to me at least) that when it comes down to it, the people of the UK, and particularly Londoners, will always rise above attempts to terrorise them and just get on with sorting things out.
Something we can all do to help. Publish the banner on your website or your blog or retweet the post. Let people know, so they can turn out to help with getting things back to normal.
For me, this picture sums up the violence of the whole thing. This morning’s television news showed footage of a 150-year-old family-run furniture store ablaze. Why? What did that achieve?
But, as bad as things get, people act with kindness and show their appreciation to the police.
And then this morning, I can only echo Professor Brian Cox on Twitter (above). It really does restore your faith in human nature.
People turned out in droves, responding to a spontaneous campaign to clean up the devastation left by the rioters.
#riotcleanup pictures on PicFog
Check out this site for more pictures of the clean up operation around London.
Now something else we can all do to help. Look at the pictures from the Met Police. If you know any of these clowns, tell the police. They need to be stopped before someone gets seriously hurt.
Tags: CISO, disclosure, human behaviour, Information security
… and shame the Devil, as I was often told as a child. Sound advice, you’d think, but in the world of IT security such honesty could cost you your job. I was alerted on Twitter by Kai Wittenburg to the story of Pennsylvania’s CISO, Robert Maley. According to the story on Computerworld’s web site, Maley was fired by his employer, apparently after commenting on a security incident during the RSA show. The reason given for his dismissal was that he failed to get the proper approvals before making his comments. The incident in question appears to have been a vulnerability in a scheduling system at the Department of Transport. The Department denies that any hacking or breach was involved in the incident, but details have been handed over to the State Police for investigation. This furore is taking place against a backdrop of cuts of 38% in IT security budgets and 40% in staffing.
Chances are, Maley’s employer does insist on rigid prior approval for this sort of thing. It’s all part of the culture of secrecy around security incidents that’s endemic in large organisations. The immediate effect is to make it more difficult for all of us to get budgets approved for security programmes. Faced with yet another capital expenditure request for an IT security programme, the CEO will say “… but if this threat is real, why don’t I ever read about it in the Press?” Answer: because far too many organisations follow the lead of the Commonwealth of Pennsylvania and deny everything.
And there’s another consequence of not discussing these incidents – we don’t learn from them. In his book “Managing the Human Factor in Information Security”, David Lacey describes how the aviation industry has systematically and ruthlessly pursued safety through a combination of mandatory incident reporting and thorough investigation of “near misses”. Any major incident is the result of a series of cascading failures. If any one element holds up under pressure, then the disaster is averted. However, there are still a whole load of individual failures to be investigated and lessons to be learned. Next time, you might not be so lucky.
As our world becomes ever more dependent upon on-line systems, so the impact of security incidents will become ever greater. Unless we allow – even encourage – IT security professionals to follow Maley’s example and openly discuss these incidents, how can we ever hope to improve?