June 2012 will be remembered as ‘mensis horribilis’ by the UK’s NatWest bank.

A computer failure has caused millions of customers to lose access to their accounts for a number of days. The IT-based incident is on such a huge scale that, a week after it began, the bank still doesn’t know how many of its 17.5m customers are affected. And it appears, at the time of writing, that it was caused by one inexperienced technologist who made an error.

News reports tell the human interest stories, of course, including people whose house purchases and moves are on hold, and single mothers who’ve had to work out how to get money out to feed their children.

The message on the holding page for NatWest customers reads as below, but what’s the history of this bank, what caused the problem, and what lessons should we take away from the story so far?

NatWest was formed in 1968 when the National Provincial and Westminster banks merged: its constituent banks have been in existence in one form or another since 1655. It created the Access credit card in 1972. In March 2000, NatWest was bought by the Royal Bank of Scotland (RBS) and is now part of one of the largest banking groups in the world.

So, with that kind of credibility, how did this business disaster happen?

It appears the incident emerged from an error during an upgrade of CA-7, the batch scheduling software that banks use to manage transaction processing. It’s alleged that a schedule of transactions was not put on hold while the upgrade took place but was instead accidentally cancelled during the process and, because the software change was made to both the primary and backup systems, there was no easy way to restore the schedule.

So even though money had been paid into accounts and bills had been paid, the transactions didn’t visibly show up in customers’ accounts. In turn, this meant customers couldn’t make payments because it didn’t appear to them that they had enough money in their account.

Customers were advised they could go to a branch for emergency cash: branches extended their opening hours and, for the first time ever, opened on Sunday. Any fees incurred as a result of the incident will be refunded, and the bank has also promised to work with credit reference agencies to ensure credit records are not blighted.

As business continuity planners, we’ve been reflecting on the key lessons this incident has reminded us of so far:

  • One person can do a lot of damage
  • During a system change, the ability to restore information to its pre-change state should be established before the change is made
  • The risk of applying changes to primary and backup systems at the same time should always be assessed (a rough sketch of both points follows below)
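
To make those last two points concrete, here’s a rough sketch of what a safer change process can look like. It’s illustrative Python only: none of the file names or functions below come from NatWest or from CA-7 itself, and a real mainframe change would look very different. The idea is simply to take a restorable snapshot before the change, upgrade the primary system first, and only touch the backup once the primary has been verified.

# A minimal, hypothetical sketch of the two lessons above. All names here are
# our own invention for illustration; they are not NatWest's or CA-7's tooling.

import shutil
from datetime import datetime, timezone
from pathlib import Path


def snapshot_schedule(schedule_file: Path, snapshot_dir: Path) -> Path:
    """Copy the current schedule aside so a failed change can be rolled back."""
    snapshot_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    snapshot = snapshot_dir / f"{schedule_file.stem}.{stamp}{schedule_file.suffix}"
    shutil.copy2(schedule_file, snapshot)
    return snapshot


def upgrade_and_verify(system_name: str) -> bool:
    """Stand-in for the real upgrade step plus post-change checks."""
    print(f"Upgrading {system_name} and verifying the schedule still runs...")
    return True


def staged_upgrade(schedule_file: Path, snapshot_dir: Path) -> None:
    snapshot = snapshot_schedule(schedule_file, snapshot_dir)

    # Change the primary only; the backup stays on the old version so there is
    # always a working system to fall back to.
    if not upgrade_and_verify("primary"):
        shutil.copy2(snapshot, schedule_file)  # restore the pre-change state
        raise RuntimeError("primary upgrade failed; schedule restored from snapshot")

    # Only once the primary is proven healthy is the backup touched.
    if not upgrade_and_verify("backup"):
        raise RuntimeError("backup upgrade failed; primary is still serving")


if __name__ == "__main__":
    schedule = Path("schedule.txt")
    schedule.write_text("example batch schedule\n")  # dummy data for the demo
    staged_upgrade(schedule, Path("pre_change_snapshots"))

The point of the pattern is simple: if the change to the primary fails, there is both a snapshot to restore and an untouched backup to fall back to, so the failure never becomes unrecoverable.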
Interestingly, nobody seems to be talking about the way NatWest is handling its PR response so, going forward, we’ll aim to summarise what they did because, given the scale of this crisis, they’ve obviously done a very good job of handling the media and customers.

 
