I wasn’t surprised to read an article on the BBC news website explaining that UK banks (and, more importantly, their customers) are likely to experience more “software glitches” in 2013. The likelihood of problems increases as systems age, experienced technicians leave voluntarily or through redundancy, and corners are cut to meet project deadlines.
I was surprised, however, to read the term “technical debt” in the mainstream press. Normally the term is confined to the “geek world”. Technical debt builds up in computer systems for a variety of reasons, as the article outlines.
- Increased complexity of systems
This is exacerbated by mergers and demergers, which force IT systems that were never designed to co-exist to be “made compatible” with each other. Rather than rewriting code, or migrating to a new shared platform designed specifically for the purpose, a sticking-plaster approach is taken. This works in the short term but increases system complexity (and risk) in the long term.
- Under-investment and a lack of modernisation
Rather than invest in upgrades, an “if it isn’t broke, don’t fix it” mentality is allowed to predominate. This is sensible in the short term but stores up problems for later.
- Outsourcing and off-shoring
It’s harder to modify somebody else’s code than your own. On top of that, once outsourcing or off-shoring becomes entrenched in your organisation, your in-house teams may lack the skills to do this work, leaving you more reliant on the external contractor than is desirable.
Technical debt, like any other debt, has to be repaid in the end, and unfortunately UK banks are finding this out the hard way. The well-documented recent problems with Faster Payments, NatWest batch jobs and Knight Capital’s trading errors are likely just the tip of the iceberg; a significant number of “near misses” will go unreported.
Testing, why bother?
The article goes on to mention testing and says that “Modern computer systems are so complicated you would need to perform more tests than there are stars in the sky to be 100% sure there were no problems in the system”.
This may be true, but testing still matters: good testing can (and does) flush out problems that would otherwise go on to cripple banking systems. This leads me on to another major risk factor for banks and other sectors.
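The “stars in the sky” claim is easy to sanity-check with back-of-envelope arithmetic: the number of possible inputs grows exponentially with the number of independent settings a system has. A quick sketch (the field counts and the star estimate below are my own illustrative assumptions, not figures from the article):

```python
# Back-of-envelope: why exhaustive testing is infeasible.
# All figures here are illustrative assumptions, not from the article.

def input_combinations(options_per_field: int, fields: int) -> int:
    """Distinct test inputs for `fields` independent settings,
    each of which can take `options_per_field` values."""
    return options_per_field ** fields

# A fairly modest data-entry screen: 30 independent fields,
# 10 valid values each.
combos = input_combinations(10, 30)   # 10**30 possible inputs
stars = 10 ** 24                      # rough ballpark for stars
                                      # in the observable universe

print(f"test inputs: {combos:.1e}, stars: {stars:.1e}")
print("exhaustive testing feasible?", combos <= stars)
```

Even at one test per microsecond, running all 10^30 cases would take longer than the age of the universe, which is exactly why testing has to be about risk and prioritisation rather than exhaustiveness.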
Who is doing your testing?
School children aren’t allowed to mark their own homework, so why let your IT project teams mark theirs? Testing best practice says that the test team should be separate from the development team. This helps prevent problems being brushed under the carpet by a development team desperate to get its code deployed so it can move on to the next project or invoice the customer.
Agile development methods have tended to merge testing and development teams (for good reasons). Testing becomes an inherent part of the software development work, reducing the likelihood of big problems at the end of a project.
Having said that, in my opinion, regardless of your development techniques, it is vital to get at the very least an independent review of your tests. Ideally, a separate team should complete your testing. Otherwise you can’t even begin to quantify the technical debt building up within your systems.