Data Governance – It’s not what you know, it’s what you can prove
Data governance has long been a source of frustration for banks. It’s critical to the production of all external disclosures – standard regulatory returns and statutory accounts alike. And in principle, it should be no less important to the internal MI used to support business decision-making and enterprise risk management. At some point in each of these processes, senior management are asked to attest that the numbers in front of them constitute a complete and accurate picture of their organisation.
How on earth can they manage this? The raw volumes of data underlying a typical reporting process are staggering – millions of independent pieces of information, with a 100% probability of at least some being wrong. And along the lifecycle between initial trade booking systems and MI reporting tools, there may be dozens of points at which the data is aggregated and adjusted, each adding the possibility of further error.
The end point is a summarised, high-level view for C-level executives – on what basis can they possibly “know” the numbers are right?
Clearly there need to be controls in place at each point of aggregation, to test that the quality of the data has not suffered on its journey – and, hopefully, has improved. There is the universal Finance go-to method – reconciliation. There are also variance analysis, sample testing, “deep dives” and reasonableness checks. Contrary to popular belief, none of these methods actually improves data quality – that relies on separate processes which remediate the errors found. But critically, they produce evidence that things have been done in the right way. And for some purposes, that’s as important as the data itself.
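To make the distinction concrete, a basic reconciliation control boils down to a comparison that produces evidence of the check rather than a correction of the data. The sketch below is purely illustrative (the control name, totals, tolerance and field layout are all invented for the example), but it shows how even a simple control can emit a structured record of what was tested, when, and with what outcome.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ReconciliationEvidence:
    """Structured record that a control was performed: the evidence of the
    check, not a correction of the underlying data."""
    control_name: str
    source_total: float
    target_total: float
    tolerance: float
    difference: float
    passed: bool
    performed_at: str


def reconcile(control_name, source_total, target_total, tolerance):
    """Compare totals from two points of aggregation and record the outcome."""
    difference = source_total - target_total
    return ReconciliationEvidence(
        control_name=control_name,
        source_total=source_total,
        target_total=target_total,
        tolerance=tolerance,
        difference=difference,
        passed=abs(difference) <= tolerance,
        performed_at=datetime.now(timezone.utc).isoformat(),
    )


# Illustrative figures only: a ledger total reconciled against a reporting feed.
evidence = reconcile("GL vs regulatory feed", 1_000_250.00, 1_000_175.00, tolerance=500.00)
print(asdict(evidence))
```

The point is that the output is the evidence itself: a record that can be stored, aggregated and inspected later, independently of whatever remediation a failed check might trigger.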
It’s therefore disappointing to see that whilst the technology for aggregating data advances over time, the equivalent methods for managing data governance artefacts often remain very primitive: vaguely worded signoff emails at the end of convoluted chains; ambiguous working spreadsheets which purport to be reconciliations. If the evidence is captured at all, it’s often in a highly unstructured form that is impossible to aggregate effectively alongside the data it relates to, and is mostly lost by the time the numbers are presented to senior management – who are nonetheless expected to attest to the entirety of the data being “complete and accurate”.
It doesn’t have to be that way. Systems and processes can be put in place to capture evidence of data controls and package it in a form that can travel with the data and aggregate effectively. This isn’t a new idea – our Software division, after all, created BCS Integrity in response to just such a need. There are, however, challenges – both technical (fragmented systems and data, storage costs, performance considerations) and behavioural (attachment to bespoke spreadsheets, resistance to signing off via highly structured systems, and so on). A key step is to shift the mindset of systems developers, from treating data governance support as a nice-to-have requirement to treating it as a must-have.
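One way to picture evidence that “travels with the data” is a wrapper that carries a figure together with the control records attached to it, so that rolling the numbers up also rolls up the evidence. The sketch below is a generic illustration only (the class names and evidence format are invented, and it is not a description of how BCS Integrity works):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class GovernedFigure:
    """A reported number packaged with evidence of the controls applied to it."""
    label: str
    value: float
    evidence: List[str] = field(default_factory=list)  # e.g. control IDs and outcomes


def aggregate(label, parts):
    """Sum the underlying values and carry every piece of attached evidence forward."""
    return GovernedFigure(
        label=label,
        value=sum(p.value for p in parts),
        evidence=[e for p in parts for e in p.evidence],
    )


# Illustrative only: two desk-level figures rolled up into a divisional total.
desk_a = GovernedFigure("Desk A", 120.5, ["recon:GL-vs-subledger:pass"])
desk_b = GovernedFigure("Desk B", 80.0, ["recon:GL-vs-subledger:pass", "variance:month-on-month:reviewed"])
division = aggregate("Division total", [desk_a, desk_b])
print(division.value, division.evidence)
```

However it is implemented, the design point is the same: if evidence is captured in a structured form at the moment a control is performed, it can survive aggregation and still be there when senior management are asked to sign off.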
There is, after all, a strong regulatory business case. With the advent of the Senior Managers Regime, executives will now be held legally accountable for prescribed responsibilities that include “production and integrity of the firm’s financial information and its regulatory reporting in respect of its regulated activities”. We have yet to see how this new legislation plays out in practice, and whether the SMR, in conjunction with other regimes such as BCBS 239 (PERDARR), will see senior managers face direct legal repercussions for poor-quality data. But more than ever before, solid data governance can be expected to be a key priority for our clients.
And to quote the 2001 film Training Day: “It’s not what you know, it’s what you can prove”.