Nationwide prepares for GDPR rules with centralised data management

From 2018, European legislation will impose hefty fines on banks that do not protect customer data. Nationwide Building Society (offices pictured) has centralised its data management to minimise exposure to breaches, says Richard Jordan, Testing Services Practice manager.

In May 2018, the European Union’s General Data Protection Regulation (GDPR) will come into force, mandating that all data used within firms, including data used in testing, is anonymised and held securely. Nationwide Building Society has centralised its data management capabilities to meet the regulatory challenge, said Richard Jordan, Testing Services Practice manager.

Nationwide has been focusing its efforts on consumer data protection since 2010, when it created its Testing Services Practice: a centralised group tasked with establishing best practice in testing and with implementing data management capabilities across the firm in a federated, hub-and-spoke model. The group now holds a single version of all of its data, which it delivers as necessary to development and testing teams. As part of the delivery process, the group is able to anonymise the data so that it complies with the GDPR.

The group was originally created to meet the UK’s Data Protection Act, a predecessor of the more stringent GDPR, and it is now responsible for ensuring that Nationwide has the technical capability to keep all test data compliant with data protection legislation.

Jordan said that getting those systems in place to meet regulator expectations is crucial. Being in breach of the GDPR, for example by having data that should be secure stolen, can result in hefty fines of up to €20m, or up to 4% of annual worldwide turnover, whichever is higher. The GDPR rules also require all firms that find their data compromised to report the breach to the relevant authority, an unappealing prospect for banks, which put a premium on the trust associated with their brand.

“In general, the GDPR pushes you to use as little data as possible. We have a single version of the truth – one complete data set – rather than having 20 copies throughout our core systems. We then deliver that Data as a Service to people who need it in testing,” said Jordan.

According to Jordan, the first step towards complying is having a thorough understanding of where existing data sits in the firm. Once that is in place it is easier to build the technical capabilities required to manage it.

The Testing Services Practice uses CA Technologies’ Test Data Manager tool to deliver masked, subsetted, and synthetic test data to the test teams that need it. The Data-as-a-Service approach means that test teams only receive the data they need, rather than having access to unnecessarily large data sets that increase the chances of a breach.
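The masking and subsetting described above can be sketched in a few lines. This is a toy illustration only, not CA’s Test Data Manager or Nationwide’s actual process; the record fields and the `mask`/`subset` helpers are invented for the example. Direct identifiers are replaced with irreversible tokens, and each team receives only the rows its tests need:

```python
import hashlib

# Hypothetical production records -- illustrative only, not real customer data.
production = [
    {"account": "10000001", "name": "Alice Smith", "balance": 1200},
    {"account": "10000002", "name": "Bob Jones",   "balance": 3400},
    {"account": "10000003", "name": "Carol White", "balance": 560},
]

def mask(record):
    """Replace direct identifiers with deterministic but irreversible tokens."""
    token = hashlib.sha256(record["account"].encode()).hexdigest()[:8]
    return {"account": token, "name": f"Customer-{token}", "balance": record["balance"]}

def subset(records, predicate):
    """Hand a test team only the rows its tests actually need, already masked."""
    return [mask(r) for r in records if predicate(r)]

# A team testing low-balance handling receives one masked row, not the full set.
low_balance = subset(production, lambda r: r["balance"] < 1000)
```

Deterministic hashing keeps referential integrity across tables (the same account always maps to the same token), which is what makes masked data usable in end-to-end tests.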

Jordan also praises service virtualization, a technology that simulates dependencies in apps, for its ability to bypass the need to use sensitive data by creating entirely new, simulated data.
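The idea behind service virtualization can be shown with a minimal stub. This is a hand-rolled sketch, not any particular vendor’s product; the class and payload shape are assumptions for illustration. The stub stands in for a real backend and fabricates responses on the fly, so no sensitive data ever enters the test environment:

```python
import random

class CustomerServiceStub:
    """Stands in for a real customer API during testing.

    Responses are generated on the fly, so no production data is involved.
    """

    def __init__(self, seed=42):
        self.rng = random.Random(seed)  # seeded so test runs are repeatable

    def get_customer(self, customer_id):
        # Fabricate a response shaped like the real service's payload.
        return {
            "id": customer_id,
            "name": f"Test Customer {customer_id}",
            "balance": self.rng.randint(0, 10_000),
        }

stub = CustomerServiceStub()
response = stub.get_customer("C-001")
```

Seeding the generator is the key design choice here: simulated data stays entirely synthetic while remaining reproducible between test runs.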

According to Jordan, Nationwide has tackled 80% of the problems around data protection. The remaining 20% centres on what he calls sensitive data combinations: “When you combine certain sets of masked data with others, you can unintentionally open yourself up to revealing live data. We need to figure out which combinations are risky and take the necessary steps to eliminate this risk,” said Jordan.

But in general banks are behind in their preparation for the GDPR, and while penalties and loss of trust make powerful arguments for the importance of data security, it remains to be seen whether they are enough to motivate investment in adequate data management capabilities in the financial sector. Huw Price, vice president and global quality assurance strategist at DevOps tooling specialist CA Technologies, believes that currently the majority of banks have some way to go before being compliant with the GDPR.

“Under the GDPR, a bank should be able to pinpoint the occurrence of any one piece of data on its system. For example, that there are 50 copies of Huw Price in development, one in this spreadsheet, one on this desktop, and so on,” said Price.

The reality, according to Price, is that most banks are still far away from being able to do this. Instead they are taking a wait-and-see approach, hoping that if no one is fully compliant then regulators will have to take a softer approach.

That is not to say that banks are starting at zero. They acknowledge that they cannot use production data in test environments, and have taken steps to mask it by altering certain fields, for example, said Price. “But whether it is masked enough is debatable. If you have multiple sets of masked data you can cross-reference them to arrive at the original data.”
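The cross-referencing risk Price and Jordan describe can be shown with a toy example (the field names and values here are invented, not drawn from any real data set). Two data sets are masked independently, each hiding the customer’s name, but both retain the same quasi-identifiers, and joining on those fields re-links the records:

```python
# Two independently masked data sets: each hides the name, but both keep
# the same quasi-identifiers (postcode prefix and birth year).
loans = [
    {"postcode": "SN38", "birth_year": 1980, "loan": 25_000},
]
marketing = [
    {"postcode": "SN38", "birth_year": 1980, "email": "h.price@example.com"},
]

# Cross-referencing on the shared fields links the "anonymous" loan record
# back to a contactable individual -- the sensitive data combination.
linked = [
    {**loan, **contact}
    for loan in loans
    for contact in marketing
    if loan["postcode"] == contact["postcode"]
    and loan["birth_year"] == contact["birth_year"]
]
```

Neither data set reveals anything sensitive on its own; the combination does, which is why masking has to be assessed across data sets rather than one table at a time.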

Jes Breslaw, EMEA marketing director at Delphix, the US-based Data-as-a-Service platform, agrees with Price’s assessment. According to him, most banks are just scraping by with workarounds, such as testing against old or subsetted data (a small but representative sample of a data set). Poor data management practices force banks to run the risk either of letting bugs into production, because applications were not tested with adequate data, or of suffering a breach, because production data was used in testing.

But data management does not just have to be a cost. Centralising data management and using a platform that can deliver lightweight ‘virtual’ data (copies that take up far less space than normal data and can be rapidly transferred) can speed up development, improve testing and help enable DevOps, said Breslaw.

“Moving to centralised data delivery can speed up testing by removing one of the bottlenecks, which is provisioning those test environments. It can take minutes rather than weeks to transfer large data sets,” said Breslaw.
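The reason virtual data is so quick to provision is that each copy stores only its own changes while sharing the unchanged bulk of the base data set. The copy-on-write idea can be sketched as follows; this is a conceptual illustration, not Delphix’s implementation, and the class and method names are invented:

```python
class VirtualCopy:
    """Copy-on-write sketch: a test environment records only its own
    changes (the delta) while sharing the read-only base snapshot."""

    def __init__(self, base):
        self.base = base   # shared, read-only base data set
        self.delta = {}    # per-copy overrides only

    def read(self, key):
        # A copy's own change wins; otherwise fall through to the base.
        return self.delta.get(key, self.base.get(key))

    def write(self, key, value):
        self.delta[key] = value  # the base is never modified

base = {"row1": "original", "row2": "original"}
env_a = VirtualCopy(base)   # provisioned instantly: no data is copied
env_a.write("row1", "changed-by-A")
env_b = VirtualCopy(base)   # a second environment, unaffected by A's writes
```

Because provisioning a new environment copies nothing up front, spinning one up is near-instant regardless of how large the base data set is, which is the bottleneck removal Breslaw describes.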

Nationwide’s Richard Jordan said that the centralisation effort is one step along the path to DevOps. The next step for his team is to start thinking about the impact of data on app design. “We are starting to focus on shifting left and DevOps, and part of that is asking those questions around data at the architecture and design stages. If you do not design your apps with data in mind, then you will feel the pain later. And that requires conversations between my team and the design and architecture teams.”
