As enterprises navigate an increasingly varied and complex regulatory landscape, a common thread is emerging from major regulations like DORA (Digital Operational Resilience Act), the EU AI Act, and the NIST AI Risk Management Framework. Despite their different origins, focus areas, and objectives, they all point to the same underlying requirement if organisations hope to achieve successful compliance: robust data governance.
When all regulatory roads lead to the same data governance destination, how can enterprises effectively navigate a path forward without running off course?
Out with the old
As a starting point, organisations need to get a handle on their data. This means understanding what data they hold, where it is stored, and who has access to it.
It also requires a granular understanding of the types of data held. For example, is the data a collection of sales figures – or is it personal data containing Personally Identifiable Information (PII), such as scanned passport images, names, and home addresses?
The latter could run afoul of any number of data privacy regulations. DORA's focus on the financial sector could pose issues if customers who closed their bank or brokerage accounts didn't have their data deleted after the usual five-to-seven-year retention period. (Even GDPR, to pluck another regulatory example from the pile, could be problematic: it famously gives consumers the right to have their data erased from systems.)
A key piece of governance advice here is to ensure transparency around any retention policies and to clearly communicate them to all staff. Knowing what the retention policies for various types of data actually are – instead of having them buried in a dusty binder or hidden away on the company intranet – is the first step to making sure that information isn’t accidentally mishandled and that it isn’t being kept longer than strictly necessary.
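A transparent retention policy can also be made operational rather than left in a binder. The sketch below shows one minimal way to flag records held past their retention period; the categories, field names, and retention periods are illustrative assumptions, not prescriptions from any regulation.

```python
from datetime import datetime, timedelta

# Hypothetical retention rules, in days, per data category.
# Real periods depend on jurisdiction and data type.
RETENTION_POLICIES = {
    "sales_figures": 365 * 10,   # e.g. ten years for financial records
    "customer_pii": 365 * 7,     # e.g. seven years after account closure
    "marketing_leads": 365 * 2,
}

def records_past_retention(records, today=None):
    """Return records held longer than their category's retention period."""
    today = today or datetime.utcnow()
    expired = []
    for record in records:
        limit_days = RETENTION_POLICIES.get(record["category"])
        if limit_days is None:
            continue  # unknown category: flag for review in a real system
        if today - record["created"] > timedelta(days=limit_days):
            expired.append(record)
    return expired
```

Running a check like this on a schedule turns a written policy into a routine control, so over-retained data surfaces automatically instead of being discovered during an audit.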
Centralising content is another good preventative step. Bringing all the documents, emails, and other files together gives enterprises the full picture of what the organisation holds, with the added benefit of surfacing any data that is irrelevant or no longer necessary and can be disposed of.
Make “High Risk” more manageable
Data scattered across multiple systems isn't just a potential landmine for DORA; it could also present problems under the EU AI Act. The act's requirements cover the governance of information stored in systems, but also the robustness of those systems themselves.
Specifically, it categorises AI systems by risk level: minimal, limited, high, or unacceptable. For their part, enterprises need to do an honest accounting: in the event of a cyber-attack, how risky is each system they use? Is it likely to be a high point of risk? Does it have strong enough security controls to protect itself from an intruder?
The advice for enterprises that want to harden their defences around high-risk systems is to centralise their content in a system designed around a zero-trust framework – a powerful defence against threats.
Instead of assuming that users or devices inside a network are safe, zero trust treats everything as potentially risky, whether it sits inside or outside the company. Every access request must be verified, authenticated, and authorised before it is allowed. This is an instance where cybersecurity and data governance go hand in hand: you can't properly govern data that isn't centralised and properly secured.
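The verify-authenticate-authorise sequence above can be sketched in a few lines. This is a minimal illustration of the pattern, not a real security implementation: the token lookup stands in for an identity provider, and the data shapes are assumptions.

```python
# A minimal sketch of the zero-trust pattern: every request is
# authenticated and authorised before access is granted, regardless of
# whether it originates inside or outside the network.

def authenticate(request, known_tokens):
    """Verify the caller's identity. A token map stands in for a real
    identity provider in this sketch."""
    return known_tokens.get(request.get("token"))

def authorise(user, resource, acl):
    """Check the authenticated user against an access-control list."""
    return resource in acl.get(user, set())

def handle_request(request, known_tokens, acl):
    # Never trust by network location: any "internal" flag on the
    # request is deliberately ignored.
    user = authenticate(request, known_tokens)
    if user is None:
        return "denied: unauthenticated"
    if not authorise(user, request["resource"], acl):
        return "denied: unauthorised"
    return f"granted: {user} -> {request['resource']}"
```

The design point is that the deny path is the default: a request earns access only by passing every check, which is the inversion of the old perimeter model where anything inside the network was trusted.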
Nip AI bias in the bud
The NIST AI Risk Management Framework has a different focus, but data governance plays just as important a role in satisfying its requirements. A voluntary framework published by the U.S. National Institute of Standards and Technology (NIST), it aims to help companies use AI responsibly by spotting and managing risks like bias, privacy issues, and safety concerns.
In short, the goal is to build trustworthy AI – meaning it’s fair, transparent, safe, and accountable. None of this is possible without proper data governance.
The integrity of AI outputs hinges on the data the model consumes – so a centralised repository in which all data has been audited and vetted produces better results on the AI side. Centralisation also lets organisations easily identify which data is being used to train a model, and pinpoint where bias or other undesirable elements are seeping in.
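With training data in one vetted repository, tracing which sources feed a model and surfacing crude imbalance signals becomes a simple query. The sketch below is one illustrative approach; the field names ("source", "group"), the vetted-source set, and the imbalance threshold are all assumptions, and real bias detection involves far more than class counts.

```python
from collections import Counter

def audit_training_data(records, vetted_sources, imbalance_threshold=0.8):
    """Report training records drawn from unvetted sources, and any group
    whose share of the dataset exceeds the imbalance threshold - a crude
    first signal that a model may learn a skewed view of the data."""
    unvetted = {r["source"] for r in records} - vetted_sources
    groups = Counter(r["group"] for r in records)
    total = sum(groups.values())
    dominant = {g: n / total for g, n in groups.items()
                if n / total > imbalance_threshold}
    return {"unvetted_sources": unvetted, "dominant_groups": dominant}
```

An audit like this is only feasible when the data sits in one place: scattered across systems, neither the provenance question ("where did this record come from?") nor the composition question ("what does the model actually see?") can be answered reliably.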
Don’t downplay data governance
In the end, data governance isn’t just a compliance checkbox – it’s a foundational element for everything from risk management to ethical AI. Whether a new regulation comes from Europe or the United States, the message is clear: if enterprises want to stay ahead of regulatory scrutiny and on the right path, they need to get on board with data governance.