Most GRC programs fail not because the framework is wrong, but because it’s built to satisfy auditors, not protect businesses. Here’s what that costs you – and how to do it differently.
Most organisations have some version of GRC in place. Policies exist. Compliance boxes get ticked. A risk register lives somewhere in a shared drive, last updated before anyone currently on the team joined. And then a breach happens, or an audit goes badly, and everyone is surprised.
They shouldn’t be.
The most common GRC failure isn’t ignorance of the framework – it’s treating it as an administrative exercise rather than a decision-making system. You end up with documentation that describes security rather than delivers it.
That distinction matters more than most security conversations acknowledge.
Why GRC exists – and why it usually gets implemented wrong
GRC – governance, risk, and compliance – is a system for making security decisions that support a business rather than obstruct it. Not rules imposed from above, but a framework built around what the organisation is actually trying to achieve.
The problem is that most implementations start from the wrong end.
They start with a compliance requirement, reverse-engineer the policies needed to satisfy it, and call that a GRC program.
The risk register gets populated because ISO 27001 requires one, not because anyone is actively using it. Policies get written because auditors want to see them, not because they reflect how the business actually operates.
This is checkbox compliance. And it’s not just ineffective – it’s actively dangerous. It creates the appearance of security maturity without the substance.
When something goes wrong, the documentation says the right things. The reality doesn’t match.
A GRC framework that works starts with a different question: what is this business trying to achieve, and what could prevent it? Risk is the answer. Governance and compliance are how you respond to it.
Start with risk – and be honest about it
Risk sits at the centre of GRC.
Most organisations are far better at documenting risk than they are at being honest about it.
The starting point is asset classification – understanding what you’re actually trying to protect. Critical data, intellectual property, operational systems, customer-facing services.
Not everything is equal, and treating it as if it is means you’ll over-invest in protecting things that don’t matter and under-invest in protecting things that do.
Once assets are mapped, the threat picture becomes clearer.
- Sensitive customer data attracts financially motivated attackers.
- Source code repositories attract competitors and state actors.
- Physical sites face different exposure entirely.
The threats facing each asset differ in nature, method, and likely impact – and that shapes how risk should be assessed.
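To make that concrete, here’s a minimal sketch of an asset-to-threat mapping in Python – the tiers, asset names, and threat profiles are illustrative assumptions, not a prescribed taxonomy:

```python
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    """Illustrative classification tiers - adapt to your own scheme."""
    CRITICAL = 1
    IMPORTANT = 2
    STANDARD = 3


@dataclass
class Asset:
    name: str
    tier: Tier
    threats: list[str] = field(default_factory=list)


# Hypothetical asset inventory: each asset carries its own threat
# profile, which is what drives the risk assessment that follows.
inventory = [
    Asset("customer_database", Tier.CRITICAL,
          ["financially motivated attackers", "insider misuse"]),
    Asset("source_code_repos", Tier.CRITICAL,
          ["competitors", "state actors"]),
    Asset("office_wifi", Tier.STANDARD,
          ["opportunistic physical access"]),
]

for asset in inventory:
    print(f"{asset.name} ({asset.tier.name}): {', '.join(asset.threats)}")
```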
Risk assessment comes down to two dimensions: impact and likelihood.
How damaging would this event be – financially, operationally, reputationally? And how likely is it to occur given your current controls and environment?
Plot those two on a matrix, score them consistently, and you get a risk level you can act on.
The scoring methodology matters less than its consistent application. An organisation that assesses risk differently each quarter produces data that can’t be trended or compared. The value of a risk register is cumulative – it shows you how risk is changing over time, which controls are working, and where new exposure is emerging.
Risk reviews should happen at least annually for the full register, and quarterly for anything rated high.
Each risk needs a named owner – a senior person responsible for accepting that the risk level is appropriate and for escalating it if it changes. Without ownership, risks get logged and forgotten.
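Here’s a minimal sketch of what that looks like as a structure – the five-point scales, thresholds, and review cadences are illustrative assumptions to be calibrated against your own risk appetite:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Risk:
    description: str
    impact: int        # 1 (negligible) to 5 (severe) - illustrative scale
    likelihood: int    # 1 (rare) to 5 (almost certain) - illustrative scale
    owner: str         # named senior person who accepts or escalates

    @property
    def score(self) -> int:
        # Simple multiplicative matrix; what matters is applying the
        # same formula every quarter, not the formula itself.
        return self.impact * self.likelihood

    @property
    def level(self) -> str:
        # Illustrative thresholds - calibrate to your own appetite.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

    def next_review(self, assessed: date) -> date:
        # High risks reviewed quarterly, everything else annually.
        days = 90 if self.level == "high" else 365
        return assessed + timedelta(days=days)


risk = Risk("Customer data exposed via misconfigured storage",
            impact=5, likelihood=3, owner="Head of Engineering")
print(risk.level, risk.next_review(date(2025, 1, 15)))  # high 2025-04-15
```

The point isn’t the code – it’s that the scoring formula, the thresholds, and the review cadence are written down once and applied the same way every time.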
What to do when a control isn’t working
This is where most GRC programs go quiet. A control gets implemented, it gets checked off, and nobody asks whether it actually changed anything.
When a control isn’t working – when the risk level hasn’t moved, or when incidents keep recurring in the same area – the answer isn’t to add more controls. It’s to go back to the risk assessment and ask whether you’ve correctly understood the threat.
Often, the control addresses the symptom rather than the cause. A phishing training module doesn’t solve a culture that punishes people for reporting mistakes. A firewall rule doesn’t fix misconfigured cloud permissions.
Controls should reduce either the likelihood or the impact of a risk. If neither is moving, something is wrong with the control, the assessment, or both.
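That test can be made explicit. A sketch, assuming the same illustrative impact-and-likelihood scales as above:

```python
def control_is_working(before: tuple[int, int], after: tuple[int, int]) -> bool:
    """Compare (impact, likelihood) scores before and after a control.

    A control should reduce at least one dimension; if neither moved,
    revisit the control or the assessment itself.
    """
    impact_before, likelihood_before = before
    impact_after, likelihood_after = after
    return impact_after < impact_before or likelihood_after < likelihood_before


# Hypothetical example: phishing training that never moved likelihood.
print(control_is_working(before=(4, 4), after=(4, 4)))  # False - re-assess
print(control_is_working(before=(4, 4), after=(4, 2)))  # True - likelihood fell
```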
For technical controls specifically, penetration testing is one of the most direct ways to find out whether a control is actually doing what you think it is.
Why security policies fail – and what to do about it
Governance is what makes risk management repeatable. Documented policies, defined responsibilities, clear ownership – the infrastructure that ensures security doesn’t live in one person’s head and doesn’t fall apart when someone leaves.
But governance has a failure mode that organisations consistently underestimate: policies that nobody follows.
- Lock down laptops too tightly, and people find workarounds.
- Require complex passwords to be changed every 30 days, and they get written on sticky notes.
- Mandate a slow, bureaucratic approval process for software tools, and teams start using personal accounts for work data.
If staff are bypassing a control, that is not a compliance problem. It is a design problem. The control is wrong. It’s asking people to choose between doing their job and following the rules, and unsurprisingly, they choose their job.
The irony is that overly strict controls often create more risk than they prevent – because the workarounds are almost always less secure than whatever the policy was trying to enforce.
The fix isn’t stricter enforcement. It’s redesigning the control to be compatible with how work actually gets done.
Security that works with people is more effective than security that works against them, even if it looks less rigorous on paper.
Metrics are how you catch this early.
Security training completion rates, phishing simulation results, patch compliance rates, incident trends – these tell you whether the governance framework is working in practice, not just on paper.
Patterns in that data are diagnostic.
If phishing click rates stay flat after multiple training rounds, the training isn’t the solution. If patch compliance drops in one team, there’s a resourcing or tooling problem to fix, not a people problem to escalate.
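That kind of check is simple to automate. A sketch with invented figures – the threshold is an illustrative choice, not a benchmark:

```python
# Quarterly phishing simulation click rates after each training round
# (hypothetical figures).
click_rates = [0.18, 0.17, 0.18, 0.17]

# If the rate barely moves across rounds, more training is unlikely
# to be the answer.
improvement = click_rates[0] - click_rates[-1]
if improvement < 0.05:
    print(f"Click rate moved only {improvement:.0%} over "
          f"{len(click_rates)} rounds - rethink the control, not the dose.")
```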
Incident reporting culture sits underneath all of it
Teams that punish mistakes get underreporting. Underreporting means the same vulnerabilities recur because nobody connected the dots between incidents.
An environment where people report phishing clicks, near misses, and process failures without fear is not a soft environment – it’s one that learns faster than its attackers.
For teams building software, the same principle applies to the development process itself: embedding security from the start is cheaper and more effective than bolting it on later. Here’s what that shift looks like in practice.
How to decide whether to comply with a regulation
Compliance is the most visible part of GRC, and the most frequently misunderstood. The default assumption is that compliance requirements are obligations to be met. Some are. Many aren’t.
There are three distinct categories worth separating:
- Mandatory compliance – legal requirements that apply to your organisation based on sector, geography, or the nature of the data you handle. GDPR for organisations processing EU personal data; NIS2 for critical sectors and DORA for financial services across the EU. Non-compliance here isn’t a business decision – it’s a legal exposure.
- Commercial compliance – certifications and frameworks that aren’t legally required but open doors. ISO 27001 is the most common example: many enterprise customers and regulated-industry partners won’t sign contracts with vendors who can’t demonstrate it. The compliance decision here is a sales and market access question as much as a security one.
- Voluntary frameworks – standards like the NIST Cybersecurity Framework or the CIS Controls that provide useful structure without any external mandate. The value is in the methodology, not the certification.
Treating all three categories the same way produces bad decisions.
Spending significant resources on a voluntary framework while ignoring a mandatory obligation is a governance failure. So is pursuing an expensive certification that none of your target customers will ever ask for.
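If it helps to make that prioritisation explicit, here’s a sketch – the obligations listed are hypothetical examples, not guidance on which frameworks apply to you:

```python
from enum import IntEnum


class Category(IntEnum):
    # Lower value = higher priority; the ordering is the point.
    MANDATORY = 1    # legal exposure if ignored
    COMMERCIAL = 2   # market access, customer contracts
    VOLUNTARY = 3    # useful methodology, no external mandate


# Hypothetical obligations for an illustrative organisation.
obligations = [
    ("CIS Controls adoption", Category.VOLUNTARY),
    ("GDPR compliance", Category.MANDATORY),
    ("ISO 27001 certification", Category.COMMERCIAL),
]

# Resources follow category first - a voluntary framework never
# outranks a mandatory obligation.
for name, category in sorted(obligations, key=lambda o: o[1]):
    print(category.name, "-", name)
```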
Outcome-based regulation changes the calculation
Increasingly, regulators define what good looks like rather than prescribing exactly how to get there.
NIS2 is a clear example – it specifies required capabilities and outcomes across risk management, incident handling, and supply chain security, but leaves implementation to the organisation.
This is good policy design. It acknowledges that a one-size-fits-all technical prescription can’t account for the diversity of organisations in scope. But it shifts the burden onto organisations to genuinely interpret what compliance means for their context, rather than following a checklist.
That interpretation requires security judgment, not just legal review.
A small professional services firm and a hospital group might both fall under NIS2, but the controls that constitute appropriate risk management for each look very different.
Getting that translation right is the work – and it can’t be delegated entirely to a compliance team.
Vendor risk: the risk you let in without realising it
Your own security posture is only part of the picture. Every supplier onboarded, every tool deployed, every third party with any form of access to your systems extends your attack surface. In most organisations, that surface is considerably larger than anyone has formally mapped.
Vendor risk management isn’t about being suspicious of suppliers. It’s about not assuming trust where it hasn’t been established.
The questions worth asking before any significant vendor relationship:
- Do they hold relevant certifications (ISO 27001, Cyber Essentials, SOC 2)?
- Has their product been independently penetration tested, and will they share the findings?
- How do they manage vulnerabilities in their own software and infrastructure?
- What access will they have to your systems – and is that access scoped correctly?
- What happens to your data if the relationship ends?
These aren’t bureaucratic hurdles.
They’re the minimum basis for making an informed decision about the risk a vendor introduces.
An organisation that can’t answer these questions about its critical suppliers has a material gap in its risk picture, regardless of how well-managed its internal controls are.
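One way to stop those answers living in email threads is to record them as a structured assessment, where an unanswered question is itself a finding. A sketch with illustrative field names:

```python
from dataclasses import dataclass, field


@dataclass
class VendorAssessment:
    name: str
    # Each answer is True, False, or None (not yet established).
    answers: dict[str, bool | None] = field(default_factory=lambda: {
        "holds_relevant_certifications": None,
        "independently_pen_tested": None,
        "manages_own_vulnerabilities": None,
        "access_scoped_correctly": None,
        "data_handling_on_exit_defined": None,
    })

    def gaps(self) -> list[str]:
        """Questions unanswered or answered badly - either way,
        trust that hasn't been established."""
        return [q for q, a in self.answers.items() if a is not True]


vendor = VendorAssessment("hypothetical-saas-provider")
vendor.answers["holds_relevant_certifications"] = True
print(vendor.gaps())  # everything not yet confirmed is still a gap
```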
The loop that most organisations miss
The reason GRC works – when it works – is that governance, risk, and compliance aren’t separate programs running in parallel. They’re a loop.
Risk assessment drives what governance policies need to exist. Governance structures ensure risks are owned and monitored. Compliance requirements feed back into risk, because falling foul of a regulation is itself a risk with an impact and a likelihood that needs to be assessed and managed.
Break any link in that loop and the system degrades.
Risk management without governance produces decisions that live in spreadsheets and get forgotten. Governance without risk produces policies disconnected from the actual threats. Compliance without either produces documentation that satisfies auditors and protects no one.
The organisations that do this well aren’t the ones with the most sophisticated tools or the thickest policy libraries. They’re the ones where security decisions are made deliberately, with clear ownership, against a shared understanding of what the business is trying to protect.
That’s a cultural outcome as much as a process one. And it’s harder to fake than any compliance certificate.
Security that works with your business rather than against it starts with the right foundations. Infinum’s security practice helps organisations build GRC frameworks grounded in real risk, not just audit readiness. Explore our cybersecurity services to see where we can help.