CyberUK 2026 didn’t feel like a typical industry conference. It felt heavier. More urgent. There was an unmistakable undercurrent of concern about where we are heading – not just as a profession, but as a nation. Two themes dominated almost every conversation: AI, and doing the basics well. Both are present realities rather than distant possibilities – forces creating new vulnerabilities, new attack paths, and systemic risk at scale.

This wasn’t theoretical. It was a call to arms.

A perfect storm is forming

National Cyber Security Centre (NCSC) CEO Richard Horne’s opening speech captured the mood with striking clarity. He described the current environment as a “perfect storm” – a convergence of accelerating technology, expanding attack surfaces, and rising geopolitical tension.

And that geopolitical dimension matters. It is impacting the UK in very real terms. Cyber is now one of the primary arenas where global tensions play out. Nation states, organised crime, and opportunistic attackers are exploiting the same systemic weaknesses. But perhaps the most uncomfortable part of his message was closer to home.

Horne highlighted that too many organisations are still “not patching with urgency”. Despite years of breaches and lessons learned, known vulnerabilities remain exposed far longer than they should.

The issue is no longer awareness. It is action. And in his words, many are still “failing to grasp the nettle.”

Dan Jarvis: A new model for national cyber defence

The UK’s Minister of State for Security, Dan Jarvis, built on this with a clear focus on leadership, accountability, and national coordination. At the centre of this is the pledge – a visible commitment from organisations to take cyber security seriously at the highest levels. It reinforces that cyber is not just a technical issue, but a board-level responsibility.

But importantly, this wasn’t just about words.

Jarvis highlighted practical investment to support small and medium-sized enterprises (SMEs) in achieving Cyber Essentials (CE). This is critical. If we are serious about national resilience, we cannot leave smaller organisations behind – they are often the weakest link in supply chains and the most exposed.

One of the most significant ideas was the concept of a National-Scale AI Cyber Defence Capability. This represents a step change. The vision is not just better tools or more investment, but a fundamentally different model:

  • AI-driven defence operating at national scale
  • A new model of collaboration between government, industry, and security providers
  • Access to sovereign, classified intelligence to inform defensive posture
  • A higher, more consistent standard of cyber security across the UK.

This is about moving from fragmented, organisation-by-organisation defence to something more coordinated, intelligent, and adaptive. It recognises a critical reality: attackers already operate at scale, so defence must do the same. The opportunity here is powerful – combining AI, threat intelligence, and shared capability to detect and respond faster than any single organisation could alone.

But it also raises an important challenge. How do we ensure this capability doesn’t just protect the top tier – government and critical infrastructure – but actually raises the baseline across the entire UK economy?

Margaret Heffernan: Wilful blindness in a new era

Margaret Heffernan, Professor of Practice at the University of Bath, used her session on wilful blindness to provide a human lens on the problem. Organisations often fail not because they lack information, but because they choose not to act. And ignorance is no defence: if an organisation could have known, or should have known, about a problem, it still carries the responsibility to fix it. In cyber, this is painfully familiar:

  • Known vulnerabilities left unpatched
  • Risks accepted without understanding the impact
  • Legacy systems tolerated
  • False confidence in controls.

This connects directly to both Horne and Jarvis.

We are building more advanced capabilities – AI defence, national collaboration, intelligence sharing – but at the same time, many organisations are still not addressing the basics. In an AI-driven threat landscape, wilful blindness becomes a force multiplier for attackers.

But Heffernan didn’t stop at the diagnosis. She challenged leaders to rethink how they respond.

The solution is not simply more process, more reporting, or more compliance. Instead, she pointed to the need for out-of-the-box thinking: looking at solutions from different perspectives, involving people who may not have a direct technical understanding of the problem, and creating environments where problems can be surfaced early without fear.

This requires leaders to enable action rather than unintentionally constraining it.

This is critical, because in many organisations, the barriers to fixing vulnerabilities are not technical – they are organisational:

  • Competing priorities
  • Fear of disruption
  • Complex governance
  • Lack of clear accountability.

I believe her message reframes the issue: cyber resilience is as much about leadership behaviour and culture as it is about technology. If leaders do not create the conditions for action – if they don’t allow teams to challenge, to prioritise, and to move quickly – then wilful blindness persists, even in well-funded organisations.

In the context of AI and rapidly evolving threats, this becomes even more important. The organisations that succeed will not just be those with the best tools.

They will be the ones where leaders:

  • Encourage innovation in solving security problems
  • Accept short-term disruption for long-term resilience
  • And most importantly, allow action to happen at pace.

AI, Anthropic and OpenAI: The speed of security

Sessions from Jacob Klein, Head of Threat Intelligence at Anthropic, and Amy Burnett, a security researcher at OpenAI, were a particular highlight, charting the rapid evolution of AI in vulnerability discovery and secure software development. There is real momentum in:

  • AI-assisted identification of vulnerabilities in code
  • Embedding security directly into development pipelines
  • Automating remediation guidance and fixes.

The direction is clear: security is becoming automated, continuous, and embedded.

But this is a race, because the same capabilities are available to attackers. We are entering an era in which vulnerabilities are discovered faster, exploits are developed faster, and attacks are executed faster. That leads to a simple but critical reality: the speed of remediation becomes the defining control.

Despite all the innovation, one issue stood out above all others. We are still not doing vulnerability management well enough, and it is still the weakest link. Patching delays, incomplete visibility, and inconsistent prioritisation remain widespread. This is the gap that attackers exploit. And in the context of a perfect storm – AI-driven threats, geopolitical tension, and expanding digital estates – it becomes a systemic risk. No amount of advanced AI defence will compensate for unpatched, known vulnerabilities.
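The prioritisation point can be made concrete. A minimal sketch of consequence-aware remediation scoring follows – every field name, weight, and threshold here is an assumption for illustration, not a standard; a real programme would draw on CVSS, known-exploited lists, and the organisation’s own asset context:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the weights below are invented, not a standard.

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float              # base severity, 0.0-10.0
    exploited_in_wild: bool  # e.g. appears on a known-exploited list
    asset_criticality: int   # 1 (low) to 5 (business-critical), set by the org
    published: date

def remediation_priority(v: Vulnerability, today: date) -> float:
    """Higher score = patch sooner. Blends severity, exploitation, and context."""
    age_days = (today - v.published).days
    score = v.cvss * v.asset_criticality   # severity weighted by business context
    if v.exploited_in_wild:
        score *= 2                         # active exploitation dominates
    score += min(age_days / 30, 6)         # a growing exposure window adds urgency
    return score

vulns = [
    Vulnerability("CVE-0000-0001", 9.8, False, 2, date(2026, 5, 1)),
    Vulnerability("CVE-0000-0002", 7.5, True, 5, date(2026, 4, 1)),
]
today = date(2026, 6, 1)
for v in sorted(vulns, key=lambda v: remediation_priority(v, today), reverse=True):
    print(v.cve_id, round(remediation_priority(v, today), 1))
```

Note that the lower-CVSS vulnerability wins here: active exploitation of a business-critical asset outweighs raw severity, which is exactly the “real-world consequence” judgement that pure probability ranking misses.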

We must get the balance right between AI capability and human supervision.

AI – at its core – is a probability engine. It makes decisions based on patterns, data, and likelihood of outcomes. That is incredibly powerful for identifying vulnerabilities and suggesting remediation actions at speed. But it also introduces risk.

If left unchecked, AI agents could become chaotic by:

  • Taking remediation actions that have unintended operational impact
  • Misinterpreting context or business criticality
  • Prioritising based on probability rather than real-world consequence
  • Scaling errors just as quickly as they scale solutions.

There is a real danger of “agents running riot” – automating decisions without sufficient governance, oversight, or understanding of business context. So, the challenge is not just adopting AI. It is governing AI effectively by:

  • Ensuring human-in-the-loop oversight for critical decisions
  • Embedding critical thinking and contextual judgement alongside automation
  • Defining clear guardrails for what AI agents can and cannot do
  • Continuously validating AI-driven actions against real-world outcomes
  • Treating AI as an augmentation of human capability – not a replacement.

The organisations that succeed will not be those that automate everything. They will be those that combine the speed of AI with the judgement of experienced security professionals. In the end, cyber security is not just about identifying and fixing vulnerabilities. It is about making the right decisions at the right time, with the right context. And that still requires humans.

The UK challenge: Raising the baseline

There is also a broader question that CyberUK surfaced. Are we doing enough as a nation?

The UK’s Cyber Resilience Act and similar initiatives are important, but they largely focus on critical sectors. Yet the UK’s resilience depends on far more than critical national infrastructure (CNI). Housing, education, healthcare, manufacturing, retail, SMEs – these organisations form the backbone of the economy, and many remain under-protected. If we truly embrace the idea of national-scale cyber defence, then we must also commit to:

  • Raising the baseline across all organisations
  • Embedding minimum standards of cyber hygiene
  • Ensuring accountability beyond regulated sectors.

Because national resilience is only as strong as its weakest link – and those weak links cost the UK economy untold amounts of money.

The future: AI defence, self-healing systems, and agent-based threats

Looking ahead, the direction is both exciting and challenging.

We are moving towards:

  • AI-driven vulnerability discovery at scale
  • Automated, intelligent remediation – self-healing software
  • Continuous validation of security controls
  • Defence against AI agent-based attacks.

These agent-based threats – autonomous systems capable of probing, adapting, and exploiting in real time – represent the next evolution of cyber risk.

Defending against them will require:

  • Autonomous detection and response
  • Real-time decision-making
  • Integration of threat intelligence at scale.

In many ways, this aligns directly with Jarvis’ vision of a national AI cyber defence capability.

Final reflection: From call to arms to action

CyberUK 2026 delivered a consistent message:

  • We are in a perfect storm
  • Geopolitics is impacting the UK
  • Organisations are still not patching with urgency
  • Many are failing to grasp the nettle
  • And we now need national-scale, AI-driven defence.

The ideas are there. The technology is emerging. The intent is clear.

The challenge is execution. In the end, cyber resilience will not be defined by strategy, pledges, or even national capabilities alone. It will be defined by something far more fundamental: how quickly we can identify, prioritise, and remediate vulnerabilities – at scale, continuously, and without delay.

And by how well we balance agentic AI with human supervision. That is the real battleground, and that is where the future will be won or lost.

Further Insights from Quorum Cyber.