
This week, Cognizant and Answer Digital welcomed technology implementors and leaders from across healthcare and critical national services into the Answer Digital Leeds Tech Hub for a joint event to discuss: Embedding Agentic AI into Modern DevSecOps.
The audience was made up of practitioners: engineers, platform specialists and security leaders working directly with DevSecOps tooling in complex, regulated environments. The conversation was honest, grounded in the organisational and operational realities of AI maturing from concept and aspiration to active participant in secure delivery.
The takeaway
Three insights stood out:
- Agentic AI is no longer optional in DevSecOps; it is a business-critical tool and capability.
- Human leadership and guardrails must remain central, but access and control models must be reinvented.
- Start small, and start now: if you wait for full maturity, you will be left behind. Today’s value comes from focused, domain-specific agent use cases.
One of the most resonant and recurring discussion points was the recognition that we are witnessing the latest, and perhaps most accelerated, cycle in the evolution of engineering tools. Our panel and audience noted that every major leap in technology was initially met with hesitation before becoming a mandatory skill.
Cognizant and Answer Digital in Leeds
Leeds has become a serious hub for health tech, data and secure engineering capability, and the partnership between Cognizant and Answer Digital reflects that mix of global scale and pragmatic, delivery-first thinking.
The panel was chaired by Manuel Reyes, Chief Architect and Public Sector Director at Cognizant, and introduced by Richard Pugmire, CEO of Answer Digital. Alex Woodhead, Head of Engineering at Answer Digital, and Matthew Moore, Lead DevSecOps Engineer at Answer Digital, joined as panellists.
Richard opened with a line he had heard earlier in the day: “You would not recruit an accountant without Excel skills. In today’s DevSecOps, AI skills are a must-have.” That set the tone. Agentic AI is not an experiment on the sidelines anymore. It is becoming foundational.
Manuel framed the context clearly. DevSecOps was built to make security everyone’s responsibility. But it still relies heavily on human judgement, manual triage, and fragmented tooling. As delivery accelerates and the threat landscape evolves, that model is straining. Agentic AI challenges the foundations by introducing intelligence that can assess risk, prioritise actions and intervene independently.
Putting the hype to one side, what does that mean in practice?
Why embed agentic AI at all?
The panel was unanimous on one thing: this is about meeting business demand for safe, secure, faster delivery. That is the measure of value.
It was also unanimous on another point: the genie is out of the bottle. Because software delivery is accelerating and threat landscapes are evolving so rapidly, the traditional human-heavy model is becoming unsustainable. Waiting for "fully mature" solutions is a trap; by the time you feel the technology is "ready", your organisation will already have been left behind.
Traditionally, DevSecOps surfaces alerts and a human responds. Agentic AI shifts that dynamic from alerting to resolving. An agent can analyse an issue, propose or even implement a remediation. But, and this was stressed repeatedly, a human must still own the outcome and validate that the resolution is correct and appropriate.
Modern platforms generate huge volumes of telemetry. No human can effectively correlate all that observability data in real time. Agents can. They do not sleep, and they do not suffer from alert fatigue. There’s real value in that continuous vigilance.
There’s also value for engineers. Faster triage and remediation mean faster return to creative building where human teams excel. The goal is not replacing engineers; the value is increasing safe velocity.
Human in the loop, or AI in the loop?
One of the most interesting discussions challenged the 'Human in the Loop' framing, which implies the AI leads and humans merely rubber-stamp outcomes. The panel instead explored 'AI in the Loop': using agents to accelerate human activity by handling the cognitive heavy lifting, significantly reducing the frictions that often slow down delivery. Instead of a developer being blocked by a generic policy violation at the build gate, an agent can autonomously triage the failure, perform a reachability analysis, and suggest the exact dependency upgrade needed to pass. The human remains the creative lead while the AI acts as a high-velocity junior developer that maintains security standards.
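The build-gate scenario above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's tool: the `GateFailure` fields, the toy call graph, and the package and CVE names are all invented for the example, and real reachability analysis would work on a parsed codebase rather than a hand-written dictionary.

```python
from dataclasses import dataclass

@dataclass
class GateFailure:
    """A policy violation raised at the build gate (fields are illustrative)."""
    package: str
    installed: str
    cve_id: str
    fixed_in: str
    vulnerable_symbol: str

def is_reachable(failure: GateFailure, call_graph: dict) -> bool:
    """Naive reachability: can the vulnerable symbol be reached from main?"""
    seen, stack = set(), ["main"]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(call_graph.get(fn, set()))
    return failure.vulnerable_symbol in seen

def triage(failure: GateFailure, call_graph: dict) -> str:
    """Return a suggestion for the human reviewer, not an auto-applied fix."""
    if not is_reachable(failure, call_graph):
        return (f"{failure.cve_id}: vulnerable code not reachable; "
                "suggest suppressing with justification")
    return (f"{failure.cve_id}: reachable via {failure.vulnerable_symbol}; "
            f"suggest upgrading {failure.package} "
            f"{failure.installed} -> {failure.fixed_in}")

# A reachable vulnerability yields a concrete, reviewable upgrade suggestion.
graph = {"main": {"parse_input"}, "parse_input": {"yaml_load_unsafe"}}
failure = GateFailure("examplelib", "1.2.0", "CVE-2024-0001",
                      "1.2.9", "yaml_load_unsafe")
print(triage(failure, graph))
```

The key design point from the discussion is visible in `triage`: the agent outputs a suggestion with its reasoning, and the developer stays in charge of accepting it.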
What does “secure by default” mean at scale?
As AI systems start making decisions and acting at speed, “secure by default” cannot just be a slogan.
The panel discussed moving away from large, general-purpose models towards focused LLMs and task-specific agents. Constrained scope reduces risk. A narrowly defined agent and model, built to perform a single class of action, is easier to govern, audit and assure.
To achieve "secure by default" at scale, organisations must move from general-purpose models to constrained, task-specific agents. Imagine a "Compliance-Verification Agent" whose sole expertise is your organisation’s specific hardening standards. By narrowing the agent’s scope to validating code against those benchmarks alone, governance becomes easier to audit and assure, turning check-box compliance into a continuous, machine-speed reality.
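A narrowly scoped agent like that might look as follows. This is a deliberately minimal sketch: the benchmark rules are invented placeholders, not a real hardening standard, and the point is the constrained shape, an agent that can only flag findings against a fixed rule set and can never remediate on its own.

```python
# Illustrative hardening rules; a real agent would load an agreed benchmark.
HARDENING_BENCHMARK = {
    "tls_min_version": lambda v: v >= 1.2,
    "root_login_enabled": lambda v: v is False,
    "log_retention_days": lambda v: v >= 90,
}

def verify(config: dict) -> list:
    """Return machine-readable findings; flagging only, never fixing."""
    findings = []
    for key, rule in HARDENING_BENCHMARK.items():
        if key not in config:
            findings.append(f"MISSING: {key}")
        elif not rule(config[key]):
            findings.append(f"FAIL: {key}={config[key]!r}")
    return findings

# Flags the weak TLS setting and the absent retention policy.
print(verify({"tls_min_version": 1.0, "root_login_enabled": False}))
```

Because the agent's whole behaviour is this one function over a fixed rule table, auditing it is tractable in a way that auditing a general-purpose model is not.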
In an agentic world, access and control models designed around humans must also evolve towards Zero Standing Privileges. Today, many DevSecOps service accounts hold broad, permanent permissions. Agentic AI allows for 'Just-in-Time' (JIT) privileges, where an agent is granted identity and access only for the duration of a specific task, such as patching a single repository, effectively shrinking the attack surface of the software supply chain.
That shift in access design is where governance begins.
Governance, audit, and new roles
Auditability was described as essential, not just for compliance, but for building quantitative confidence. Without evidence, organisations risk “false assurance”: believing a system is secure without proof.
Confidence in agentic audit and governance isn't granted; it must be earned through a phased transition of responsibility. The panel discussed how organisations can avoid the "all-or-nothing" trap by implementing a Shadow -> Copilot -> Autopilot progression. Each phase generates the data needed to validate model accuracy before the AI is granted the ability to act.
- Shadow Mode: Agents observe and log intended actions without executing them. This creates a "Reasoning Trace" for humans to verify.
- Copilot Mode: The agent generates actions that remain behind a Human-in-the-Loop gate. A human reviews the agent's intent before execution and validates outputs afterwards.
- Autopilot Mode: The agent executes autonomously for specific tasks within a defined Blast Radius, such as non-production environments or low-risk repositories, with "pre-action authoriser" and "post-action validation" agents managing the guardrails.
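The three-mode progression reduces to a small dispatch policy. This sketch assumes an agreed low-risk target list standing in for the blast radius; everything here (mode names aside) is illustrative rather than a reference implementation.

```python
from enum import Enum

class Mode(Enum):
    SHADOW = "shadow"        # log intent only
    COPILOT = "copilot"      # require human approval
    AUTOPILOT = "autopilot"  # execute within a blast radius

LOW_RISK_TARGETS = {"nonprod", "sandbox"}  # the agreed blast radius

def dispatch(mode: Mode, action: str, target: str,
             approved: bool = False) -> str:
    """Route an agent's proposed action according to the current trust level."""
    trace = f"intent: {action} on {target}"  # the auditable reasoning trace
    if mode is Mode.SHADOW:
        return f"LOGGED {trace}"
    if mode is Mode.COPILOT:
        return f"EXECUTED {trace}" if approved else f"HELD {trace} (awaiting review)"
    # Autopilot still refuses anything outside the blast radius.
    if target in LOW_RISK_TARGETS:
        return f"EXECUTED {trace}"
    return f"BLOCKED {trace} (outside blast radius)"

print(dispatch(Mode.SHADOW, "patch CVE-2024-0001", "nonprod"))
print(dispatch(Mode.COPILOT, "patch CVE-2024-0001", "prod"))
print(dispatch(Mode.AUTOPILOT, "patch CVE-2024-0001", "prod"))
```

Note that even in Autopilot the production target is blocked: promotion through the modes widens what the agent may do, never where the guardrails sit.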
There was also discussion of model drift, the phenomenon where models become less accurate over time as data changes. This introduces the need for human and automated assurance monitors that track success rates and trigger re-certification when accuracy or performance deviates from agreed, continually measured KPIs.
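A basic assurance monitor of that kind can be a rolling success-rate check against the agreed KPI. The 95% threshold and 100-action window below are arbitrary examples; a real deployment would set them per agent and per risk class.

```python
from collections import deque

class DriftMonitor:
    """Tracks a rolling success rate; flags when it falls below the agreed KPI."""

    def __init__(self, kpi_threshold: float = 0.95, window: int = 100):
        self.kpi_threshold = kpi_threshold
        self.outcomes = deque(maxlen=window)  # oldest results roll off

    def record(self, success: bool) -> None:
        """Log whether a validated agent action was correct."""
        self.outcomes.append(success)

    def needs_recertification(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet to judge drift
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.kpi_threshold

monitor = DriftMonitor(kpi_threshold=0.95, window=100)
for _ in range(90):
    monitor.record(True)
for _ in range(10):
    monitor.record(False)   # accuracy has drifted to 90%
print(monitor.needs_recertification())  # True: below the 95% KPI
```

This is also where "quantitative confidence" becomes concrete: the same outcome log that triggers re-certification is the evidence base that prevents false assurance.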
That, in turn, points to a new human role in DevSecOps. The panel floated titles like "Agent Safety Officer": someone responsible for overseeing a coordinated network of specialised agents working together, rather than one giant "Black Box" AI, and for ensuring governance is proportionate and meaningful.
So what should organisations do now?
The advice was clear: start.
There will never be a fully mature, final state. Continuous and exponential change means waiting for perfect solutions will leave organisations behind. That does not mean chaos. Adoption should be measured and risk aware. Identify targeted use cases where value is clear.
One example of what organisations can do today is agentic vulnerability triage and remediation. Today's SAST/DAST tools scan and often flood teams with alerts and false positives. With agentic AI, instead of just scanning, one agent analyses CVEs, another checks whether the vulnerable function is actually called in your code, and a third raises a Pull Request (PR) with the fix. This doesn't just fix bugs; it eliminates the alert fatigue that currently defines and de-optimises many DevSecOps functions, benefiting teams and generating value by reducing Mean Time to Remediate (MTTR) from days to minutes.
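The three-agent pipeline can be sketched as a chain of specialised steps. Each "agent" is a plain function here; in practice each would wrap an LLM or a scanner, and the CVE data, package names, and fix version are all invented for illustration.

```python
from typing import Optional

def analyse_cve(alert: dict) -> dict:
    """Agent 1: enrich the raw scanner alert with severity and fix version."""
    return {**alert, "severity": "high", "fixed_in": "2.4.1"}

def check_reachability(finding: dict, codebase_calls: set) -> dict:
    """Agent 2: is the vulnerable function actually called in our code?"""
    return {**finding, "reachable": finding["function"] in codebase_calls}

def draft_pull_request(finding: dict) -> Optional[dict]:
    """Agent 3: only reachable findings become a PR; the rest are suppressed."""
    if not finding["reachable"]:
        return None  # cuts the false-positive noise behind alert fatigue
    return {
        "title": (f"fix({finding['package']}): bump to "
                  f"{finding['fixed_in']} for {finding['cve']}"),
        "labels": ["security", "agent-generated", "needs-human-review"],
    }

alert = {"cve": "CVE-2024-0002", "package": "parserlib",
         "function": "unsafe_eval"}
pr = draft_pull_request(
    check_reachability(analyse_cve(alert), {"unsafe_eval", "render"}))
print(pr["title"])
```

The `needs-human-review` label reflects the panel's repeated point: the agent drafts the remediation, but a human still owns and validates the outcome.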
Start small. Build confidence. Use data to avoid false assurance.
Changing skills and operating models
Agentic AI will reshape team dynamics.
Efficiency gains do not necessarily mean shrinking teams. In practice, demand for delivery continues to rise. What changes is the nature of work. The panel described a shift from 90% coding and 10% fixing, to 10% coding or prompting and 90% refinement, oversight, and system-level thinking.
There was also a thoughtful conversation about the future of junior engineers. If AI handles the boilerplate, how do newcomers learn? The answer circles back to AI in the loop: AI as an accelerator, not a replacement. Human judgement, creativity, and accountability remain central. Junior roles will shift toward validation and review earlier in their careers, making code literacy even more critical than code generation.
Thanks and closing
It was a candid, practical discussion, exactly the kind that the Leeds tech community benefits from and excels in. Thanks to Answer Digital for partnering with us, to Manuel for steering the conversation, and to Alex and Matthew for bringing real delivery insight from the front line.
If you are exploring how agentic AI can strengthen your DevSecOps capability in regulated environments, we would be glad to continue the conversation.