Work focuses on enabling organisations to deploy and scale AI agents with confidence, transparency, and regulatory assurance
London, UK – February 26th, 2026 – Provar, the leader in test automation for Salesforce, has announced a new research collaboration with Imperial College London aimed at addressing one of the most pressing challenges facing enterprises today: how to harness the power of AI in software delivery without compromising transparency, accountability, or compliance.
As organisations across financial services, healthcare, insurance, and other regulated sectors integrate AI into their development and testing processes, they face a growing tension. Traditional testing is deterministic; run the same test twice and you expect the same result. AI-driven approaches introduce adaptability and non-determinism, improving resilience and coverage but also raising important questions around reproducibility, governance, and auditability.
This research initiative is designed to explore how deterministic and adaptive AI approaches can be combined to create testing and assurance frameworks that remain flexible while still meeting the rigorous expectations of regulated environments.
“At Provar, we’re committed to helping our customers adopt AI safely and confidently,” said Ivan Harris, CPTO at Provar. “Regulated organisations can’t afford black boxes; they need clarity, traceability, and trust. Working with Imperial College London allows us to explore this challenge in a rigorous, forward-thinking way and contribute to how responsible AI is applied in real enterprise environments.”
An Imperial College London student is currently supporting Provar’s TrustAI initiative through a dedicated research project focused on regulatory compliance for agentic systems. The work explores how enterprise policies and regulatory rules can be formalised and continuously evaluated to determine whether AI agents remain compliant as their behaviour evolves over time. The goal is to establish practical methods for providing measurable, auditable assurance that AI systems are operating within defined governance and risk thresholds.
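The idea of formalising policy rules and continuously evaluating agent behaviour against them can be sketched in simplified form. The rule names, action fields, and thresholds below are purely illustrative assumptions for this sketch, not Provar's or Imperial's actual framework:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A policy rule is a named predicate over a recorded agent action.
# (Hypothetical structure for illustration only.)
@dataclass
class PolicyRule:
    name: str
    check: Callable[[Dict], bool]  # returns True if the action complies

def evaluate_compliance(rules: List[PolicyRule], actions: List[Dict]) -> List[Dict]:
    """Check every recorded agent action against every rule,
    returning an auditable list of violations."""
    violations = []
    for action in actions:
        for rule in rules:
            if not rule.check(action):
                violations.append({"rule": rule.name, "action_id": action["id"]})
    return violations

# Illustrative rules (assumptions, not real regulatory text):
rules = [
    PolicyRule("no_pii_export", lambda a: not a.get("exports_pii", False)),
    PolicyRule("risk_within_threshold", lambda a: a.get("risk_score", 0.0) <= 0.7),
]

# A mock log of agent actions:
actions = [
    {"id": 1, "exports_pii": False, "risk_score": 0.2},
    {"id": 2, "exports_pii": True, "risk_score": 0.9},
]

violations = evaluate_compliance(rules, actions)
# → action 2 violates both rules; action 1 is compliant
```

Run continuously over an agent's action log, an evaluator like this yields the kind of measurable, auditable compliance record the research aims to make rigorous.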
A collaboration with long-term industry impact
The research supports emerging thinking at the intersection of AI, compliance, and quality engineering, areas where academic insight and enterprise needs are rapidly converging. The outcomes are intended to inform both Provar's future innovation and broader industry best practices for governing AI in complex, regulated technology ecosystems.
We will continue to publish insights and progress on our website: www.provar.com.