Data & AI Series | Technology Leaders

How should technology leaders build secure & trustworthy AI?

Key takeaways

  • Think beyond cybersecurity: ‘Trustworthy AI’ is needed, spanning legal, ethical, privacy, and business risk considerations.
  • Tech leaders are best placed to lead this cross-functional collaboration.
  • AI security practices need to be adapted across the full AI lifecycle.

Insights from the 2025 AI Cyber Benchmark

As AI adoption accelerates, technology leaders face a new class of security challenges, as it introduces new and often unpredictable vulnerabilities beyond traditional IT systems. For technology leaders, the real task is to stay ahead of these risks which requires adapting security practices across the full AI lifecycle.

Drawing on insight from the 2025 Wavestone AI Cyber Benchmark  (opens in a new tab), which considered 20+ clients, this article outlines how leading organizations are tackling emerging AI risks and offers our recommendations along five of the NIST framework pillars: Govern, Identify, Protect, Detect, and Respond.

‘Trustworthy AI’ encompasses legal, ethical, privacy, and business risk considerations – not just cybersecurity.

Florian Pouchet, Partner, Wavestone

1. Govern: Establish integrated, trust-driven oversight

Our benchmark shows 87% of organizations have defined trustworthy AI principles, but only 7% feel confident in their internal AI expertise. This gap shows most leaders have set the ambition, but without expertise and the right operating models, principles remain aspirational rather than actionable.

‘Trustworthy AI’ spans legal, ethical, privacy, and business risk considerations, not just cybersecurity. Turning these broad principles into practice requires an operating model that embeds accountability and consistency across the organization.

Recommendation:

Adopt an Integrated Operating Model for AI governance like 60% of our clients to pool scarce expertise and accelerate capability building and product readiness. This model promotes knowledge sharing, consistency and enables faster maturity.

By contrast, a 10% have adopted a Decentralized Operating model often leading to fragmented efforts, while a Hybrid Operating model (for the remaining 30%) offers flexibility as skills mature and organizational needs shift, but demands strong coordination to avoid fragmentation and misalignment.

The image compares two organizational models for delivering AI client services: the Integrated model and the Decentralized model.

Organizational models for delivering AI client services

2. Identify: Triage risk early and systematically

64% of organizations assessed have put in place an AI security policy that frames how AI can be used, defines the processes to secure AI projects, and sets expectations for third-party providers. In addition, 71% have adapted their project processes for AI, introducing defined roles, responsibilities, and validation steps to ensure risks are reviewed consistently.

Despite this progress, many lack a consistent framework to classify AI risks, leaving gaps in oversight and accountability.

Recommendation:

Systematically triage every AI use case against four dimensions: intended use, data sources, model type and outputs. Then classify each into EU AI Act risk aligned categories, ensuring higher-risk systems face stricter oversight while lower-risk one’s progress with proportionate controls. This structured process enables scalable, consistent, and proportional risk management across the organization.

The image presents a four-step funnel diagram outlining a process for managing AI initiatives

A Process for managing AI initiatives

3. Protect: Adapt existing controls, add AI-specific layers

Protect is the most advanced pillar (40% maturity), showing leaders are building on existing cyber controls such as encryption and access management. But these are insufficient. AI introduces new risks across the stack, from datasets and models to APIs, monitoring, and third-party integrations. There is no one-size-fits-all solution, and the exposure varies depending on the usage of AI in the organizations.

We witness three categories: 100% of our clients are AI Users (ie. consumer of Copilot-type services), 70% are Orchestrators (ie. integrator of AI “building blocks/API” into tailored solutions), and 20% are Creators (ie. developing own grown AI-powered products, including a fined tuned model).

Recommendation:

Technology leaders should map their GenAI architecture and implement protections at every layer: dataset, model, frontend, infrastructure, monitoring, and plug-ins.

  • For Users: secure data flows, enforce third-party assessments, and include contractual protections.
  • For Orchestrators: set criteria for model/platform selection, secure front end security (APIs and UIs), and adopt MLSecOps practices to ensure AI project is secure by design.
  • For Creators: secure the full MLSecOps pipeline when developing and fine-tuning models and deploy advanced measures such as adversarial training, randomized smoothing, synthetic data, and encryption, treating security as a differentiator in the final product.

4. Detect: Scale testing and detection as maturity grows

While 64% of organizations run penetration tests, only 7% assess model robustness leaving AI-specific vulnerabilities untested. Threats span the lifecycle and flaws are common such as misconfigured ML platforms, insecure APIs, weak DevSecOps pipelines, and data leakage. Detection is also fragmented: although 72% collect AI logs, just 13% monitor them through SOCs, meaning anomalies are often recorded but not acted upon.

Without lifecycle-wide testing and SOC integration, organizations remain blind to critical AI vulnerabilities.

Recommendation:

Technology leaders should combine two pillars of detection:

  1. First, penetration testing with AI red teaming. Go beyond classic penetration tests to stress-test models across the lifecycle for poisoning, extraction, manipulation, and prompt injection. Use LLMs to attack LLMs and uncover real weaknesses.
  2. Then, integrate AI systems into global detection. Feed AI logs and telemetry into SOCs to surface abnormal behavior across both models and platforms.

This dual approach moves organizations from ad-hoc testing to continuous, enterprise-wide detection.

Only 9%

of organizations currently have AI-specific incident response capabilities, reflecting an industry still in early stages.”(1)

5. Respond: Prepare for AI-specific incidents

Rest assured, only 9% of organizations currently have AI-specific incident response capabilities, reflecting an industry still in early stages- even within government, where initiatives like the U.S. AISIRT are still defining standards.

Without AI-specific processes, organizations are unprepared for model failures, poisoning, adversarial retraining, or large-scale misuse. Traditional IR teams lack the tools to investigate AI behavior or share intelligence on emerging AI threats.

Recommendation:

Extend existing incident response processes to include AI-specific events:

  • Build forensic capabilities to analyze model behavior
  • Prepare for adversarial attacks and retraining scenarios
  • Join emerging AI-CSIRTS and participate in regulatory simulations

As AI threats become more complex, response readiness will be vital to maintaining trust and resilience.

The wrap: Foster collaboration across the ecosystem and build trust

AI security is not just a technical or cybersecurity concern, it’s a cross-functional collaboration among data scientists, developers, legal teams, risk officers, and compliance leaders.

Technology leaders are best placed to coordinate this effort, aligning governance, embedding trust-by-design, and ensuring joint accountability for AI performance, fairness, and transparency.

By leading now, they lay the foundation for responsible and trustworthy AI adoption.

(1) Source: Wavestone AI Cyber Benchmark, 2025, https://www.wavestone.com/en/insight/2025-ai-cyber-benchmark/ (opens in a new tab)

Data & AI series

Read further articles around scaling AI, AI adoption and AI industrialization.

You can also discover AI client stories, expert profiles & get an overview of AI accelerators to drive your AI transformation.

Author

  • Florian Pouchet

    Partner – UK, London

    Wavestone

    LinkedIn