
The AI trust advantage: How smarter security wins customer confidence

2026-04-20 13:44

AI has given CX platforms far greater data access, and security teams must reconsider how they govern them.

Opinion by Assaf Keren, Chief Security Officer at Qualtrics | Published 20 April 2026

Reconsidering governance as organizations increase internal information sharing



For most of the last decade, security teams haven’t had the bandwidth to think much about how they secure customer and employee experience platforms. And that made sense. Collect feedback, generate a report, pass it to a human to act on. The risk profile was low.

But that calculation no longer holds. These platforms now connect directly to HR systems, CRM databases and compensation engines.


In the world of agentic experience management programs, these platforms have growing access to business-critical operations, which means they can no longer be treated as simple survey tools.

Security teams must now reconsider how they govern systems with this level of access to sensitive data.

The exposure surface is bigger than you think

What makes this harder is the sensitivity of the data itself. Customer experience programs shape pricing and product decisions. Employee experience programs surface concerns about leadership and workplace safety, feeding directly into HR decisions. But, unlike other cases, this data is hard to classify as Personally Identifiable Information (PII) or other easily recognized sensitive information.

Then there's the shadow AI problem. Half of employees now use AI tools regularly at work, but only 20% stick to company-approved ones. That means sensitive experience data is already moving through workflows security teams don't know exist, but banning tools outright removes your visibility of the risk rather than eliminating it.


If you're deploying AI in customer-facing environments, these are the areas I'd focus on:

1. What is your platform actually connected to and what decisions does it influence? Most teams have mapped integrations at a technical level. Fewer have mapped the business decisions downstream of those integrations, including automated workflows. If you can't answer this confidently, you have a meaningful gap regardless of your compliance setup.
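As a rough illustration, that mapping can be captured as a simple inventory that pairs each integration with the business decisions it feeds, making unmapped connections easy to surface. The integration names below (`crm_sync`, `comp_feed`) are hypothetical; a real inventory would live in your asset-management tooling rather than a script.

```python
from dataclasses import dataclass, field

@dataclass
class Integration:
    """One connection from the experience platform to another system."""
    name: str                       # internal identifier (hypothetical)
    target_system: str              # e.g. "CRM", "compensation engine"
    downstream_decisions: list[str] = field(default_factory=list)

def governance_gaps(integrations: list[Integration]) -> list[str]:
    """Return integrations whose downstream business impact is unmapped."""
    return [i.name for i in integrations if not i.downstream_decisions]

inventory = [
    Integration("crm_sync", "CRM", ["churn-risk outreach"]),
    Integration("comp_feed", "compensation engine", []),  # impact unmapped
]
print(governance_gaps(inventory))  # → ['comp_feed']
```

The point of the exercise is the empty list on `comp_feed`: a technically mapped integration whose business blast radius nobody has written down.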

2. How are you validating the integrity of inputs? Is your feedback data authentic, as well as complete? Could it be manipulated to skew a business outcome? This requires moving beyond standard input validation into intent and anomaly detection.
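One crude starting point for that kind of screening is a statistical check over incoming feedback scores. A real system would layer intent analysis and provenance checks on top; this sketch (threshold value illustrative) only shows the shape of the idea.

```python
from statistics import mean, stdev

def flag_anomalies(scores: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of scores more than `threshold` standard deviations
    from the mean -- a crude screen for manipulated or bot-driven feedback."""
    if len(scores) < 2:
        return []
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []  # all scores identical; nothing to flag
    return [i for i, s in enumerate(scores) if abs(s - mu) / sigma > threshold]

# A run of ordinary ratings with one implausible outlier:
print(flag_anomalies([4.1, 4.3, 3.9, 4.0, 4.2, 9.8]))  # → [5]
```

An outlier screen like this catches clumsy manipulation; skewing attacks that stay inside the normal distribution are exactly why the article argues for intent detection beyond standard input validation.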


3. How quickly can you detect and act when something goes wrong? As AI systems become more agentic, continuous monitoring isn't optional. You need mechanisms that flag abnormal outputs and allow you to intervene before a misconfigured or manipulated AI agent compounds the problem at scale.
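As a minimal sketch of that intervention mechanism, a circuit breaker can pause an agent once flagged outputs exceed a rate within a recent window. The class and thresholds below are hypothetical, not any vendor's API.

```python
class AgentCircuitBreaker:
    """Pause an AI agent when flagged outputs exceed a rate, so a
    misconfigured or manipulated agent can't compound a mistake at scale."""

    def __init__(self, max_flags: int = 3, window: int = 50):
        self.max_flags = max_flags  # flags tolerated within the window
        self.window = window        # number of recent outputs to consider
        self.recent: list[int] = [] # 1 = flagged, 0 = ok, most recent last
        self.tripped = False

    def record(self, flagged: bool) -> bool:
        """Record one agent output; return True if the agent may continue."""
        self.recent.append(1 if flagged else 0)
        self.recent = self.recent[-self.window:]
        if sum(self.recent) >= self.max_flags:
            self.tripped = True     # requires human review before resuming
        return not self.tripped

breaker = AgentCircuitBreaker(max_flags=2, window=10)
breaker.record(False)               # normal output, agent continues
breaker.record(True)                # one flag, still under the limit
print(breaker.record(True))         # second flag trips it → False
```

Note that the breaker stays tripped until a human resets it: the design choice is that resuming after an anomaly should be a deliberate decision, not an automatic one.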

When AI fails in public, it fails fast

Trust with customers is built over years and lost in seconds. I watched a major retailer spend years rebuilding customer trust after a data breach in the early 2010s. This matters even more in an AI world, where people want to be confident the data they’re sharing is protected at the highest levels.

Our research puts numbers to what many security leaders already sense. 53% of consumers say misuse of personal data is their top concern when companies use AI to automate interactions, up eight points in the past year. Two-thirds want personalized experiences, but only 40% think the benefits are worth the privacy trade-offs. Nearly half say they'd share more data if organizations were simply more transparent about how it's used.

In a world actively exploring how to use AI, trust in AI solutions is the main driver of adoption for consumers and companies alike. Managing that trust will make or break the companies vying for their customers' engagement.

Most organizations have mapped their technical blast radius: which systems connect, which APIs are open, where data flows. Fewer have mapped their business blast radius: the real cost if the data is wrong, biased, or manipulated. When a chatbot hallucinates a refund policy, exposes personal data, or fabricates an answer, it is a brand failure directly in front of customers. One poorly tested AI agent can damage thousands of customer relationships before anyone in the business notices.

The conversation security leaders need to be driving is not just whether a system is safe to launch, but how to monitor it continuously once it's live.

Security is a commercial factor

Businesses are under real pressure to move fast. Stakeholders want transformation, CX teams want automation, and security teams raising concerns about bias, compliance and data exposure get positioned as the blocker. I understand the frustration on both sides, but the goal should be working together, not one side inheriting the decision.

The organizations getting this right are embedding security into platform defaults, so guardrails are already in place when a team spins up a new integration. Platform vendors need to do more here too: clearer visibility into what's connected, what permissions are active, and when integrations were last reviewed.

I've seen this shift happen in real conversations. Security leaders who can demonstrate rigorous controls, monitoring, validated data practices and certifications find that procurement conversations accelerate and timelines compress.

While security has spent decades being framed as a cost center, in a world where experience platforms are AI-powered and connected to the most sensitive operations across a business, that framing is outdated.

When security is visible and credible, employees and customers feel more comfortable sharing their data. This produces sharper AI outputs and builds trust. It all compounds.

