Reconsidering governance as organizations increase internal information sharing
For most of the last decade, security teams haven’t had the bandwidth to think much about how they secure customer and employee experience platforms. And that made sense: collect feedback, generate a report, pass it to a human to act on. The risk profile was low.
But that calculation no longer holds. These platforms now connect directly to HR software systems, CRM databases and compensation engines.
Assaf Keren, Chief Security Officer, Qualtrics.
In the world of agentic experience management programs, these platforms have growing access to business-critical operations, and that means they can no longer be treated as simple survey tools.
Given that reality, security teams must reconsider how they govern systems with this level of access to sensitive data.
The exposure surface is bigger than you think
What makes this harder is the sensitivity of the data itself. Customer experience programs shape pricing and product decisions. Employee experience programs surface concerns about leadership and workplace safety, feeding directly into HR decisions. And unlike many other datasets, this information is hard to classify as Personally Identifiable Information (PII) or other easily identified sensitive data.
Then there's the shadow AI problem. Half of employees now use AI tools regularly at work, but only 20% stick to company-approved ones. That means sensitive experience data is already moving through workflows security teams don't know exist, but banning tools outright removes your visibility of the risk rather than eliminating it.
If you're deploying AI in customer-facing environments, these are the areas I'd focus on:
1. What is your platform actually connected to and what decisions does it influence? Most teams have mapped integrations at a technical level. Fewer have mapped the business decisions downstream of those integrations, including automated workflows. If you can't answer this confidently, you have a meaningful gap regardless of your compliance setup.
2. How are you validating the integrity of inputs? Is your feedback data authentic, as well as complete? Could it be manipulated to skew a business outcome? This requires moving beyond standard input validation into intent and anomaly detection.
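To make that second point concrete, here is a minimal sketch of one basic anomaly check on feedback inputs. The function name, sample data and threshold are my own illustrative assumptions, not a production control or any vendor's API: it simply flags days whose average survey rating deviates sharply from the historical baseline, a crude signal that feedback may have been manipulated to skew an outcome.

```python
# Illustrative sketch only: flag days whose mean rating is a
# statistical outlier relative to the whole window. Real programs
# would layer far richer intent and anomaly detection on top.
from statistics import mean, stdev

def flag_anomalous_days(daily_means, z_threshold=3.0):
    """Return indices of days whose mean rating is an outlier."""
    baseline = mean(daily_means)
    spread = stdev(daily_means)
    if spread == 0:
        return []  # perfectly flat history: nothing to flag
    return [i for i, m in enumerate(daily_means)
            if abs(m - baseline) / spread > z_threshold]

# A burst of suspiciously high ratings on the last day stands out:
history = [3.9, 4.1, 4.0, 3.8, 4.0, 3.9, 5.0]
print(flag_anomalous_days(history, z_threshold=2.0))  # → [6]
```

A z-score over daily aggregates is deliberately simple; the point is that integrity checks on inputs can start small and still catch coarse manipulation before it skews a decision.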
3. How quickly can you detect and act when something goes wrong? As AI systems become more agentic, continuous monitoring isn't optional. You need mechanisms that flag abnormal outputs and allow you to intervene before a misconfigured or manipulated AI agent compounds the problem at scale.
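As a rough sketch of what such an intervention point might look like, the snippet below routes each agent output through explicit checks and holds anything abnormal for human review instead of releasing it. Every name, pattern and policy string here is an illustrative assumption, not a real platform's interface.

```python
# Illustrative guardrail sketch: outputs that trip a check are held
# for a human rather than reaching customers at scale.
import re

def looks_like_pii(text):
    # Crude pattern for something shaped like an email address.
    return re.search(r"\b\S+@\S+\.\S+\b", text) is not None

def mentions_unapproved_refund(text, approved=("30-day returns",)):
    # Hypothetical policy check: refund talk outside approved wording.
    return "refund" in text.lower() and not any(p in text for p in approved)

def release_or_hold(agent_output):
    """Return ('release', text) or ('hold', reason) for human review."""
    if looks_like_pii(agent_output):
        return ("hold", "possible personal data in output")
    if mentions_unapproved_refund(agent_output):
        return ("hold", "refund claim outside approved policy")
    return ("release", agent_output)

print(release_or_hold("Your order ships Tuesday."))
print(release_or_hold("We offer a lifetime refund guarantee."))
```

The specific checks matter less than the shape: a single choke point where abnormal outputs can be flagged, counted and stopped before a misconfigured agent compounds the problem.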
When AI fails in public, it fails fast
Trust with customers is built over years and lost in seconds. I watched a major retailer spend years rebuilding customer trust after a data breach in the early 2010s. This matters even more in an AI world, where people want to be confident the data they’re sharing is protected at the highest levels.
Our research puts numbers to what many security leaders already sense. 53% of consumers say misuse of personal data is their top concern when companies use AI to automate interactions, up eight points in the past year. Two-thirds want personalized experiences, but only 40% think the benefits are worth the privacy trade-offs. Nearly half say they'd share more data if organizations were simply more transparent about how it's used.
In a world that’s actively researching how to use AI, trust in AI solutions is the main driver of adoption for consumers and companies. Managing this trust is going to make or break the companies that are vying for their customers' engagement.
Most organizations have mapped their technical blast radius: which systems connect, which APIs are open, where data flows. Fewer have mapped their business blast radius: the real cost if the data is wrong, biased, or manipulated. When a chatbot hallucinates a refund policy, exposes personal data, or fabricates an answer, it is a brand failure directly in front of customers. One poorly tested AI agent can damage thousands of customer relationships before anyone in the business notices.
The conversation security leaders need to be driving is: how do we monitor these systems continuously once they're live?
Security is a commercial factor
Businesses are under real pressure to move fast. Stakeholders want transformation, CX teams want automation, and security teams raising concerns about bias, compliance and data exposure get positioned as the blocker. I understand the frustration on both sides, but the goal should be working together, not one side inheriting the decision.
The organizations getting this right are embedding security into platform defaults, so guardrails are already in place when a team spins up a new integration. Platform vendors need to do more here too: clearer visibility into what's connected, what permissions are active, and when integrations were last reviewed.
I've seen this shift happen in real conversations. Security leaders who can demonstrate rigorous controls, monitoring, validated data practices and certifications find that procurement conversations accelerate and timelines compress.
While security has spent decades being framed as a cost center, in a world where experience platforms are AI-powered and connected to the most sensitive operations across a business, that framing is outdated.
When security is visible and credible, employees and customers feel more comfortable sharing their data. This produces sharper AI outputs and builds trust. It all compounds.