CX platforms process billions of unstructured interactions a year: survey forms, review sites, social feeds, call center transcripts, all flowing into AI engines that trigger automated workflows touching payroll, CRM, and payment systems. No tool in a security operations center leader’s stack inspects what a CX platform’s AI engine is ingesting, and attackers have figured this out. They poison the data feeding it, and the AI does the damage for them.
The Salesloft/Drift breach in August 2025 proved exactly this. Attackers compromised Salesloft’s GitHub environment, stole Drift chatbot OAuth tokens, and accessed Salesforce environments across 700+ organizations, including Cloudflare, Palo Alto Networks, and Zscaler. They then scanned the stolen data for AWS keys, Snowflake tokens, and plaintext passwords. No malware was deployed.
That gap is wider than most security leaders realize: 98% of organizations have a data loss prevention (DLP) program, but only 6% have dedicated resources, according to Proofpoint’s 2025 Voice of the CISO report, which surveyed 1,600 CISOs across 16 countries. And 81% of interactive intrusions now use legitimate access rather than malware, per CrowdStrike’s 2025 Threat Hunting Report. Cloud intrusions surged 136% in the first half of 2025.
“Most security teams still classify experience management platforms as ‘survey tools,’ which sit in the same risk tier as a project management app,” Assaf Keren, chief security officer at Qualtrics and former CISO at PayPal, told VentureBeat in a recent interview. “It’s a massive miscategorization. These platforms now connect to HRIS, CRM, and compensation engines.” Qualtrics alone processes 3.5 billion interactions annually, a figure the company says has doubled since 2023. Organizations can’t afford to skip steps on input integrity once AI enters the workflow.
VentureBeat spent several weeks interviewing security leaders working to close this gap. Six control failures surfaced in every conversation.
Six blind spots between the security stack and the AI engine
1. DLP can’t see unstructured sentiment data leaving through standard API calls
Most DLP policies classify structured personally identifiable information (PII): names, emails, and payment data. Open-text CX responses contain salary complaints, health disclosures, and executive criticism. None matches standard PII patterns. When a third-party AI tool pulls that data, the export looks like a routine API call. The DLP never fires.
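A minimal sketch makes the failure concrete. The patterns and sample strings below are illustrative, not any vendor’s actual rule set; they stand in for the regex-style matching most DLP policies rely on:

```python
import re

# Typical structured-PII patterns a pattern-based DLP policy ships with
DLP_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_flags(text: str) -> list[str]:
    """Return the pattern names that match; an empty list means the export passes."""
    return [name for name, rx in DLP_PATTERNS.items() if rx.search(text)]

structured = "Contact jane.doe@example.com, card 4111 1111 1111 1111"
free_text = "My manager Dana cut my salary after I disclosed my cancer treatment"

print(dlp_flags(structured))  # ['email', 'card']
print(dlp_flags(free_text))   # [] -- the sensitive disclosure sails through
```

The second string carries a named individual, a compensation grievance, and a health disclosure, yet matches nothing a structured classifier looks for.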
2. Zombie API tokens from finished campaigns are still live
An example: marketing ran a CX campaign six months ago, and the campaign ended. But the OAuth tokens connecting the CX platform to HRIS, CRM, and payment systems were never revoked. That means each one is a lateral movement path sitting open.
JPMorgan Chase CISO Patrick Opet flagged this risk in his April 2025 open letter, warning that SaaS integration models create “single-factor explicit trust between systems” through tokens “inadequately secured … susceptible to theft and reuse.”
3. Public input channels have no bot mitigation before data reaches the AI engine
A web application firewall inspects HTTP payloads for a web application, but none of that defense extends to a Trustpilot review, a Google Maps rating, or an open-text survey response that a CX platform ingests as legitimate input. Fraudulent sentiment flooding these channels is invisible to perimeter controls. VentureBeat asked security leaders and vendors whether anyone covers input-channel integrity for public-facing data sources feeding CX AI engines; it appears the category doesn’t exist yet.
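In the absence of a product category, one crude starting point is a pre-ingestion gate that looks for flood patterns before reviews ever reach the AI engine. The duplicate threshold and sample batch below are illustrative only:

```python
from collections import Counter

def flood_suspects(reviews, dup_threshold=3):
    """Flag review texts submitted near-verbatim multiple times: a crude
    bot-flood signal applied at ingestion, not a perimeter control."""
    counts = Counter(r.strip().lower() for r in reviews)
    return {text for text, n in counts.items() if n >= dup_threshold}

batch = [
    "Great service, five stars!",
    "Worst support ever, cancel now",
    "Worst support ever, cancel now",
    "worst support ever, cancel now ",
    "Helpful staff, quick refund",
]

print(flood_suspects(batch))  # {'worst support ever, cancel now'}
```

Real fraudulent-sentiment campaigns vary their wording, so a production gate would need fuzzy matching and velocity checks per source; the point is that the check has to run before the AI engine consumes the input.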
4. Lateral movement from a compromised CX platform runs through approved API calls
“Adversaries aren’t breaking in, they’re logging in,” Daniel Bernard, chief business officer at CrowdStrike, told VentureBeat in an exclusive interview. “It’s a valid login. So from a third-party ISV perspective, you have a sign-in page, you have two-factor authentication. What else do you want from us?”
The threat extends to human and non-human identities alike. Bernard described what follows: “All of a sudden, terabytes of data are being exported out. It’s non-standard usage. It’s going places where this user hasn’t gone before.” A security information and event management (SIEM) system sees the authentication succeed. It doesn’t see that behavioral shift. Without what Bernard called “software posture management” covering CX platforms, the lateral movement runs through connections the security team already approved.
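The behavioral shift Bernard describes is detectable with a per-account baseline rather than an auth log. A simplified sketch, with hypothetical daily export volumes and an assumed z-score cutoff:

```python
from statistics import mean, stdev

def export_anomalous(history_mb, current_mb, z_cutoff=3.0):
    """Compare today's export volume to this account's own baseline.
    A valid login paired with a huge z-score is the shift a SIEM
    watching only authentication events never sees."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    z = (current_mb - mu) / sigma if sigma else float("inf")
    return z > z_cutoff

baseline = [12, 9, 15, 11, 14, 10, 13]  # daily export MB for a hypothetical service account
print(export_anomalous(baseline, 13))      # False: an ordinary day
print(export_anomalous(baseline, 40_000))  # True: exfiltration-scale export
```

Volume is only one axis; destination and time-of-day baselines catch the “going places this user hasn’t gone” half of the pattern.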
5. Non-technical users hold admin privileges nobody reviews
Marketing, HR, and customer success teams configure CX integrations because they need speed, but the SOC team may never see them. Security has to be an enabler, Keren says, or teams route around it. Any organization that can’t produce a current inventory of every CX platform integration and the admin credentials behind them has shadow admin exposure.
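Producing that inventory can begin as a few lines over whatever integration records the platform will export. The field names here are hypothetical; the test is simply whether each integration has a named owner and a review date:

```python
# Hypothetical integration records exported from a CX platform
integrations = [
    {"name": "hris-sync", "owner": "hr-ops", "last_review": "2025-06-01"},
    {"name": "promo-webhook", "owner": None, "last_review": None},
]

def shadow_admin_exposure(integrations):
    """Any integration lacking a named owner or a review date is shadow admin risk."""
    return [i["name"] for i in integrations if not (i["owner"] and i["last_review"])]

print(shadow_admin_exposure(integrations))  # ['promo-webhook']
```

An empty result is the bar to clear before the quarterly access review, not after.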
6. Open-text feedback hits the database before PII gets masked
Employee surveys capture complaints about managers by name, salary grievances, and health disclosures. Customer feedback is just as exposed: account details, purchase history, service disputes. None of this trips a structured PII classifier because it arrives as free text. If a breach exposes it, attackers get unmasked personal information alongside the lateral movement path.
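The mitigation is to redact before the row is written, not after. A hedged sketch of a pre-write masking pass; the patterns are illustrative, and regex alone will miss bare names, so production redaction needs entity recognition layered on top:

```python
import re

# Illustrative redaction patterns applied before the free-text field is stored
MASKS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\$\d[\d,]*"), "[AMOUNT]"),
    (re.compile(r"\b(?:Mr|Ms|Dr)\.\s+[A-Z][a-z]+"), "[NAME]"),
]

def mask_free_text(text: str) -> str:
    """Redact obvious identifiers before the database write."""
    for rx, token in MASKS:
        text = rx.sub(token, text)
    return text

raw = "Mr. Alvarez cut my bonus to $2,000; reach me at me@example.com"
print(mask_free_text(raw))
# [NAME] cut my bonus to [AMOUNT]; reach me at [EMAIL]
```

Masking at ingestion means a later breach exposes tokens, not the disclosures themselves.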
Nobody owns this gap
These six failures share a root cause: SaaS security posture management has matured for Salesforce, ServiceNow, and other enterprise platforms. CX platforms never got the same treatment. Nobody monitors user activity, permissions, or configurations inside an experience management platform, and policy enforcement on AI workflows processing that data doesn’t exist. When bot-driven input or anomalous data exports hit the CX application layer, nothing detects them.
Security teams are responding with what they have. Some are extending SSPM tools to cover CX platform configurations and permissions. API security gateways offer another path, inspecting token scopes and data flows between CX platforms and downstream systems. Identity-centric teams are applying CASB-style access controls to CX admin accounts.
None of those approaches delivers what CX-layer security actually requires: continuous monitoring of who is accessing experience data, real-time visibility into misconfigurations before they become lateral movement paths, and automated protection that enforces policy without waiting for a quarterly review cycle.
The first integration purpose-built for that gap connects posture management directly to the CX layer, giving security teams the same coverage over program activity, configurations, and data access that they already expect for Salesforce or ServiceNow. CrowdStrike’s Falcon Shield and the Qualtrics XM Platform are the pairing behind it. Security leaders VentureBeat interviewed said this is the control they have been building manually, and losing sleep over.
The blast radius security teams aren’t measuring
Most organizations have mapped the technical blast radius. “But not the business blast radius,” Keren said. When an AI engine triggers a compensation adjustment based on poisoned data, the damage is not a security incident. It’s a wrong business decision executed at machine speed. That gap sits between the CISO, the CIO, and the business unit owner. Today, nobody owns it.
“When we use data to make business decisions, that data has to be right,” Keren said.
Run the audit, and start with the zombie tokens. That’s where Drift-scale breaches begin. Set a 30-day validation window. The AI won’t wait.

