Cybersecurity109 views5 min read

Salesforce Patches Critical AI Flaw That Allowed Data Theft

A critical flaw named ForcedLeak in Salesforce's Agentforce AI platform allowed attackers to steal CRM data via prompt injection, researchers report.

Leo Martinez
By
Leo Martinez

Leo Martinez is a cybersecurity correspondent for Neurozzio, focusing on threat intelligence, malware analysis, and emerging digital security risks. He translates complex technical threats for a broad audience.

Author Profile
Salesforce Patches Critical AI Flaw That Allowed Data Theft

A critical security vulnerability has been identified and patched in Salesforce Agentforce, an artificial intelligence platform for building AI agents. Discovered by cybersecurity firm Noma Security, the flaw, named ForcedLeak, could have allowed attackers to steal sensitive customer data from the Salesforce CRM tool using a sophisticated attack method.

Key Takeaways

  • A critical vulnerability named ForcedLeak (CVSS score: 9.4) was found in Salesforce Agentforce.
  • The flaw allowed attackers to steal sensitive CRM data through an indirect prompt injection attack.
  • The attack exploited the Web-to-Lead functionality and an expired, re-registered domain to exfiltrate data.
  • Salesforce has released patches that enforce a URL allowlist to prevent similar attacks.

Details of the ForcedLeak Vulnerability

Cybersecurity researchers at Noma Security discovered the high-severity flaw on July 28, 2025. The vulnerability specifically impacted organizations utilizing Salesforce Agentforce in conjunction with the platform's Web-to-Lead feature. This feature allows companies to capture sales leads directly from their websites.

The core of the issue lies in a technique known as indirect prompt injection. This type of attack occurs when malicious instructions are hidden within external data that an AI system processes. In this case, the AI agent could not distinguish between legitimate user data and the hidden commands, causing it to perform unauthorized actions.

Understanding Indirect Prompt Injection

Unlike direct prompt injection where a user directly inputs malicious commands, indirect attacks plant these commands in data sources the AI will later access. This could be a webpage, a document, or, as in this case, a form submission. The AI, trusting the data source, executes the hidden instructions without the user's knowledge.

According to Noma Security, the vulnerability highlights a growing challenge in securing AI systems that interact with external data. These systems present a much broader and more complex attack surface than traditional software.

"This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," said Sasi Levi, security research lead at Noma, in a report on the findings.

How the Attack Was Executed

The attack method detailed by Noma Security involved a five-step process that manipulated the standard workflow for processing new sales leads. The simplicity of the initial steps made the attack particularly difficult to detect.

The process worked as follows:

  1. Malicious Submission: An attacker submits a standard Web-to-Lead form, but embeds malicious instructions within the 'Description' field.
  2. Internal Processing: An employee uses a standard AI query within Agentforce to summarize or process the new lead.
  3. Command Execution: The Agentforce AI executes both the employee's legitimate request and the attacker's hidden instructions.
  4. Data Retrieval: The hidden commands instruct the AI to query the CRM for other sensitive lead information, such as contact details or internal notes.
  5. Data Exfiltration: The AI then sends this sensitive data to an external domain controlled by the attacker.

Clever Use of an Expired Domain

A key part of the attack was how the data was exfiltrated. The attackers identified a Salesforce-related domain that was previously on an allowlist but had since expired. They were able to purchase this domain for as little as $5.

Because the domain was once considered trusted, it bypassed certain security checks. The stolen data was cleverly disguised and transmitted as a PNG image file to this newly attacker-controlled domain, making the transfer less likely to raise alarms.

Attack Vector Weaknesses

Noma Security identified three primary weaknesses that made the ForcedLeak attack possible: weaknesses in context validation, overly permissive AI model behavior, and a Content Security Policy (CSP) bypass that allowed the connection to the external domain.

Salesforce Response and Mitigation

Upon being notified by Noma Security, Salesforce took immediate action to address the vulnerability. The company has since re-secured the expired domain that was used in the proof-of-concept attack.

More importantly, Salesforce has rolled out patches for Agentforce and its related Einstein AI platform. The primary fix involves the strict enforcement of a URL allowlist mechanism. This ensures that the AI agents can only send data to pre-approved, trusted URLs, effectively blocking the exfiltration path used in the ForcedLeak attack.

"Our underlying services powering Agentforce will enforce the Trusted URL allowlist to ensure no malicious links are called or generated through potential prompt injection," Salesforce stated in a security alert. The company described the measure as "a crucial defense-in-depth control against sensitive data escaping customer systems."

Recommendations for Salesforce Users

Salesforce has urged all customers using Agentforce to apply the latest security updates and enforce the Trusted URLs feature. In addition to these immediate steps, Noma Security provides further recommendations for organizations to enhance their AI security posture.

Key recommendations include:

  • Audit Existing Data: Review current and past lead submissions for any suspicious entries that contain unusual instructions or code-like text.
  • Implement Strict Input Validation: Strengthen validation rules for all user-submitted forms to detect and block potential prompt injection attempts.
  • Sanitize External Data: Ensure that all data from untrusted sources is properly sanitized before it is processed by AI systems.

The ForcedLeak vulnerability serves as a critical reminder of the new security challenges posed by generative AI. As these systems become more integrated into business processes, the need for proactive security measures and robust governance becomes increasingly important.

"The ForcedLeak vulnerability highlights the importance of proactive AI security and governance," Levi concluded. "It serves as a strong reminder that even a low-cost discovery can prevent millions in potential breach damages."