Cybersecurity researchers have identified three significant security vulnerabilities in Google's Gemini AI assistant. The flaws, now patched by Google, could have allowed attackers to steal user data and compromise cloud resources through sophisticated prompt injection techniques.
The discovery was made by security firm Tenable, which detailed how different components of the Gemini suite were susceptible to attacks that could expose private information, including user location and saved data. The findings highlight the emerging security challenges associated with large-scale AI systems.
Key Takeaways
- Three distinct security flaws, collectively named the "Gemini Trifecta," were found in Google's AI assistant.
- The vulnerabilities could have led to the theft of personal data, location information, and the compromise of cloud services.
- The attack methods involved various forms of prompt injection, a technique used to manipulate AI behavior.
- Google has since addressed and patched all three vulnerabilities following a responsible disclosure process.
Understanding the Gemini Trifecta Vulnerabilities
The set of three vulnerabilities was discovered by Tenable security researcher Liv Matan. They targeted separate parts of the Gemini ecosystem: the Search Personalization Model, Gemini Cloud Assist, and the Gemini Browsing Tool. Each flaw used a different method to trick the AI into performing unauthorized actions.
These attacks demonstrate how AI systems can be manipulated to become tools for data exfiltration. According to Tenable, the core issue was the AI's inability to distinguish between legitimate user instructions and malicious prompts injected by an attacker.
Flaw 1: Search Personalization Model Injection
The first vulnerability involved a search-injection flaw in Gemini's Search Personalization model. This feature uses a person's search history to provide more relevant responses. Attackers could have exploited this by manipulating a user's Chrome search history using JavaScript.
By injecting malicious prompts into the search history, an attacker could trick Gemini into leaking a user's saved information and location data. The model was unable to differentiate between genuine user queries and the commands injected from an external source, creating a significant privacy risk.
What is Prompt Injection?
Prompt injection is an attack where a malicious actor provides specially crafted input to an AI model to make it ignore its previous instructions and follow the attacker's commands instead. This can cause the AI to reveal sensitive information, generate harmful content, or perform actions it was not designed to do.
Flaw 2: Gemini Cloud Assist Log Injection
The second flaw was a prompt injection vulnerability in Gemini Cloud Assist, a tool designed to help developers manage their cloud environments. This tool can summarize raw logs from various Google Cloud services, such as Cloud Run and App Engine.
An attacker could conceal a malicious prompt within the User-Agent header of an HTTP request. When Gemini Cloud Assist processed the log containing this header, it would execute the hidden command. This could have allowed an attacker to query for sensitive cloud resource information, such as identity and access management (IAM) misconfigurations.
"One impactful attack scenario would be an attacker who injects a prompt that instructs Gemini to query all public assets, or to query for IAM misconfigurations, and then creates a hyperlink that contains this sensitive data," Matan explained in the report. "This should be possible since Gemini has the permission to query assets through the Cloud Asset API."
Flaw 3: Browsing Tool Data Exfiltration
The third vulnerability was an indirect prompt injection flaw affecting the Gemini Browsing Tool. This feature allows the AI to access and summarize the content of web pages to answer user questions. An attacker could exploit this by tricking a user into having Gemini access a malicious webpage.
The malicious page would contain hidden instructions for the AI. When Gemini processed the page, it would execute these instructions, which could command it to gather the user's saved information and location data. The AI would then embed this private data into a request sent to an external server controlled by the attacker, effectively stealing the information without the user's knowledge.
Google's Response and Mitigation
Following Tenable's responsible disclosure, Google implemented fixes for the vulnerabilities. For the Cloud Assist flaw, Google has stopped rendering hyperlinks in responses from log summarizations, preventing attackers from easily exfiltrating data. The company has also added further hardening measures to protect against prompt injection across the Gemini suite.
Broader Implications for AI Security
The Gemini Trifecta serves as a critical reminder of the security risks inherent in increasingly complex AI systems. As AI models are granted more access to personal data and system resources, they become more valuable targets for attackers.
Liv Matan emphasized the shift in the threat landscape. "The Gemini Trifecta shows that AI itself can be turned into the attack vehicle, not just the target," he stated. "As organizations adopt AI, they cannot overlook security."
The report also pointed to similar attack vectors being explored by other researchers. Cybersecurity firm CodeIntegrity recently detailed an attack that abuses Notion's AI agent. In that scenario, attackers hid malicious prompts in a PDF file using white text on a white background. When the AI processed the document, it followed the hidden instructions to collect and send confidential data to the attackers.
Experts argue that traditional security measures like role-based access control (RBAC) may not be sufficient to protect against AI agents with broad access to data. An agent capable of chaining tasks across different documents and services creates a vastly expanded threat surface that requires new security strategies and constant vigilance.





