An recent examination has unveiled that Google’s Gemini for Workspace, a adaptable AI aide integrated across various Google products, is prone to indirect prompt injection attacks.
These weaknesses enable malevolent third parties to manipulate the aide to generate deceiving or unintended responses, escalating significant worries about the dependability and reliability of the information generated by this chatbot.
Gemini for Workspace is crafted to enhance productivity by fusing AI-powered tools into Google products like Gmail, Google Slides, and Google Drive.
However, Hidden Layer researchers have showcased via detailed proof-of-concept instances that attackers can leverage indirect prompt injection weaknesses to undermine the integrity of the responses produced by the aimed Gemini instance.
One of the most alarming facets of these weaknesses is the capability to execute phishing assaults.
For example, assailants can formulate malicious emails that, when processed by Gemini for Workspace, provoke the aide to showcase deceiving messages, like counterfeit alerts about jeopardized passwords and directives to visit malevolent websites to reset passwords.
Furthermore, researchers have exhibited that these vulnerabilities extend beyond Gmail to other Google products.
For instance, in Google Slides, attackers can infuse malicious payloads into speaker notes, leading Gemini for Workspace to generate synopses that encompass unintended content, including the lyrics to a celebrated song.
The scrutiny also unveiled that Gemini for Workspace in Google Drive functions analogously to a standard RAG (Retrieve, Augment, Generate) instance, granting attackers to cross-inject documents and manipulate the aide’s results.
This signifies that assailants can distribute malicious documents with other users, undermining the integrity of the responses produced by the aimed Gemini instance.
In spite of these discoveries, Google has categorized these vulnerabilities as “Intended Behaviors,” indicating that the corporation does not consider them as security concerns.
Nevertheless, the implications of these vulnerabilities are noteworthy, notably in sensitive contexts where the dependability and reliability of information are essential.
The detection of these vulnerabilities spotlights the significance of being watchful when utilizing LLM-powered tools. Users must be informed of the feasible risks linked with these tools and implement required precautions to safeguard themselves from malevolent attacks.
As Google proceeds to introduce Gemini for Workspace to users, it is vital that the firm rectifies these vulnerabilities to safeguard the integrity and reliability of the information produced by this chatbot.
The post Google’s Gemini for Workspace Prone to Prompt Injection Attacks appeared first Cyber Security News.