{"document":{"acknowledgments":[{"urls":["https://kb.cert.org/vuls/id/148244#acknowledgements"]}],"category":"CERT/CC Vulnerability Note","csaf_version":"2.0","notes":[{"category":"summary","text":"### Overview\r\n\r\n[PandasAI](https://pandas-ai.com/), an open source project by [SinaptikAI](https://sinaptik.ai/), has been found vulnerable to [Prompt Injection](https://en.wikipedia.org/wiki/Prompt_injection) attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution. In response, SinaptikAI has implemented specific security configurations to address this vulnerability.\r\n\r\n### Description\r\n\r\nPandasAI is a Python library that allows users to interact with their data using natural language queries. The library parses these queries into Python or SQL code, leveraging a large language model (LLM), such as OpenAI's GPT, to generate explanations, insights, or code. As part of its setup, users import the AI `Agent` class, instantiate it with their data, and establish a connection to the database. Once connected, the AI agent maintains context throughout the discussion, treating the user's ongoing queries as prompts.\r\n\r\nA vulnerability was discovered that enables arbitrary Python code execution through prompt injection. Researchers at NVIDIA demonstrated the ability to bypass PandasAI's restrictions, such as blocks on certain module imports, jailbreak protections, and the use of allow lists. By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.\r\n\r\nThis vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents. 
In the case of PandasAI, any code generated and executed by the agent is implicitly trusted, allowing attackers with access to the prompt interface to inject malicious Python or SQL code. The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code, leading to remote code execution (RCE), system compromise, or pivoting attacks on connected services. The vulnerability is tracked as [CVE-2024-12366](https://www.cve.org/CVERecord?id=CVE-2024-12366). SinaptikAI has introduced new configuration parameters to address this issue and allow users to choose an appropriate security configuration for their installation and setup.\r\n\r\n### Impact\r\n\r\nAn attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code. This could result in arbitrary code execution, enabling attackers to compromise the system running PandasAI or maintain persistence within the environment.\r\n\r\n### Solution\r\n\r\nSinaptikAI has introduced a security parameter to the configuration file of the PandasAI project. Users can now select one of three security configurations:\r\n\r\n1. Standard: Default security settings suitable for most use cases.\r\n2. Advanced: Higher security settings for environments with stricter requirements.\r\n3. None: Disables security features (not recommended).\r\n\r\nBy choosing the appropriate configuration, users can tailor PandasAI's security to their specific needs. SinaptikAI has also released a sandbox. More information regarding the sandbox can be found on the relevant [documentation](https://docs.getpanda.ai/v3/privacy-security) page. 
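Settings of this kind govern checks applied to LLM-generated code before it runs, for example restricting which modules that code may import. The general technique can be sketched with a simplified, hypothetical allow-list checker (an illustration only, not PandasAI's actual implementation):

```python
import ast

# Hypothetical sketch of an import allow-list check applied to
# LLM-generated code before execution. Illustrative only; this is
# not PandasAI's actual implementation.
ALLOWED_MODULES = {'pandas', 'numpy', 'math'}

def imports_are_allowed(code: str) -> bool:
    # True only if every import in the generated code targets an
    # allow-listed top-level module.
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(a.name.split('.')[0] not in ALLOWED_MODULES for a in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if (node.module or '').split('.')[0] not in ALLOWED_MODULES:
                return False
    return True

print(imports_are_allowed('import pandas as pd'))  # True
print(imports_are_allowed('import os'))            # False
```

As the NVIDIA researchers demonstrated, checks of this kind can be bypassed by sufficiently creative prompts, which is why stricter configurations and the sandbox are offered for higher-risk deployments.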
\r\n\r\n### Acknowledgements\r\nThank you to the reporter, the NVIDIA AI Red Team (Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake). This document was written by Christopher Cullen.","title":"Summary"},{"category":"legal_disclaimer","text":"THIS DOCUMENT IS PROVIDED ON AN 'AS IS' BASIS AND DOES NOT IMPLY ANY KIND OF GUARANTEE OR WARRANTY, INCLUDING THE WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE. YOUR USE OF THE INFORMATION ON THE DOCUMENT OR MATERIALS LINKED FROM THE DOCUMENT IS AT YOUR OWN RISK. ","title":"Legal Disclaimer"},{"category":"other","text":"CERT/CC Vulnerability Note is a limited advisory. It primarily identifies vendors impacted by the advisory, not specific products. We only support \"known_affected\" and \"known_not_affected\" statuses. Please consult the vendor's statement and the advisory URL, if provided by the vendor, for more details.","title":"Limitations of Advisory"}],"publisher":{"category":"coordinator","contact_details":"Email: cert@cert.org, Phone: +1 412 268 5800","issuing_authority":"CERT/CC under DHS/CISA https://www.cisa.gov/cybersecurity also see https://kb.cert.org/ ","name":"CERT/CC","namespace":"https://kb.cert.org/"},"references":[{"url":"https://certcc.github.io/certcc_disclosure_policy","summary":"CERT/CC vulnerability disclosure policy"},{"summary":"CERT/CC document 
released","category":"self","url":"https://kb.cert.org/vuls/id/148244"},{"url":"https://www.ibm.com/think/insights/prevent-prompt-injection","summary":"https://www.ibm.com/think/insights/prevent-prompt-injection"},{"url":"https://www.lakera.ai/blog/guide-to-prompt-injection","summary":"https://www.lakera.ai/blog/guide-to-prompt-injection"},{"url":"https://www.lasso.security/blog/prompt-injection","summary":"https://www.lasso.security/blog/prompt-injection"},{"url":"https://www.lepide.com/blog/what-is-prompt-injection-attacks-and-how-to-prevent-them/","summary":"https://www.lepide.com/blog/what-is-prompt-injection-attacks-and-how-to-prevent-them/"}],"title":"PandasAI interactive prompt function can be exploited to run arbitrary Python code through prompt injection, which can lead to remote code execution (RCE)","tracking":{"current_release_date":"2025-02-11T12:41:49+00:00","generator":{"engine":{"name":"VINCE","version":"3.0.35"}},"id":"VU#148244","initial_release_date":"2025-02-11 12:41:49.024312+00:00","revision_history":[{"date":"2025-02-11T12:41:49+00:00","number":"1.20250211124149.1","summary":"Released on 2025-02-11T12:41:49+00:00"}],"status":"final","version":"1.20250211124149.1"}},"vulnerabilities":[{"title":"PandasAI uses an interactive prompt function that is vulnerable to prompt injection, allowing an attacker to run arbitrary Python code that can lead to remote code execution (RCE) instead of the intended explanation of the natural language query by the LLM.","notes":[{"category":"summary","text":"PandasAI uses an interactive prompt function that is vulnerable to prompt injection, allowing an attacker to run arbitrary Python code that can lead to remote code execution (RCE) instead of the intended explanation of the natural language query by the LLM."}],"cve":"CVE-2024-12366","ids":[{"system_name":"CERT/CC VU Identifier","text":"VU#148244"}]}],"product_tree":{"branches":[]}}