{"document":{"acknowledgments":[{"urls":["https://kb.cert.org/vuls/id/649739#acknowledgements"]}],"category":"CERT/CC Vulnerability Note","csaf_version":"2.0","notes":[{"category":"summary","text":"### Overview\r\nRetell AI's API creates AI voice agents that have excessive permissions and functionality, as a result of insufficient amounts of guardrails. As a result, attackers can exploit this and conduct large scale social engineering, phishing, and misinformation campaigns. \r\n\r\n### Description\r\nRetell AI offers an [API ](https://docs.retellai.com/api-references/create-phone-call)that can create human sounding voice agents that can then be tasked to perform various business operations, respond to questions, and be automated to complete various other voice related tasks. [Retell AI](https://www.retellai.com/) uses OpenAI’s GPT 4o and 5 models for these conversations, and users can  configure agents with minimal prompt engineering.\r\n\r\nHowever, Retell AI’s lack of sufficient guardrails causes the LLM to respond in unexpected ways and deliver malicious outputs. Guardrails are an important mechanism in LLMs that filter inputs and outputs to ensure models are behaving in intended ethical ways. Retell AI permits voice AI agents to have over-permissive autonomy with the lack of guardrails. This is known as Excessive Agency. Malicious actors need minimal resources and technical knowledge to induce trust, extract data, and conduct large scale phishing operations using Retell AI products. \r\n\r\n### Impact\r\nThe vulnerability targets Retell AI’s ease of deployment and customizability to perform scalable phishing/social engineering attacks. Attackers can feed publicly available resources as well as some instructions to Retell AI’s API to generate high-volume and automated fake calls. These fake calls could lead to unauthorized actions, security breaches, data leaks, and other forms of manipulation.\r\n\r\n### Solution\r\nRetell AI has not released a statement, and coordinated disclosure was attempted. Users should be aware and follow security best practices when speaking to an AI voice agent and avoid sensitive data input. Developers should limit functionality and permissions through instating sufficient guardrails and implement manual human approval for high-risk or high volume tasks. \r\n\r\n### Acknowledgements\r\nThanks to the reporter, Keegan Parr, for the report. The reporters disclosure is available here: https://haxor.zip/ This document was written by Ayushi Kriplani.","title":"Summary"},{"category":"legal_disclaimer","text":"THIS DOCUMENT IS PROVIDED ON AN 'AS IS' BASIS AND DOES NOT IMPLY ANY KIND OF GUARANTEE OR WARRANTY, INCLUDING THE WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE. YOUR USE OF THE INFORMATION ON THE DOCUMENT OR MATERIALS LINKED FROM THE DOCUMENT IS AT YOUR OWN RISK. ","title":"Legal Disclaimer"},{"category":"other","text":"CERT/CC Vulnerability Note is a limited advisory. It primarily identifies vendors impacted by the advisory and not specific products. We only support \"known_affected\" and \"known_not_affected\" status. 
{"category":"legal_disclaimer","text":"THIS DOCUMENT IS PROVIDED ON AN 'AS IS' BASIS AND DOES NOT IMPLY ANY KIND OF GUARANTEE OR WARRANTY, INCLUDING THE WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE. YOUR USE OF THE INFORMATION IN THE DOCUMENT OR MATERIALS LINKED FROM THE DOCUMENT IS AT YOUR OWN RISK.","title":"Legal Disclaimer"},{"category":"other","text":"A CERT/CC Vulnerability Note is a limited advisory. It primarily identifies vendors impacted by the advisory, not specific products. We only support \"known_affected\" and \"known_not_affected\" statuses.\r\nPlease consult the vendor's statements and, if provided, the vendor's advisory URL for more details.","title":"Limitations of Advisory"}],"publisher":{"category":"coordinator","contact_details":"Email: cert@cert.org, Phone: +1 412-268-5800","issuing_authority":"CERT/CC under DHS/CISA (https://www.cisa.gov/cybersecurity); also see https://kb.cert.org/","name":"CERT/CC","namespace":"https://kb.cert.org/"},"references":[{"url":"https://certcc.github.io/certcc_disclosure_policy","summary":"CERT/CC vulnerability disclosure policy"},{"summary":"CERT/CC document released","category":"self","url":"https://kb.cert.org/vuls/id/649739"},{"url":"https://www.retellai.com/","summary":"Retell AI vendor website"},{"url":"https://docs.retellai.com/api-references/create-phone-call","summary":"Retell AI create-phone-call API documentation"},{"url":"https://haxor.zip/","summary":"Reporter's disclosure"}],"title":"Lack of Sufficient Guardrails Leads to Excessive Agency (LLM08) in Some LLM Applications","tracking":{"current_release_date":"2025-11-24T14:53:11+00:00","generator":{"engine":{"name":"VINCE","version":"3.0.35"}},"id":"VU#649739","initial_release_date":"2025-11-24T14:19:56.453301+00:00","revision_history":[{"date":"2025-11-24T14:53:11+00:00","number":"1.20251124145311.2","summary":"Released on 2025-11-24T14:53:11+00:00"}],"status":"final","version":"1.20251124145311.2"}},"vulnerabilities":[{"title":"System prompt design flaw in Retell AI allows remote attackers to impersonate humans and conduct social engineering via crafted agent instructions.","notes":[{"category":"summary","text":"System prompt design flaw in Retell AI allows remote attackers to impersonate humans and conduct social engineering via crafted agent instructions."}],"ids":[{"system_name":"CERT/CC VU Identifier","text":"VU#649739"}]},{"title":"Business logic error in agent activity handling in Retell AI allows remote attackers to perform malicious actions such as phishing and social engineering via unvalidated system prompts.","notes":[{"category":"summary","text":"Business logic error in agent activity handling in Retell AI allows remote attackers to perform malicious actions such as phishing and social engineering via unvalidated system prompts."}],"ids":[{"system_name":"CERT/CC VU Identifier","text":"VU#649739"}]}],"product_tree":{"branches":[]}}