
CERT Coordination Center

Lack of Sufficient Guardrails Leads to Excessive Agency (LLM08) in Some LLM Applications

Vulnerability Note VU#649739

Original Release Date: 2025-11-24 | Last Revised: 2025-11-24

Overview

Retell AI's API creates AI voice agents with excessive permissions and functionality as a result of insufficient guardrails. Attackers can exploit this to conduct large-scale social engineering, phishing, and misinformation campaigns.

Description

Retell AI offers an API that creates human-sounding voice agents that can be tasked to perform various business operations, answer questions, and automate other voice-related tasks. Retell AI uses OpenAI's GPT-4o and GPT-5 models for these conversations, and users can configure agents with minimal prompt engineering.

However, Retell AI's lack of sufficient guardrails allows the LLM to respond in unexpected ways and deliver malicious outputs. Guardrails are an important mechanism in LLM applications that filter inputs and outputs to ensure models behave in intended, ethical ways. Without such guardrails, Retell AI permits voice AI agents to operate with over-permissive autonomy, a weakness known as Excessive Agency. Malicious actors need minimal resources and technical knowledge to induce trust, extract data, and conduct large-scale phishing operations using Retell AI products.
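To illustrate the filtering mechanism described above, the following is a minimal Python sketch of input and output guardrails around a voice-agent conversation turn. The function names, blocked patterns, and pipeline structure are illustrative assumptions and do not represent Retell AI's or any other vendor's actual API.

    # Minimal sketch of input/output guardrails for a voice-agent pipeline.
    # Names and patterns are hypothetical, for illustration only.
    import re

    BLOCKED_PATTERNS = [
        r"social security number",
        r"one[- ]time passcode",
        r"bank account",
    ]

    def input_guardrail(caller_utterance: str) -> bool:
        """Return True if the caller's input may be forwarded to the model."""
        lowered = caller_utterance.lower()
        return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

    def output_guardrail(agent_response: str) -> str:
        """Filter the model's output before it is spoken to the caller."""
        lowered = agent_response.lower()
        if any(re.search(p, lowered) for p in BLOCKED_PATTERNS):
            # Refuse rather than let the agent solicit sensitive data.
            return "I'm sorry, I can't help with that request."
        return agent_response

In practice such filters are typically combined with model-level policies and logging, but even simple checks like these reduce the chance that an agent solicits or repeats sensitive information.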

Impact

This vulnerability exploits Retell AI's ease of deployment and customizability to perform scalable phishing and social engineering attacks. Attackers can feed publicly available resources, along with a small set of instructions, to Retell AI's API to generate high volumes of automated fake calls. These fake calls could lead to unauthorized actions, security breaches, data leaks, and other forms of manipulation.

Solution

Retell AI has not released a statement, although coordinated disclosure was attempted. Users should be aware of this risk, follow security best practices when speaking to an AI voice agent, and avoid providing sensitive data. Developers should limit agent functionality and permissions by instituting sufficient guardrails, and should implement manual human approval for high-risk or high-volume tasks.
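As an illustration of manual human approval for high-risk or high-volume tasks, the following Python sketch shows a simple approval gate and volume cap for outbound agent actions. The threshold, action names, and queueing mechanism are assumptions for demonstration only, not a vendor-provided control.

    # Illustrative human-approval gate and volume limit for agent actions.
    # Threshold and action names are assumptions, not a vendor API.
    from dataclasses import dataclass, field

    MAX_CALLS_PER_HOUR = 50  # assumed threshold for "high volume"
    HIGH_RISK_ACTIONS = {"request_payment", "collect_credentials", "transfer_call"}

    @dataclass
    class ApprovalGate:
        calls_this_hour: int = 0
        pending: list = field(default_factory=list)

        def authorize(self, action: str) -> bool:
            """Allow the action only if it is low-risk and under the volume cap."""
            if action in HIGH_RISK_ACTIONS or self.calls_this_hour >= MAX_CALLS_PER_HOUR:
                self.pending.append(action)  # hold for manual human review
                return False
            self.calls_this_hour += 1
            return True

    gate = ApprovalGate()
    print(gate.authorize("answer_faq"))        # True: proceeds automatically
    print(gate.authorize("request_payment"))   # False: held for human approval

The design intent is that routine, low-risk interactions proceed automatically, while anything sensitive or unusually voluminous is queued for a human operator rather than executed by the agent on its own.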

Acknowledgements

Thanks to the reporter, Keegan Parr, for the report. The reporter's disclosure is available at https://haxor.zip/. This document was written by Ayushi Kriplani.

Vendor Information


Bland AI Unknown

Notified:  2025-09-23 Updated: 2025-11-24

VU#649739.1 Unknown
VU#649739.2 Unknown

Vendor Statement

We have not received a statement from the vendor.

Lindy Unknown

Notified:  2025-09-23 Updated: 2025-11-24

VU#649739.1 Unknown
VU#649739.2 Unknown

Vendor Statement

We have not received a statement from the vendor.

Retell AI Unknown

Notified:  2025-08-29 Updated: 2025-11-24

VU#649739.1 Unknown
VU#649739.2 Unknown

Vendor Statement

We have not received a statement from the vendor.

Synthflow AI Unknown

Notified:  2025-09-24 Updated: 2025-11-24

VU#649739.1 Unknown
VU#649739.2 Unknown

Vendor Statement

We have not received a statement from the vendor.

Vapi AI Unknown

Notified:  2025-09-23 Updated: 2025-11-24

VU#649739.1 Unknown
VU#649739.2 Unknown

Vendor Statement

We have not received a statement from the vendor.


Other Information

API URL: VINCE JSON | CSAF
Date Public: 2025-11-24
Date First Published: 2025-11-24
Date Last Updated: 2025-11-24 14:53 UTC
Document Revision: 2

Sponsored by CISA.