Patch Watch · AI Security

LiteLLM CVE-2026-42208: Critical Pre-Auth SQL Injection Exposes AI Gateway Secrets

Owen Park
AI security researcher · Updated May 16, 2026, 2:31 AM EDT

Critical LiteLLM CVE-2026-42208 pre-auth SQL injection exposes AI gateway databases, virtual keys and upstream model-provider credentials.

A critical SQL injection vulnerability in LiteLLM Proxy, tracked as CVE-2026-42208, has put AI gateway deployments at risk of database compromise, credential exposure and unauthorized access to upstream model providers. The flaw affects LiteLLM versions 1.81.16 through 1.83.6 and is fixed in 1.83.7 and later, with some security guidance recommending that operators move to newer stable builds such as v1.83.10-stable where available.

The vulnerability sits in LiteLLM Proxy's API-key verification path, where the software checks incoming Authorization: Bearer headers before routing requests to large language model providers. In vulnerable versions, a caller-supplied token value could reach a database query without proper parameterization, allowing an unauthenticated attacker to alter SQL behavior by sending a crafted authorization header to an LLM API endpoint such as POST /chat/completions.

The issue is rated Critical, with CVSS v3.1 9.8 in NVD and CVSS v4 9.3 in public advisory reporting. It is classified as CWE-89: SQL Injection. U.S. cyber authorities added the vulnerability to the Known Exploited Vulnerabilities catalog on May 8, 2026, after exploitation was reported in the wild.


Why This LiteLLM Flaw Is High-Risk

LiteLLM is not just another web application component. It is commonly used as an AI gateway that centralizes access to multiple model providers, exposing OpenAI-compatible APIs while managing routing, budgets, virtual keys, provider credentials and proxy configuration.

That makes the backing database unusually sensitive. A successful attack could expose:

  • LiteLLM virtual API keys
  • Master keys
  • Stored provider credentials
  • Proxy configuration
  • Environment-backed secrets
  • Model-provider access tokens
  • Spend-control and routing data

In practical terms, compromise of an internet-facing LiteLLM Proxy could become a compromise of the AI accounts it fronts. The actual blast radius depends on deployment configuration, database permissions, network exposure and what secrets were stored in the proxy.

Threat researchers observed targeted exploitation attempts shortly after public disclosure. The activity was not generic SQL injection scanning; it targeted LiteLLM-specific database structures associated with virtual keys, provider credentials and environment configuration. Public reporting has not established confirmed follow-on abuse of stolen keys in every case, but the targeting showed clear knowledge of LiteLLM internals.

Affected Versions

Status              LiteLLM Version
Affected            >=1.81.16, <1.83.7
Vendor wording      v1.81.16 through v1.83.6
Fixed               1.83.7 and later
Recommended target  Latest stable patched release, such as v1.83.10-stable where available

Operators running any affected version should upgrade immediately, especially if the proxy was reachable from the internet or any untrusted network.

How the Vulnerability Works

The vulnerable code path is part of proxy API-key verification. LiteLLM receives an HTTP request, reads the Authorization: Bearer value, and checks that token against database records. In affected versions, the token lookup path could mix attacker-controlled input into SQL text instead of passing it as a separate bound parameter.

The fix changes that behavior to use parameterized SQL, where the token is treated as data rather than executable query text.

The vulnerable route can be reached before successful authentication, which is what makes the flaw especially serious. An attacker does not need a valid LiteLLM key to trigger the vulnerable query path.
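The difference between the two behaviors can be sketched with a minimal stand-in. The table and column names below (`virtual_keys`, `key_alias`) are hypothetical, and SQLite stands in for the proxy's real database; this is not LiteLLM's actual code, only an illustration of a string-built lookup versus a parameterized one.

```python
import sqlite3

def lookup_token_vulnerable(conn, token: str):
    # ANTI-PATTERN (illustration only): attacker-controlled token text is
    # interpolated directly into the SQL string, so quotes and SQL
    # metacharacters in the bearer value rewrite the query itself.
    return conn.execute(
        f"SELECT key_alias FROM virtual_keys WHERE token = '{token}'"
    ).fetchone()

def lookup_token_fixed(conn, token: str):
    # Parameterized form: the driver binds the token as data, never as SQL.
    return conn.execute(
        "SELECT key_alias FROM virtual_keys WHERE token = ?", (token,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE virtual_keys (token TEXT, key_alias TEXT)")
conn.execute("INSERT INTO virtual_keys VALUES ('sk-real', 'prod')")

payload = "' OR '1'='1"                         # classic injection probe
print(lookup_token_vulnerable(conn, payload))   # matches a row it shouldn't: ('prod',)
print(lookup_token_fixed(conn, payload))        # None: payload treated as data
```

The first function lets a bearer value of `' OR '1'='1` match a stored key without knowing it; the parameterized version compares the whole payload as an opaque string and finds nothing.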

Safe Lab Reproduction Overview

Reproduction should only be performed in a disposable local lab that you own or are explicitly authorized to test. Do not test this against public or third-party LiteLLM instances.

A safe validation flow is:

  1. Deploy a vulnerable LiteLLM Proxy version in the affected range, such as a pre-1.83.7 build.
  2. Use a test PostgreSQL database with no real provider credentials.
  3. Bind the service to localhost or a private lab network only.
  4. Send a baseline request to an LLM API route such as /chat/completions.
  5. Send a second request with a deliberately malformed bearer value designed to test SQL handling.
  6. Confirm the issue using a non-destructive timing, rejection, or error-handling signal.
  7. Upgrade to 1.83.7+ and verify that the same test no longer changes query behavior.

A safe test should prove whether attacker-controlled input can influence query behavior without dumping tables, extracting secrets or modifying data. Publishing or using credential-dumping payloads and table-extraction commands creates unnecessary operational risk.
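The flow above can be sketched as a small stdlib-only probe. The endpoint URL, port, and model name are assumptions for a disposable local deployment, not values from the advisory; the only signal used is whether two invalid keys are rejected consistently, so nothing is read from or written to the database.

```python
import json
import urllib.error
import urllib.request

# Lab-only endpoint; host and port are assumptions for a local deployment.
LAB_URL = "http://127.0.0.1:4000/chat/completions"

def send_probe(bearer: str, timeout: float = 10.0):
    """POST one chat request with the given bearer; return the HTTP status."""
    body = json.dumps({
        "model": "test-model",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        LAB_URL,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {bearer}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def inconsistent_rejection(baseline_status: int, probe_status: int) -> bool:
    # Both bearers are invalid keys, so a patched proxy should reject both
    # identically (e.g., 401). A 500 or any divergence on the quoted probe
    # suggests the bearer value reached SQL text unescaped.
    return probe_status != baseline_status

def run_lab_check() -> bool:
    # Call this only against a lab instance you own.
    baseline = send_probe("sk-invalid-but-benign")
    probe = send_probe("sk-invalid' --")  # non-destructive quote/comment probe
    print(f"baseline={baseline} probe={probe}")
    return inconsistent_rejection(baseline, probe)
```

After upgrading to 1.83.7+, the same `run_lab_check()` call should return False, since both invalid bearers are rejected the same way.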

What Defenders Should Search For

Operators should review proxy logs, webserver logs and database query history for suspicious authorization headers and SQL-like syntax.

High-signal patterns include:

  • POST /chat/completions
  • POST /v1/chat/completions
  • Authorization: Bearer values containing quotes or SQL metacharacters
  • Authorization headers containing SQL keywords
  • Database errors during API-key verification
  • Unexpected access to LiteLLM virtual-key or credential tables
  • The client user-agent string Python/3.12 aiohttp/3.9.1, reported in observed exploitation attempts

Two IP addresses were reported in observed exploitation attempts:

  • 65.111.27.132
  • 65.111.25.67

These should be treated as indicators of attempted exploitation, not definitive attacker identity. Source IPs may represent rented infrastructure, proxies or other disposable egress points.
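A minimal triage script along these lines can surface candidate log lines. The log format assumed here is hypothetical, and the regex is deliberately noisy (it can flag benign keys containing substrings like `-or-`), so hits are leads for review, not verdicts.

```python
import re

# Source IPs from public reporting on attempted exploitation.
REPORTED_IPS = {"65.111.27.132", "65.111.25.67"}

# Quotes, comment markers, statement separators, or common SQL keywords
# appearing inside a bearer value. Intentionally broad; expect false positives.
SQLI_BEARER = re.compile(
    r"Authorization:\s*Bearer\s+\S*"
    r"(?:'|\"|--|;|\b(?:SELECT|UNION|OR|AND|SLEEP)\b)",
    re.IGNORECASE,
)

def scan_line(line: str) -> list:
    """Return the reasons a single log line looks suspicious (may be empty)."""
    hits = []
    if SQLI_BEARER.search(line):
        hits.append("sql-metacharacters-in-bearer")
    if any(ip in line for ip in REPORTED_IPS):
        hits.append("reported-source-ip")
    return hits

sample = (
    "65.111.27.132 - POST /chat/completions "
    "Authorization: Bearer sk-x' OR '1'='1"
)
print(scan_line(sample))  # ['sql-metacharacters-in-bearer', 'reported-source-ip']
```

Running `scan_line` over proxy and webserver access logs line by line, and reviewing every non-empty result, covers the header-based indicators above; database query history still needs to be checked separately.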

How to Fix CVE-2026-42208

The primary fix is to upgrade LiteLLM to 1.83.7 or later. The safer operational target is the latest stable patched release available to the organization.

For Python package deployments:

pip install --upgrade "litellm>=1.83.7"

For containerized deployments, move to a fixed LiteLLM image at v1.83.7-stable or later, preferably the newest stable image available, and confirm the running container is actually using the patched image.

After upgrading, restart the proxy and verify the deployed version:

python -c "import litellm; print(litellm.__version__)"
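To turn that printed version into a pass/fail signal across a fleet, the string can be compared against the advisory ranges. The parser below is a naive sketch: it takes the first three numeric components and ignores pre-release tags and build metadata, which is enough for release strings like 1.83.7 or v1.83.10-stable.

```python
import re

AFFECTED_MIN = (1, 81, 16)  # first affected release per the advisory
FIXED_MIN = (1, 83, 7)      # first patched release

def parse_version(v: str) -> tuple:
    # Naive parse: first three numeric components, so
    # "v1.83.10-stable" -> (1, 83, 10). Pre-release tags are ignored.
    return tuple(int(p) for p in re.findall(r"\d+", v)[:3])

def classify_version(v: str) -> str:
    pv = parse_version(v)
    if pv >= FIXED_MIN:
        return "fixed"
    if pv >= AFFECTED_MIN:
        return "affected"
    return "outside affected range"

for v in ("1.81.16", "1.83.6", "1.83.7", "v1.83.10-stable"):
    print(v, "->", classify_version(v))
```

Feeding this the output of `litellm.__version__` from each deployment gives a quick inventory of which instances still fall in the affected window.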

Temporary Mitigation

If an immediate upgrade is impossible, operators should remove public exposure first. Put LiteLLM behind private networking, VPN, mTLS, an authenticated reverse proxy, or trusted IP allowlists while patching is scheduled.

Some advisories also reference configuration-level mitigations intended to reduce exposure of the vulnerable path. Those should be treated only as short-term controls. They do not replace patching, and any exposed vulnerable instance should still be investigated.

Post-Exposure Response

If a vulnerable LiteLLM Proxy was exposed to the internet or an untrusted network, defenders should assume attempted exploitation was possible and take a broader incident-response approach.

Recommended actions:

  • Upgrade immediately to 1.83.7+, preferably the latest stable patched release.
  • Review PostgreSQL query history for suspicious SQL patterns.
  • Search proxy and webserver logs for malicious Authorization headers.
  • Rotate LiteLLM virtual keys and master keys.
  • Rotate upstream provider credentials stored in or reachable through LiteLLM.
  • Review OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Vertex AI and other provider usage logs for unexpected model calls.
  • Check billing records for abnormal LLM usage.
  • Restrict LiteLLM Proxy behind private networking, VPN, mTLS or a tightly controlled reverse proxy.
  • Add monitoring for SQL metacharacters and SQL keywords in bearer-token fields.
  • Review database user permissions and remove unnecessary write or administrative privileges.

The Bigger Lesson

CVE-2026-42208 shows how AI gateways have become high-value credential concentration points. A single proxy can hold access to multiple paid model providers, internal applications and production workflows. That raises the stakes for classic web vulnerabilities such as SQL injection.

For organizations using LiteLLM in production, the priority is clear: patch first, investigate exposure second, rotate secrets where risk exists and stop exposing AI gateways directly to untrusted networks. The vulnerability is fixed, but any affected internet-facing deployment should be treated as a serious security incident until logs and credentials have been reviewed.