


Perceptive Security
SOC/SIEM Consultancy

Published:
26 March 2026, 23:00:00
Alert date:
27 March 2026, 01:02:24
Source:
nvd.nist.gov
Emerging Technologies, Supply Chain & Dependencies
vLLM, an inference and serving engine for large language models, contains a critical vulnerability in versions 0.10.1 through 0.17.x. The vulnerability stems from two model implementation files that hardcode trust_remote_code=True when loading sub-components, bypassing user security settings that explicitly disable remote code trust via --trust-remote-code=False. This security bypass enables remote code execution attacks through malicious model repositories, even when users have explicitly opted out of trusting remote code. The vulnerability affects the core security model of vLLM by ignoring user-specified security preferences. Version 0.18.0 contains patches that address this issue by properly respecting the user's trust_remote_code configuration.
Technical details
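The bypass can be illustrated with a minimal sketch. This is hypothetical code, not vLLM's actual implementation: it models a model-loading helper that hardcodes trust_remote_code=True versus the patched pattern of propagating the user's setting.

```python
# Illustrative sketch only -- not vLLM source code. It models how a
# sub-component loader that hardcodes trust_remote_code=True silently
# overrides a user who launched the server with remote code trust disabled.

def load_subcomponent_vulnerable(repo: str, user_trust_remote_code: bool) -> dict:
    # BUG: the user's setting is ignored; remote code is always trusted,
    # so a malicious model repository can achieve code execution.
    return {"repo": repo, "trust_remote_code": True}

def load_subcomponent_patched(repo: str, user_trust_remote_code: bool) -> dict:
    # FIX (as in v0.18.0): propagate the user's configuration instead
    # of hardcoding True.
    return {"repo": repo, "trust_remote_code": user_trust_remote_code}
```

Even with the user opting out (user_trust_remote_code=False), the vulnerable helper returns trust_remote_code=True, which is the security bypass described in the advisory.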
Mitigation steps:
Upgrade to vLLM 0.18.0 or later, which patches the issue by properly respecting the user's trust_remote_code configuration. Until then, avoid loading models from untrusted repositories.
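A hedged sketch for checking whether a locally installed vLLM falls in the affected range (0.10.1 up to but not including 0.18.0). The version parsing is simplified and does not handle pre-release suffixes such as rc tags:

```python
from importlib.metadata import version, PackageNotFoundError

AFFECTED_MIN = (0, 10, 1)   # first affected release per the advisory
PATCHED = (0, 18, 0)        # first patched release

def is_affected(v: str) -> bool:
    """Return True if a dotted vLLM version string is in the affected range.

    Simplified parsing: assumes a plain X.Y.Z version string.
    """
    parts = tuple(int(x) for x in v.split(".")[:3])
    return AFFECTED_MIN <= parts < PATCHED

try:
    installed = version("vllm")
    if is_affected(installed):
        print(f"vLLM {installed} is affected; upgrade to 0.18.0 or later.")
except PackageNotFoundError:
    pass  # vLLM is not installed in this environment
```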
Affected products:
vLLM
Related links:
https://nvd.nist.gov/vuln/detail/CVE-2026-27893
https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72
https://github.com/vllm-project/vllm/pull/36192
https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59
Related CVEs:
Related threat actors:
IOCs:
This article was created with the assistance of AI technology by Perceptive.
