

Published:

26 March 2026 at 23:00:00

Alert date:

27 March 2026 at 01:02:24

Source:

nvd.nist.gov


Emerging Technologies, Supply Chain & Dependencies

vLLM, an inference and serving engine for large language models, contains a critical vulnerability in versions 0.10.1 through 0.17.x. The vulnerability stems from two model implementation files that hardcode trust_remote_code=True when loading sub-components, bypassing user security settings that explicitly disable remote code trust via --trust-remote-code=False. This security bypass enables remote code execution attacks through malicious model repositories, even when users have explicitly opted out of trusting remote code. The vulnerability affects the core security model of vLLM by ignoring user-specified security preferences. Version 0.18.0 contains patches that address this issue by properly respecting the user's trust_remote_code configuration.

Technical details
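
The advisory does not name the two affected model implementation files, so the following is only a minimal sketch of the general failure mode; the function names and sub-component loading shown here are hypothetical. The essential difference is between hardcoding trust_remote_code=True in a transformers from_pretrained call and threading through the value the user actually configured (for example via --trust-remote-code=False).

# Hypothetical sketch of the vulnerable pattern: a model implementation
# file loads a sub-component with trust_remote_code hardcoded to True,
# ignoring whatever the user configured when launching vLLM.
from transformers import AutoConfig, AutoModel

def load_subcomponent_vulnerable(repo_id: str):
    # BAD: any custom modeling code shipped in the repository is
    # downloaded and executed, even if the user explicitly opted out
    # of trusting remote code.
    config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
    return AutoModel.from_pretrained(repo_id, config=config, trust_remote_code=True)

def load_subcomponent_patched(repo_id: str, trust_remote_code: bool = False):
    # FIXED (conceptually, as in 0.18.0): the user's setting is passed
    # through to every from_pretrained call, so repository-supplied code
    # only runs when the user explicitly opted in.
    config = AutoConfig.from_pretrained(repo_id, trust_remote_code=trust_remote_code)
    return AutoModel.from_pretrained(repo_id, config=config, trust_remote_code=trust_remote_code)

With the vulnerable pattern, a malicious model repository can ship arbitrary Python in its custom modeling files, and that code executes inside the serving process regardless of the operator's security settings, which is the remote code execution path described above.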

Mitigation steps:
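
The only fix stated in the advisory is the patched release: upgrade to vLLM 0.18.0 or later, which respects the user's trust_remote_code configuration. Until the upgrade is in place, loading model repositories only from trusted sources is prudent, since --trust-remote-code=False cannot be relied on for the affected code paths. A minimal sketch of a version check is below; it assumes the packaging library is available alongside vLLM.

# Minimal sketch: confirm the running vLLM is at or above the patched
# release (0.18.0, per the advisory) before serving untrusted model repos.
from packaging.version import Version
import vllm

if Version(vllm.__version__) < Version("0.18.0"):
    raise RuntimeError(
        f"vLLM {vllm.__version__} is affected; upgrade to 0.18.0 or later."
    )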

Affected products:

vLLM

Related links:

Related CVEs:

Related threat actors:

IOCs:

This article was created with the assistance of AI technology by Perceptive.

© 2025 by Perceptive Security. All rights reserved.

email: info@perceptivesecurity.com

This website displays information originating from external sources; Perceptive accepts no responsibility for the accuracy, completeness, or timeliness of this information.
