


Perceptive Security
SOC/SIEM Consultancy

Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection
Published:
December 26, 2025, 09:27:00
Alert date:
December 26, 2025, 10:02:33
Source:
thehackernews.com
Supply Chain & Dependencies, Web Technologies
A critical security flaw has been disclosed in LangChain Core that could be exploited by attackers to steal sensitive secrets and influence large language model (LLM) responses through prompt injection. The vulnerability resides in langchain-core, the Python package that provides the base interfaces and model-agnostic abstractions for building LangChain applications. The flaw enables serialization injection attacks that can compromise secrets and manipulate LLM behavior, posing a significant risk for applications built on the LangChain ecosystem.
Technical details
A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions. These functions do not escape dictionaries containing an 'lc' key when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects, so when user-controlled data contains this key structure, it is treated during deserialization as a legitimate LangChain object rather than as plain user data. This allows attackers to instantiate arbitrary objects, potentially leading to secret extraction from environment variables, instantiation of classes within pre-approved trusted namespaces (langchain_core, langchain, langchain_community), and arbitrary code execution via Jinja2 templates. Attackers can inject such LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.
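The core confusion can be illustrated with a toy model. The sketch below is not LangChain's actual code; it is a minimal stand-in showing how a serializer that reserves a marker key ('lc') for its own objects, but does not escape that key in user-supplied dictionaries, lets plain user data be revived as an object on deserialization:

```python
import json

# Toy registry standing in for LangChain's trusted namespaces
# (hypothetical class name, for illustration only).
TRUSTED_CLASSES = {"Greeting": lambda kwargs: f"Greeting({kwargs})"}

def naive_dumps(obj):
    # Free-form dicts are serialized as-is: the reserved "lc"
    # marker key is NOT escaped. This mirrors the flaw in
    # dumps()/dumpd() described above.
    return json.dumps(obj)

def naive_loads(s):
    data = json.loads(s)
    # Any dict carrying the "lc" marker is treated as a serialized
    # object and re-instantiated -- even if it originated as plain
    # user data (e.g. a metadata field filled via prompt injection).
    if isinstance(data, dict) and "lc" in data:
        ctor = TRUSTED_CLASSES.get(data.get("id"))
        if ctor is not None:
            return ctor(data.get("kwargs", {}))
    return data

# Attacker smuggles an object payload through a "plain data" field.
user_metadata = {"lc": 1, "id": "Greeting", "kwargs": {"who": "attacker"}}
restored = naive_loads(naive_dumps(user_metadata))
print(restored)  # instantiated as an object, not returned as a plain dict
```

In the real vulnerability the revivable classes include ones that read secrets from environment variables or render Jinja2 templates, which is what escalates this marker-key confusion into secret theft and code execution.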
Mitigation steps:
Update to patched versions immediately: langchain-core 1.2.5 or 0.3.81, @langchain/core 1.1.8 or 0.3.80, langchain 1.2.3 or 0.3.37. The patch introduces new restrictive defaults in load() and loads() via an allowlist parameter 'allowed_objects' that allows users to specify which classes can be serialized/deserialized. Jinja2 templates are blocked by default, and the 'secrets_from_env' option is now set to False to disable automatic secret loading from the environment.
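The patched behavior follows a default-deny allowlist pattern. The sketch below models that principle only; it is not LangChain's real load()/loads() implementation (the helper name safe_loads and the class registry are hypothetical, though the 'allowed_objects' parameter name comes from the advisory):

```python
import json

class DeserializationError(Exception):
    """Raised when a serialized object is not on the allowlist."""

# Hypothetical registry of revivable classes, for illustration.
SAFE_CTORS = {"Greeting": lambda kwargs: f"Greeting({kwargs})"}

def safe_loads(s, allowed_objects=()):
    data = json.loads(s)
    if isinstance(data, dict) and "lc" in data:
        obj_id = data.get("id")
        # Default-deny: with an empty allowlist, nothing is revived.
        if obj_id not in allowed_objects:
            raise DeserializationError(f"object {obj_id!r} not in allowlist")
        return SAFE_CTORS[obj_id](data.get("kwargs", {}))
    return data

payload = json.dumps({"lc": 1, "id": "Greeting", "kwargs": {}})

try:
    safe_loads(payload)  # restrictive default: rejected
except DeserializationError as e:
    print("blocked:", e)

# Only explicitly allowlisted classes may be deserialized.
print(safe_loads(payload, allowed_objects=("Greeting",)))
```

The same default-deny idea underlies the other hardening steps in the patch: Jinja2 templates are refused unless opted into, and secrets_from_env defaults to False so deserialization no longer pulls secrets from the environment automatically.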
Affected products:
langchain-core >= 1.0.0, < 1.2.5
langchain-core < 0.3.81
@langchain/core >= 1.0.0, < 1.1.8
@langchain/core < 0.3.80
langchain >= 1.0.0, < 1.2.3
langchain < 0.3.37
Related links:
https://reference.langchain.com/python/langchain_core/
https://pypi.org/project/langchain-core/1.2.5/
https://reference.langchain.com/python/langchain_core/load/
https://github.com/langchain-ai/langchain/security/advisories/GHSA-c67j-w6g6-q2cm
https://cyata.ai/blog/langgrinch-langchain-core-cve-2025-68664/
https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-r399-636x-v7f6
Related CVEs:
CVE-2025-68664
Related threat actors:
IOCs:
User-controlled dictionaries containing 'lc' keys, Malicious metadata fields, Malicious additional_kwargs fields, Malicious response_metadata fields
This article was created with the assistance of AI technology by Perceptive.
