This guide outlines the major changes between LangChain v1 and previous versions.

Simplified package

The langchain package namespace has been significantly reduced in v1 to focus on essential building blocks for agents. The streamlined package makes it easier to discover and use the core functionality.

Namespace

Module | What's available | Notes
langchain.agents | create_agent, AgentState | Core agent creation functionality
langchain.messages | Message types, content blocks, trim_messages | Re-exported from langchain-core
langchain.tools | @tool, BaseTool, injection helpers | Re-exported from langchain-core
langchain.chat_models | init_chat_model, BaseChatModel | Unified model initialization
langchain.embeddings | init_embeddings, Embeddings | Embedding models
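For example, the building blocks above all import directly from the top-level package:
from langchain.agents import create_agent, AgentState
from langchain.messages import HumanMessage, trim_messages
from langchain.tools import tool, BaseTool
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings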

langchain-classic

If you were using any of the following from the langchain package, you’ll need to install langchain-classic and update your imports:
  • Legacy chains (LLMChain, ConversationChain, etc.)
  • The indexing API
  • langchain-community re-exports
  • Other deprecated functionality
# For legacy chains
from langchain_classic.chains import LLMChain

# For indexing
from langchain_classic.indexes import ...
Install with:
pip install langchain-classic

Migrate to create_agent

Prior to v1.0, we recommended building agents with langgraph.prebuilt.create_react_agent. We now recommend langchain.agents.create_agent. The table below outlines what has changed from create_react_agent to create_agent:
Section | TL;DR: what's changed
Import path | Package moved from langgraph.prebuilt to langchain.agents
Prompts | Parameter renamed to system_prompt; dynamic prompts use middleware
Pre-model hook | Replaced by middleware with a before_model method
Post-model hook | Replaced by middleware with an after_model method
Custom state | TypedDict only; defined via state_schema or middleware
Model | Dynamic selection via middleware; pre-bound models not supported
Tools | Tool error handling moved to middleware with wrap_tool_call
Structured output | Prompted output removed; use ToolStrategy or ProviderStrategy
Streaming node name | Node name changed from "agent" to "model"
Runtime context | Dependency injection via the context argument instead of config["configurable"]
Namespace | Streamlined to agent building blocks; legacy code moved to langchain-classic

Import path

The import path for the agent prebuilt has changed from langgraph.prebuilt to langchain.agents. The name of the function has changed from create_react_agent to create_agent:
# Before
from langgraph.prebuilt import create_react_agent

# After
from langchain.agents import create_agent
For more information, see Agents.

Prompts

Static prompt rename

The prompt parameter has been renamed to system_prompt:
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant"
)

SystemMessage to string

If you were passing SystemMessage objects as the prompt, extract the string content:
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    # Before: system_prompt=SystemMessage(content="You are a helpful assistant")
    system_prompt="You are a helpful assistant"
)

Dynamic prompts

Dynamic prompts are a core context engineering pattern: they adapt what you tell the model based on the current conversation state. To build one, use the @dynamic_prompt decorator:
from dataclasses import dataclass

from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest

@dataclass
class Context:  
    user_role: str = "user"

@dynamic_prompt
def user_role_prompt(request: ModelRequest) -> str:
    user_role = request.runtime.context.user_role
    base_prompt = "You are a helpful assistant."

    if user_role == "expert":
        prompt = (
            f"{base_prompt} Provide detailed technical responses."
        )
    elif user_role == "beginner":
        prompt = (
            f"{base_prompt} Explain concepts simply and avoid jargon."
        )
    else:
        prompt = base_prompt

    return prompt  

agent = create_agent(
    model="openai:gpt-4o",
    tools=tools,
    middleware=[user_role_prompt],
    context_schema=Context
)

# Use with context
agent.invoke(
    {"messages": [{"role": "user", "content": "Explain async programming"}]},
    context=Context(user_role="expert")
)

Pre-model hook

Pre-model hooks are now implemented as middleware with the before_model method. This pattern is more extensible: you can stack multiple middleware to run before the model is called and reuse common patterns across agents. Common use cases include:
  • Summarizing conversation history
  • Trimming messages
  • Input guardrails, like PII redaction (see the sketch below)
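For instance, an input guardrail might look like the following sketch (the blocklist check is illustrative, not a built-in):
from typing import Any

from langchain.agents.middleware import AgentMiddleware, AgentState

class BlocklistMiddleware(AgentMiddleware):
    def before_model(self, state: AgentState, runtime) -> dict[str, Any] | None:
        # End the run before the model is called if the latest message is disallowed
        if "forbidden topic" in str(state["messages"][-1].content):
            return {"jump_to": "end"}
        return None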
For summarization, v1 ships a built-in middleware:
from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=tools,
    middleware=[
        SummarizationMiddleware(  
            model="anthropic:claude-sonnet-4-5-20250929",  
            max_tokens_before_summary=1000
        )  
    ]  
)

Post-model hook

Post-model hooks are now implemented as middleware with the after_model method. This pattern is more extensible: you can stack multiple middleware to run after the model is called and reuse common patterns across agents. Common use cases include output guardrails and human-in-the-loop approval of tool calls. v1 includes a built-in middleware for human-in-the-loop approval:
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[HumanInTheLoopMiddleware(
        interrupt_on={
            "send_email": {
                "allowed_decisions": ["approve", "edit", "reject"],
                "description": "Please review this email before sending",
            },
        },
    )]
)

Custom state

Custom state extends the default agent state with additional fields. You can define custom state in two ways:
  1. Via state_schema on create_agent - Best for state used in tools
  2. Via middleware - Best for state managed by specific middleware hooks and tools attached to said middleware
Defining custom state via middleware is preferred over state_schema on create_agent because it keeps state extensions conceptually scoped to the middleware and the tools attached to it. state_schema is still supported on create_agent for backwards compatibility.

Defining state via state_schema

Use the state_schema parameter when your custom state needs to be accessed by tools:
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent, AgentState  

# Define custom state extending AgentState
class CustomState(AgentState):
    user_name: str

@tool
def greet(
    runtime: ToolRuntime[CustomState]
) -> str:
    """Use this to greet the user by name."""
    user_name = runtime.state.get("user_name", "Unknown")  
    return f"Hello {user_name}!"

agent = create_agent(  
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[greet],
    state_schema=CustomState  
)

Defining state via middleware

Middleware can also define custom state by setting the state_schema attribute. This helps to keep state extensions conceptually scoped to the relevant middleware and tools.
from langchain.agents.middleware import AgentState, AgentMiddleware
from typing_extensions import NotRequired
from typing import Any

class CustomState(AgentState):
    model_call_count: NotRequired[int]

class CallCounterMiddleware(AgentMiddleware[CustomState]):
    state_schema = CustomState  

    def before_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        count = state.get("model_call_count", 0)
        if count > 10:
            return {"jump_to": "end"}
        return None

    def after_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        return {"model_call_count": state.get("model_call_count", 0) + 1}

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[...],
    middleware=[CallCounterMiddleware()]  
)
See the middleware documentation for more details on defining custom state via middleware.

State type restrictions

create_agent only supports TypedDict for state schemas. Pydantic models and dataclasses are no longer supported.
from langchain.agents import AgentState, create_agent

# AgentState is a TypedDict
class CustomAgentState(AgentState):  
    user_id: str

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=tools,
    state_schema=CustomAgentState  
)
Simply inherit from langchain.agents.AgentState instead of subclassing BaseModel or applying @dataclass. If you need validation, handle it in middleware hooks instead.

Model

Dynamic model selection lets you choose different models based on runtime context (e.g., task complexity, cost constraints, or user preferences). create_react_agent (as of langgraph-prebuilt v0.6) supported dynamic model and tool selection via a callable passed to the model parameter. In v1, this functionality has moved to the middleware interface.

Dynamic model selection

from langchain.agents import create_agent
from langchain.agents.middleware import (
    AgentMiddleware, ModelRequest, ModelRequestHandler
)
from langchain.messages import AIMessage
from langchain_openai import ChatOpenAI

basic_model = ChatOpenAI(model="gpt-5-nano")
advanced_model = ChatOpenAI(model="gpt-5")

class DynamicModelMiddleware(AgentMiddleware):
    def __init__(self, messages_threshold: int) -> None:
        self.messages_threshold = messages_threshold

    def wrap_model_call(self, request: ModelRequest, handler: ModelRequestHandler) -> AIMessage:
        # Agent state is a TypedDict, so access messages by key
        if len(request.state["messages"]) > self.messages_threshold:
            model = advanced_model
        else:
            model = basic_model

        return handler(request.replace(model=model))

agent = create_agent(
    model=basic_model,
    tools=tools,
    middleware=[DynamicModelMiddleware(messages_threshold=10)]
)

Pre-bound models

To better support structured output, create_agent no longer accepts pre-bound models with tools or configuration:
# No longer supported
model_with_tools = ChatOpenAI().bind_tools([some_tool])
agent = create_agent(model_with_tools, tools=[])

# Use instead
agent = create_agent("openai:gpt-4o-mini", tools=[some_tool])
Middleware performing dynamic model selection can still return pre-bound models, provided structured output is not used.

Tools

The tools argument to create_agent accepts a list of:
  • LangChain BaseTool instances (e.g., functions decorated with @tool)
  • Callable objects (functions) with proper type hints and a docstring (see the sketch below)
  • Dicts that represent built-in provider tools
The argument no longer accepts ToolNode instances.
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather, search_web]
)
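Plain callables qualify because their signature defines the tool schema; a sketch (check_weather here is an illustrative function):
def check_weather(city: str) -> str:
    """Return a short weather report for the given city."""
    # The type hints and docstring are used to build the tool schema
    return f"It's always sunny in {city}."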

Handling tool errors

You can now configure tool error handling with middleware that implements the wrap_tool_call method.
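A minimal sketch of the pattern (assuming the request object exposes the originating tool call dict and the handler executes the tool):
from langchain.agents.middleware import AgentMiddleware
from langchain.messages import ToolMessage

class ToolErrorMiddleware(AgentMiddleware):
    def wrap_tool_call(self, request, handler):
        try:
            return handler(request)
        except Exception as exc:
            # Report the failure to the model instead of raising
            return ToolMessage(
                content=f"Tool failed: {exc}",
                tool_call_id=request.tool_call["id"],
            )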

Structured output

Node changes

Structured output used to be generated in a separate node from the main agent loop. In v1, structured output is generated in the main loop, reducing cost and latency.

Tool and provider strategies

In v1, there are two new structured output strategies:
  • ToolStrategy uses artificial tool calling to generate structured output
  • ProviderStrategy uses provider-native structured output generation
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy, ProviderStrategy
from pydantic import BaseModel

class OutputSchema(BaseModel):
    summary: str
    sentiment: str

# Using ToolStrategy
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    # explicitly using tool strategy
    response_format=ToolStrategy(OutputSchema)  
)
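The imported ProviderStrategy is used the same way, provided the model provider supports native structured output:
# Using ProviderStrategy
agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=tools,
    response_format=ProviderStrategy(OutputSchema)
)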

Prompted output removed

Prompted output is no longer supported via the response_format argument. Compared to artificial tool calling and provider-native structured output, prompted output has not proven reliable.

Streaming node name rename

When streaming events from agents, the node name has changed from "agent" to "model" to better reflect the node’s purpose.
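Code that filters streamed updates by node name needs the new key; a minimal sketch (assuming stream_mode="updates"):
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Hello"}]},
    stream_mode="updates",
):
    if "model" in chunk:  # was "agent" before v1
        print(chunk["model"]["messages"][-1].text)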

Runtime context

When you invoke an agent, it’s often the case that you want to pass two types of data:
  • Dynamic state that changes throughout the conversation (e.g., message history)
  • Static context that doesn’t change during the conversation (e.g., user metadata)
In v1, static context is supported by passing the context parameter to invoke and stream.
from dataclasses import dataclass

from langchain.agents import create_agent

@dataclass
class Context:
    user_id: str
    session_id: str

agent = create_agent(
    model=model,
    tools=tools,
    context_schema=Context
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Hello"}]},
    context=Context(user_id="123", session_id="abc")  
)
The old config["configurable"] pattern still works for backward compatibility, but the context parameter is recommended for new applications and for those migrating to v1.
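Tools can read the same static context through ToolRuntime; a sketch (the whoami tool is hypothetical):
from langchain.tools import tool, ToolRuntime

@tool
def whoami(runtime: ToolRuntime) -> str:
    """Identify the current user."""
    # runtime.context carries the static context passed to invoke
    return f"user_id={runtime.context.user_id}"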

Standard content

In v1, messages gain provider-agnostic standard content blocks. Access them via the message.content_blocks property for a consistent, typed view across providers. The existing message.content field remains unchanged for strings or provider-native structures.

What changed

  • New content_blocks property on messages for normalized content
  • Standardized block shapes, documented in Messages
  • Optional serialization of standard blocks into content via LC_OUTPUT_VERSION=v1 or output_version="v1"

Read standardized content

from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-5-nano")
response = model.invoke("Explain AI")

for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(block.get("reasoning"))
    elif block["type"] == "text":
        print(block.get("text"))

Create multimodal messages

from langchain.messages import HumanMessage

message = HumanMessage(content_blocks=[
    {"type": "text", "text": "Describe this image."},
    {"type": "image", "url": "https://example.com/image.jpg"},
])
res = model.invoke([message])

Example block shapes

# Text block
text_block = {
    "type": "text",
    "text": "Hello world",
}

# Image block
image_block = {
    "type": "image",
    "url": "https://example.com/image.png",
    "mime_type": "image/png",
}
See the content blocks reference for more details.

Serialize standard content

Standard content blocks are not serialized into the content attribute by default. If you need to access standard content blocks in the content attribute (e.g., when sending messages to a client), you can opt-in to serializing them into content.
export LC_OUTPUT_VERSION=v1
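The same opt-in is available per model via the output_version flag mentioned above (assuming the chat model integration accepts it):
from langchain.chat_models import init_chat_model

# Standard blocks will be serialized into message.content
model = init_chat_model("openai:gpt-5-nano", output_version="v1")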

Breaking changes

Dropped Python 3.9 support

All LangChain packages now require Python 3.10 or higher. Python 3.9 reaches end of life in October 2025.

Updated return type for chat models

The return type signature for chat model invocation has been fixed from BaseMessage to AIMessage. Custom chat models implementing bind_tools should update their return signature:
def bind_tools(
    self,
    ...
) -> Runnable[LanguageModelInput, AIMessage]:
    ...

Default message format for OpenAI Responses API

When interacting with the Responses API, langchain-openai now defaults to storing response items in message content. To restore previous behavior, set the LC_OUTPUT_VERSION environment variable to v0, or specify output_version="v0" when instantiating ChatOpenAI.
# Enforce previous behavior with output_version flag
model = ChatOpenAI(model="gpt-4o-mini", output_version="v0")

Default max_tokens in langchain-anthropic

The max_tokens parameter in langchain-anthropic now defaults to higher values based on the model chosen, rather than the previous default of 1024. If you relied on the old default, explicitly set max_tokens=1024.
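To pin the old behavior explicitly:
from langchain_anthropic import ChatAnthropic

# Restore the pre-v1 default
model = ChatAnthropic(model="claude-sonnet-4-5-20250929", max_tokens=1024)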

Legacy code moved to langchain-classic

Existing functionality outside the focus of standard interfaces and agents has been moved to the langchain-classic package. See the Simplified package section above for details on what remains in the core langchain package and what moved to langchain-classic.

Removal of deprecated APIs

Methods, functions, and other objects that were already deprecated and slated for removal in 1.0 have been deleted. Check the deprecation notices from previous versions for replacement APIs.

.text() is now a property

Use of the .text() method on message objects should drop the parentheses:
# Property access
text = response.text

# Deprecated method call
text = response.text()
Existing usage (i.e., .text()) continues to work for now but emits a deprecation warning.