Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
LangChain v1 is a focused, production-ready foundation for building agents. The framework has been streamlined around a few core improvements, described in the sections below. To upgrade:
pip install --pre -U langchain
For a complete list of changes, see the migration guide.

create_agent

create_agent is the standard way to build agents in LangChain 1.0. It provides a simpler interface than langgraph.prebuilt.create_react_agent while offering greater customizability via middleware.
from langchain.agents import create_agent

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[search_web, analyze_data, send_email],
    system_prompt="You are a helpful research assistant."
)

result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Research AI safety trends"}
    ]
})
Under the hood, create_agent is built on the basic agent loop: call the model, let it choose tools to execute, run those tools, and finish when the model stops calling tools:
Core agent loop diagram
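The loop can be sketched in plain Python. This is an illustrative simulation of the pattern, not LangChain's internals: `fake_model`, `search_web`, and the `TOOLS` registry are stand-ins for a real chat model and its bound tools.

```python
# Illustrative sketch of the core agent loop (not LangChain internals).
# The "model" returns either tool calls or a final answer; the loop runs
# requested tools and feeds results back until no more tools are called.

def search_web(query: str) -> str:
    return f"results for {query!r}"

TOOLS = {"search_web": search_web}

def fake_model(messages: list[dict]) -> dict:
    # Call a tool once, then answer using the tool result.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "tool_calls": [
            {"name": "search_web", "args": {"query": "AI safety trends"}}
        ]}
    return {"role": "ai", "content": "Here is a summary of AI safety trends."}

def run_agent_loop(messages: list[dict]) -> list[dict]:
    while True:
        reply = fake_model(messages)
        messages.append(reply)
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:          # model called no tools -> agent is done
            return messages
        for call in tool_calls:     # execute each requested tool
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})

history = run_agent_loop([{"role": "user", "content": "Research AI safety trends"}])
print(history[-1]["content"])
```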
For more information, see Agents.

Middleware

Middleware is the defining feature of create_agent. It offers a highly customizable entry-point, raising the ceiling for what you can build. Great agents require context engineering: getting the right information to the model at the right time. Middleware helps you control dynamic prompts, conversation summarization, selective tool access, state management, and guardrails through a composable abstraction.

Prebuilt middleware

LangChain provides a few prebuilt middlewares for common patterns, including:
from langchain.agents import create_agent
from langchain.agents.middleware import (
    PIIMiddleware,
    SummarizationMiddleware,
    HumanInTheLoopMiddleware
)

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[read_email, send_email],
    middleware=[
        PIIMiddleware(patterns=["email", "phone", "ssn"]),
        SummarizationMiddleware(
            model="anthropic:claude-sonnet-4-5-20250929",
            max_tokens_before_summary=500
        ),
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {
                    "allowed_decisions": ["approve", "edit", "reject"]
                }
            }
        ),
    ]
)

Custom middleware

You can also build custom middleware to fit your needs. Middleware exposes hooks at each step in an agent’s execution:
Middleware flow diagram
Build custom middleware by implementing any of these hooks on a subclass of the AgentMiddleware class:
| Hook | When it runs | Use cases |
| --- | --- | --- |
| before_agent | Before calling the agent | Load memory, validate input |
| before_model | Before each LLM call | Update prompts, trim messages |
| wrap_model_call | Around each LLM call | Intercept and modify requests/responses |
| wrap_tool_call | Around each tool call | Intercept and modify tool execution |
| after_model | After each LLM response | Validate output, apply guardrails |
| after_agent | After agent completes | Save results, cleanup |
Example custom middleware:
from dataclasses import dataclass

from langchain.agents.middleware import (
    AgentMiddleware,
    ModelRequest,
    ModelRequestHandler
)
from langchain.messages import AIMessage

@dataclass
class Context:
    user_expertise: str = "beginner"

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: ModelRequestHandler
    ) -> AIMessage:
        user_level = request.runtime.context.user_expertise

        if user_level == "expert":
            # More powerful model
            model = "openai:gpt-5"
            tools = [advanced_search, data_analysis]
        else:
            # Less powerful model
            model = "openai:gpt-5-nano"
            tools = [simple_search, basic_calculator]

        return handler(
            request.replace(model=model, tools=tools)
        )

agent = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[
        simple_search,
        advanced_search,
        basic_calculator,
        data_analysis
    ],
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)
For more information, see the complete middleware guide.

Built on LangGraph

Because create_agent is built on LangGraph, you automatically get built-in support for long-running, reliable agents via:

Persistence

Conversations automatically persist across sessions with built-in checkpointing

Streaming

Stream tokens, tool calls, and reasoning traces in real-time

Human-in-the-loop

Pause agent execution for human approval before sensitive actions

Time travel

Rewind conversations to any point and explore alternate paths and prompts
You don’t need to learn LangGraph to use these features; they work out of the box.
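The persistence idea can be illustrated with a small simulation. The `MemoryCheckpointer` class below is a hypothetical stand-in for LangGraph's checkpointing, not its actual API: each thread_id maps to saved conversation state, so invoking again with the same thread_id resumes the session.

```python
# Illustrative sketch of thread-scoped checkpointing (not LangGraph's API).
# State is saved per thread_id, so a conversation can be resumed later by
# invoking with the same thread_id.

class MemoryCheckpointer:
    def __init__(self):
        self._store: dict[str, list[dict]] = {}

    def load(self, thread_id: str) -> list[dict]:
        return list(self._store.get(thread_id, []))

    def save(self, thread_id: str, messages: list[dict]) -> None:
        self._store[thread_id] = list(messages)

def invoke(checkpointer: MemoryCheckpointer, thread_id: str, user_text: str) -> list[dict]:
    messages = checkpointer.load(thread_id)        # resume any prior state
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "ai", "content": f"echo: {user_text}"})
    checkpointer.save(thread_id, messages)         # persist for the next turn
    return messages

cp = MemoryCheckpointer()
invoke(cp, "thread-1", "hello")
history = invoke(cp, "thread-1", "are you still there?")
print(len(history))  # 4: both turns persisted under the same thread_id
```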

Structured output

create_agent has improved structured output generation:
  • Main loop integration: Structured output is now generated in the main loop instead of requiring an additional LLM call
  • Structured output strategy: Models can choose between calling tools or using provider-side structured output generation
  • Cost reduction: Eliminates extra expense from additional LLM calls
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel

class Weather(BaseModel):
    temperature: float
    condition: str

def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"

agent = create_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(Weather)
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in SF?"}]
})

print(repr(result["structured_response"]))
# results in `Weather(temperature=70.0, condition='sunny')`
Error handling: Control error handling via the handle_errors parameter to ToolStrategy:
  • Parsing errors: Model generates data that doesn’t match desired structure
  • Multiple tool calls: Model generates 2+ tool calls for structured output schemas
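The parsing-error case can be sketched as a retry loop. This is a self-contained simulation of the idea, not ToolStrategy's implementation; `EXPECTED`, `flaky_model`, and `generate_structured` are illustrative names.

```python
# Illustrative sketch of retry-on-parse-error for structured output
# (a simulation of the idea, not ToolStrategy's implementation).

EXPECTED = {"temperature": float, "condition": str}

def validate(payload: dict) -> dict:
    for key, typ in EXPECTED.items():
        if key not in payload or not isinstance(payload[key], typ):
            raise ValueError(f"field {key!r} missing or not {typ.__name__}")
    return payload

def generate_structured(model, max_retries: int = 2) -> dict:
    last_error = None
    for attempt in range(max_retries + 1):
        raw = model(attempt)                # ask the model (stubbed below)
        try:
            return validate(raw)            # parsed successfully
        except ValueError as err:
            last_error = err                # record the error and retry
    raise RuntimeError(f"giving up after retries: {last_error}")

def flaky_model(attempt: int) -> dict:
    # First attempt returns malformed data; the retry fixes it.
    if attempt == 0:
        return {"temperature": "seventy"}   # wrong type, missing field
    return {"temperature": 70.0, "condition": "sunny"}

print(generate_structured(flaky_model))
# {'temperature': 70.0, 'condition': 'sunny'}
```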

Standard content blocks

Content block support is currently available only for select integrations; broader support for content blocks will be rolled out gradually across more providers.
The new content_blocks property introduces a standard representation for message content that works across providers:
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-sonnet-4-5-20250929")
response = model.invoke("What's the capital of France?")

# Unified access to content blocks
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print(f"Model reasoning: {block['reasoning']}")
    elif block["type"] == "text":
        print(f"Response: {block['text']}")
    elif block["type"] == "tool_call":
        print(f"Tool call: {block['name']}({block['args']})")

Benefits

  • Provider agnostic: Access reasoning traces, citations, built-in tools (web search, code interpreters, etc.), and other features using the same API regardless of provider
  • Type safe: Full type hints for all content block types
  • Backward compatible: Standard content can be loaded lazily, so there are no associated breaking changes
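The provider-agnostic idea boils down to normalizing each provider's raw message parts into one standard block shape. The sketch below illustrates that translation step with hypothetical raw payloads; it is not how any real provider or LangChain adapter names its fields.

```python
# Illustrative sketch of normalizing provider-specific content into standard
# content blocks (hypothetical raw payloads, not a real provider API).

def to_content_blocks(raw_parts: list[dict]) -> list[dict]:
    blocks = []
    for part in raw_parts:
        if part.get("kind") == "thinking":        # provider-specific name
            blocks.append({"type": "reasoning", "reasoning": part["text"]})
        elif part.get("kind") == "output_text":
            blocks.append({"type": "text", "text": part["text"]})
        elif part.get("kind") == "function_call":
            blocks.append({"type": "tool_call",
                           "name": part["name"], "args": part["arguments"]})
    return blocks

raw = [
    {"kind": "thinking", "text": "User asks about France."},
    {"kind": "output_text", "text": "The capital of France is Paris."},
]
for block in to_content_blocks(raw):
    print(block["type"])
```

Consumers then branch on `block["type"]` exactly as in the example above, regardless of which provider produced the message.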
For more information, see our guide on content blocks.

Simplified package

LangChain v1 streamlines the langchain package namespace to focus on essential building blocks for agents. The refined namespace exposes the most useful and relevant functionality:

Namespace

| Module | What's available | Notes |
| --- | --- | --- |
| langchain.agents | create_agent, AgentState | Core agent creation functionality |
| langchain.messages | Message types, content blocks, trim_messages | Re-exported from langchain-core |
| langchain.tools | @tool, BaseTool, injection helpers | Re-exported from langchain-core |
| langchain.chat_models | init_chat_model, BaseChatModel | Unified model initialization |
| langchain.embeddings | Embeddings, init_embeddings | Embedding models |
Most of these are re-exported from langchain-core for convenience, which gives you a focused API surface for building agents.
# Agent building
from langchain.agents import create_agent

# Messages and content
from langchain.messages import AIMessage, HumanMessage

# Tools
from langchain.tools import tool

# Model initialization
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings

langchain-classic

Legacy functionality has moved to langchain-classic to keep the core packages lean and focused.

What’s in langchain-classic

  • Legacy chains and chain implementations
  • The indexing API
  • langchain-community exports
  • Other deprecated functionality
If you use any of this functionality, install langchain-classic:
pip install langchain-classic
Then update your imports:
# Before:
from langchain import ...
from langchain.chains import ...

# After:
from langchain_classic import ...
from langchain_classic.chains import ...

Migration guide

See our migration guide for help updating your code to LangChain v1.

Reporting issues

Please report any issues discovered with 1.0 on GitHub using the 'v1' label.
