**Azure OpenAI vs OpenAI:** Azure OpenAI refers to OpenAI models hosted on the Microsoft Azure platform. OpenAI also provides its own model APIs. To access OpenAI services directly, use the `ChatOpenAI` integration.

**API Reference:** For detailed documentation of all features and configuration options, head to the `AzureChatOpenAI` API reference.

`AzureChatOpenAI` shares the same underlying base implementation as `ChatOpenAI`, which interfaces with OpenAI services directly.

This page serves as a quickstart for authenticating and connecting your Azure OpenAI service to a LangChain chat model. Visit the `ChatOpenAI` docs for details on available features, or head to the `AzureChatOpenAI` API reference.

## Overview
### Integration details
| Class | Package | Local | Serializable | JS/TS support |
| --- | --- | --- | --- | --- |
| `AzureChatOpenAI` | `langchain-openai` | ❌ | beta | ✅ (npm) |
### Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
## Setup

To access `AzureChatOpenAI` models you'll need to create an Azure account, create a deployment of an Azure OpenAI model, get the name and endpoint for your deployment, get an Azure OpenAI API key, and install the `langchain-openai` integration package.
### Installation
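The integration lives in the `langchain-openai` package:

```shell
# Install (or upgrade) the LangChain OpenAI integration package,
# which provides AzureChatOpenAI
pip install -U langchain-openai
```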
### Credentials

Head to the Azure docs to create your deployment and generate an API key. Once you've done this, set the `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` environment variables:
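One way to set the variables from Python — the key and endpoint values below are placeholders you should replace with the values from your Azure OpenAI resource:

```python
import os

# Placeholder credentials for illustration only; copy the real values
# from your Azure OpenAI resource in the Azure portal.
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<your-api-key>")
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com/")
```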
## Instantiation

Now we can instantiate our model object and generate chat completions:

- Replace `azure_deployment` with the name of your deployment.
- You can find the latest supported `api_version` values at learn.microsoft.com/en-us/azure/ai-services/openai/reference.
## Invocation
## Specifying model version

Azure OpenAI responses contain a `model_name` response-metadata property, which holds the name of the model used to generate the response. Unlike native OpenAI responses, however, it does not contain the specific version of the model, which is set on the deployment in Azure; e.g. it does not distinguish between `gpt-35-turbo-0125` and `gpt-35-turbo-0301`. This makes it tricky to know which version of the model generated a response, which in turn can lead to, e.g., incorrect total-cost calculation with `OpenAICallbackHandler`.

To solve this problem, you can pass the `model_version` parameter to the `AzureChatOpenAI` class; it will be appended to the model name in the LLM output, so you can easily distinguish between different versions of the model.
## API reference

For detailed documentation of all features and configuration options, head to the `AzureChatOpenAI` API reference.