OpenAI

Install

To use OpenAI models or OpenAI-compatible APIs, you need to either install pydantic-ai or install pydantic-ai-slim with the openai optional group:

pip install "pydantic-ai-slim[openai]"

or, if you use uv:

uv add "pydantic-ai-slim[openai]"

Configuration

To use OpenAIModel with the OpenAI API, go to platform.openai.com and generate an API key.

Environment variable

Once you have the API key, you can set it as an environment variable:

export OPENAI_API_KEY='your-api-key'

You can then use OpenAIModel by name:

from pydantic_ai import Agent

agent = Agent('openai:gpt-4o')
...

Or initialise the model directly with just the model name:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel('gpt-4o')
agent = Agent(model)
...

By default, the OpenAIModel uses the OpenAIProvider with the base_url set to https://api.openai.com/v1.

Configure the provider

If you want to pass parameters to the provider in code, you can instantiate the OpenAIProvider yourself and pass it to the model:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel('gpt-4o', provider=OpenAIProvider(api_key='your-api-key'))
agent = Agent(model)
...

Custom OpenAI Client

OpenAIProvider also accepts a custom AsyncOpenAI client via the openai_client parameter, so you can customise the organization, project, base_url, etc., as defined in the OpenAI API docs.
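
For example, here is a minimal sketch passing organization and project settings through a custom AsyncOpenAI client (the IDs below are placeholders you'd replace with your own):

from openai import AsyncOpenAI

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

client = AsyncOpenAI(
    organization='your-organization-id',  # placeholder
    project='your-project-id',  # placeholder
    api_key='your-api-key',
)
model = OpenAIModel('gpt-4o', provider=OpenAIProvider(openai_client=client))
agent = Agent(model)
...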

You can also use the AsyncAzureOpenAI client to access the Azure OpenAI API:

from openai import AsyncAzureOpenAI

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

client = AsyncAzureOpenAI(
    azure_endpoint='...',
    api_version='2024-07-01-preview',
    api_key='your-api-key',
)

model = OpenAIModel(
    'gpt-4o',
    provider=OpenAIProvider(openai_client=client),
)
agent = Agent(model)
...

OpenAI Responses API

PydanticAI also supports OpenAI's Responses API through the OpenAIResponsesModel class.

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel

model = OpenAIResponsesModel('gpt-4o')
agent = Agent(model)
...

The Responses API has built-in tools that you can use instead of building your own:

  • Web search: allow models to search the web for the latest information before generating a response.
  • File search: allow models to search your files for relevant information before generating a response.
  • Computer use: allow models to use a computer to perform tasks on your behalf.

You can use the OpenAIResponsesModelSettings class to make use of those built-in tools:

from openai.types.responses import WebSearchToolParam  # (1)!

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings

model_settings = OpenAIResponsesModelSettings(
    openai_builtin_tools=[WebSearchToolParam(type='web_search_preview')],
)
model = OpenAIResponsesModel('gpt-4o')
agent = Agent(model=model, model_settings=model_settings)

result = agent.run_sync('What is the weather in Tokyo?')
print(result.output)
"""
As of 7:48 AM on Wednesday, April 2, 2025, in Tokyo, Japan, the weather is cloudy with a temperature of 53°F (12°C).
"""
  1. The file search tool and computer use tool can also be imported from openai.types.responses.
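
As a sketch, the file search tool is enabled the same way; this assumes a vector store already exists, and the ID below is a placeholder:

from openai.types.responses import FileSearchToolParam

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings

model_settings = OpenAIResponsesModelSettings(
    openai_builtin_tools=[
        FileSearchToolParam(
            type='file_search',
            vector_store_ids=['your-vector-store-id'],  # placeholder
        )
    ],
)
model = OpenAIResponsesModel('gpt-4o')
agent = Agent(model=model, model_settings=model_settings)
...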

You can learn more about the differences between the Responses API and Chat Completions API in the OpenAI API docs.

OpenAI-compatible Models

Many providers and models are compatible with the OpenAI API, and can be used with OpenAIModel in PydanticAI. Before getting started, check the installation and configuration instructions above.

To use another OpenAI-compatible API, you can make use of the base_url and api_key arguments from OpenAIProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    'model_name',
    provider=OpenAIProvider(
        base_url='https://<openai-compatible-api-endpoint>.com', api_key='your-api-key'
    ),
)
agent = Agent(model)
...

Various providers also have their own provider classes, so you don't need to specify the base URL yourself, and you can use the standard <PROVIDER>_API_KEY environment variable to set the API key. When a provider has its own provider class, you can use the Agent("<provider>:<model>") shorthand, e.g. Agent("deepseek:deepseek-chat") or Agent("openrouter:google/gemini-2.5-pro-preview"), instead of building the OpenAIModel explicitly. Similarly, you can pass the provider name as a string to the provider argument on OpenAIModel instead of instantiating the provider class explicitly.
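
As a minimal sketch (assuming the DEEPSEEK_API_KEY environment variable is set), both forms are equivalent:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

# Shorthand: the provider is inferred from the model string
agent = Agent('deepseek:deepseek-chat')

# Equivalent: pass the provider name as a string to OpenAIModel
model = OpenAIModel('deepseek-chat', provider='deepseek')
agent = Agent(model)
...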

Model Profile

Sometimes, the provider or model you're using will have slightly different requirements than OpenAI's API or models, such as different restrictions on JSON schemas for tool definitions, or no support for marking tool definitions as strict.

When using an alternative provider class provided by PydanticAI, an appropriate model profile is typically selected automatically based on the model name. If the model you're using is not working correctly out of the box, you can tweak various aspects of how model requests are constructed by providing your own ModelProfile (for behaviors shared among all model classes) or OpenAIModelProfile (for behaviors specific to OpenAIModel):

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.profiles._json_schema import InlineDefsJsonSchemaTransformer
from pydantic_ai.profiles.openai import OpenAIModelProfile
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    'model_name',
    provider=OpenAIProvider(
        base_url='https://<openai-compatible-api-endpoint>.com', api_key='your-api-key'
    ),
    profile=OpenAIModelProfile(
        json_schema_transformer=InlineDefsJsonSchemaTransformer,  # Supported by any model class on a plain ModelProfile
        openai_supports_strict_tool_definition=False  # Supported by OpenAIModel only, requires OpenAIModelProfile
    )
)
agent = Agent(model)

DeepSeek

To use the DeepSeek provider, first create an API key by following the Quick Start guide. Once you have the API key, you can use it with the DeepSeekProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.deepseek import DeepSeekProvider

model = OpenAIModel(
    'deepseek-chat',
    provider=DeepSeekProvider(api_key='your-deepseek-api-key'),
)
agent = Agent(model)
...

You can also customize any provider with a custom http_client:

from httpx import AsyncClient

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.deepseek import DeepSeekProvider

custom_http_client = AsyncClient(timeout=30)
model = OpenAIModel(
    'deepseek-chat',
    provider=DeepSeekProvider(
        api_key='your-deepseek-api-key', http_client=custom_http_client
    ),
)
agent = Agent(model)
...

Ollama

To use Ollama, you must first download the Ollama client, and then download a model using the Ollama model library.

You must also ensure the Ollama server is running when trying to make requests to it. For more information, please see the Ollama documentation.

Example local usage

With ollama installed, you can run the server with the model you want to use:

ollama run llama3.2

(this will pull the llama3.2 model if you don't already have it downloaded)

Then run your code; here's a minimal example:

from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider


class CityLocation(BaseModel):
    city: str
    country: str


ollama_model = OpenAIModel(
    model_name='llama3.2', provider=OpenAIProvider(base_url='http://localhost:11434/v1')
)
agent = Agent(ollama_model, output_type=CityLocation)

result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65)

Example using a remote server

from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

ollama_model = OpenAIModel(
    model_name='qwen2.5-coder:7b',  # (1)!
    provider=OpenAIProvider(base_url='http://192.168.1.74:11434/v1'),  # (2)!
)


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent(model=ollama_model, output_type=CityLocation)

result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65)
  1. The name of the model running on the remote server.
  2. The URL of the remote server.

Azure AI Foundry

To use Azure AI Foundry as your provider, use the AzureProvider class:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.azure import AzureProvider

model = OpenAIModel(
    'gpt-4o',
    provider=AzureProvider(
        azure_endpoint='your-azure-endpoint',
        api_version='your-api-version',
        api_key='your-api-key',
    ),
)
agent = Agent(model)
...

OpenRouter

To use OpenRouter, first create an API key at openrouter.ai/keys.

Once you have the API key, you can use it with the OpenRouterProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openrouter import OpenRouterProvider

model = OpenAIModel(
    'anthropic/claude-3.5-sonnet',
    provider=OpenRouterProvider(api_key='your-openrouter-api-key'),
)
agent = Agent(model)
...

Grok (xAI)

Go to xAI API Console and create an API key. Once you have the API key, you can use it with the GrokProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.grok import GrokProvider

model = OpenAIModel(
    'grok-2-1212',
    provider=GrokProvider(api_key='your-xai-api-key'),
)
agent = Agent(model)
...

Perplexity

Follow the Perplexity getting started guide to create an API key. Then, you can query the Perplexity API with the following:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    'sonar-pro',
    provider=OpenAIProvider(
        base_url='https://api.perplexity.ai',
        api_key='your-perplexity-api-key',
    ),
)
agent = Agent(model)
...

Fireworks AI

Go to Fireworks.AI and create an API key in your account settings. Once you have the API key, you can use it with the FireworksProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.fireworks import FireworksProvider

model = OpenAIModel(
    'accounts/fireworks/models/qwq-32b',  # model library available at https://fireworks.ai/models
    provider=FireworksProvider(api_key='your-fireworks-api-key'),
)
agent = Agent(model)
...

Together AI

Go to Together.ai and create an API key in your account settings. Once you have the API key, you can use it with the TogetherProvider:

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.together import TogetherProvider

model = OpenAIModel(
    'meta-llama/Llama-3.3-70B-Instruct-Turbo-Free',  # model library available at https://www.together.ai/models
    provider=TogetherProvider(api_key='your-together-api-key'),
)
agent = Agent(model)
...