# LangChain

Use LangChain with Codzen to access multiple AI models through a unified interface with provider failover, budget controls, and usage visibility.
## Why Use Codzen with LangChain?
LangChain provides a standard interface for working with chat models. By connecting LangChain to Codzen, you get:
- Model Flexibility: Access to multiple AI providers and models through a single API
- Provider Failover: Automatic fallback to alternative providers when one is unavailable
- Budget Controls: Set spending limits and monitor usage across your organization
- Usage Visibility: Track all API calls and costs in your Codzen dashboard
## Quick Start
This guide will get you running LangChain with Codzen in just a few minutes.
### Prerequisites
- A Codzen API key
- LangChain installed in your project
### TypeScript

Install the required packages:

```shell
npm install @langchain/openai @langchain/core
```

Configure LangChain to use Codzen:
```typescript
import { HumanMessage } from '@langchain/core/messages';
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  configuration: {
    baseURL: 'https://codzen.ai/v1',
    apiKey: '<CODZEN_API_KEY>',
  },
  model: 'gpt-5-mini',
});

// Example usage
const response = await model.invoke([
  new HumanMessage('What is the meaning of life?'),
]);
console.log('Response:', response.content);
```

### Python
Install the required packages:

```shell
pip install pydantic langchain-openai
```

Configure LangChain to use Codzen:
```python
from langchain_openai import ChatOpenAI
from pydantic import SecretStr

model = ChatOpenAI(
    base_url="https://codzen.ai/v1",
    api_key=SecretStr("<CODZEN_API_KEY>"),
    model="gpt-5-mini",
)

# Example usage
response = model.invoke("What is the meaning of life?")
print(response.content)
```

## Configuration Options
| Parameter | Description |
|---|---|
| `model` | The model identifier (e.g., `claude-3.5-sonnet`, `gpt-4o`) |
| `apiKey` / `api_key` | Your Codzen API key |
| `baseURL` / `base_url` | Set to `https://codzen.ai/v1` |
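The parameters above can be assembled once and reused, which also keeps the API key out of source control. A minimal sketch in Python: `codzen_chat_kwargs` is a hypothetical helper (not part of Codzen or LangChain), and `CODZEN_API_KEY` is an assumed environment variable name.

```python
import os

def codzen_chat_kwargs(model: str = "gpt-5-mini") -> dict:
    """Hypothetical helper: build ChatOpenAI keyword arguments for Codzen.

    Reads the key from the CODZEN_API_KEY environment variable (an assumed
    name) so it never appears in your code or version control.
    """
    return {
        "base_url": "https://codzen.ai/v1",
        "api_key": os.environ["CODZEN_API_KEY"],
        "model": model,
    }

# Usage (requires langchain-openai):
# model = ChatOpenAI(**codzen_chat_kwargs())
```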
## Available Models
Codzen provides access to models from multiple providers. Visit the Models page to see all available models and their capabilities.
## Troubleshooting
- Model Not Found: Check that the model identifier matches one listed on the Models page.
- Rate Limits: If you hit rate limits, add retry logic with exponential backoff, or contact support to raise your limits.
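The retry suggestion above can be sketched as a small wrapper with exponential backoff and jitter. This is a generic illustration, not a Codzen API: `call` stands in for something like `lambda: model.invoke(...)`, and real code should catch the specific rate-limit exception raised by your client library rather than bare `Exception`.

```python
import random
import time

def invoke_with_retry(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Call `call()` and retry on failure with exponential backoff.

    Waits base_delay * 2**attempt seconds (plus a little random jitter)
    between attempts, and re-raises the last error once max_attempts
    is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Backoff schedule with base_delay=1.0: ~1s, ~2s, ~4s, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage (assuming `model` from the quick start):
# response = invoke_with_retry(lambda: model.invoke("Hello"))
```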