Universal LLM Client
One API. Every Provider.
Transparent failover, structured output, streaming, tool execution, and observability — across OpenAI, Google Gemini, Ollama, and any OpenAI-compatible service.
Works With
One interface across local inference, cloud APIs, and gateway services. Add a provider, set a priority — failover is automatic.
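The priority-based failover described above can be sketched in TypeScript. This is a minimal illustration of the pattern, not the library's actual API: the `Provider` shape, `priority` field, and `completeWithFailover` helper are assumptions made for the example.

```typescript
// Hypothetical provider shape for illustration only (not the library's API).
type Provider = {
  name: string;
  priority: number; // lower number = tried first
  complete: (prompt: string) => Promise<string>;
};

// Try providers in priority order; on failure, fall through to the next.
async function completeWithFailover(
  providers: Provider[],
  prompt: string
): Promise<string> {
  const ordered = [...providers].sort((a, b) => a.priority - b.priority);
  let lastError: unknown;
  for (const p of ordered) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      lastError = err; // remember the failure, move on to the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}

// Usage: a rate-limited primary falls back to a healthy secondary.
const providers: Provider[] = [
  {
    name: "openai",
    priority: 1,
    complete: async () => {
      throw new Error("rate limited");
    },
  },
  {
    name: "ollama",
    priority: 2,
    complete: async (prompt) => `echo: ${prompt}`,
  },
];

completeWithFailover(providers, "hello").then((out) => {
  console.log(out); // → "echo: hello"
});
```

The key design point is that callers never see the failure of an individual provider: the first success in priority order is returned, and an error surfaces only when every provider has been exhausted.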
Core Features
Everything you need to ship production AI without stitching together 5 different libraries.
See It In Action
Real TypeScript. Real patterns. Copy, paste, ship.
Architecture
Clean layers, zero dependencies, designed as a transport layer for agent frameworks.
bun add universal-llm-client