OpenAI

The OpenAI provider lives in crates/lime-provider-openai/ and uses the Responses API with optional WebSocket streaming.

Authentication

```sh
lime login --with-api-key   # OpenAI is the default provider
```

Or via environment:

```sh
export OPENAI_API_KEY="sk-…"
export OPENAI_BASE_URL="https://api.openai.com/v1"   # optional override
```

lime login writes credentials under ~/.lime/; stored values override the env vars when present.
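
The precedence rule above (stored credentials win over environment variables) can be sketched as follows. The function name and signature are illustrative, not the crate's actual API:

```rust
// Illustrative sketch of the credential-precedence rule: a value stored
// by `lime login` under ~/.lime/ wins over the OPENAI_API_KEY env var.
fn resolve_api_key(stored: Option<&str>, env: Option<&str>) -> Option<String> {
    stored.or(env).map(str::to_string)
}

fn main() {
    // Stored credential present: it wins even when the env var is set.
    let key = resolve_api_key(Some("sk-stored"), Some("sk-env"));
    println!("{:?}", key);
}
```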

Built-in catalog

| Slug | Display | Context | Max output | Capabilities |
| --- | --- | --- | --- | --- |
| gpt-5.5 | GPT-5.5 | 1M | 90 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| gpt-5.4 | GPT-5.4 (default) | 1M | 90 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| gpt-5.4-pro | GPT-5.4 Pro | 1M | 90 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| gpt-5.4-mini | GPT-5.4 Mini | 400 K | 64 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| gpt-5.4-nano | GPT-5.4 Nano | 400 K | 32 K | Vision, parallel tools, param_preset: verbosity_only |
| o3 | o3 | 200 K | 100 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| o4-mini | o4-mini | 200 K | 100 K | Vision, parallel tools, reasoning summaries, param_preset: reasoning_and_verbosity |
| o1 | o1 | 200 K | 100 K | Vision, no parallel tools, reasoning summaries, param_preset: reasoning_only |
| gpt-4o | GPT-4o | 128 K | 16.4 K | Vision, parallel tools, param_preset: verbosity_only |

Aliases

The provider accepts a small set of friendly aliases that resolve to the canonical slugs above:

| Alias | Resolves to |
| --- | --- |
| gpt4 | gpt-5.4 |
| mini | gpt-5.4-mini |
| o3-mini | o3 |
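
Alias resolution amounts to a small lookup that falls through to the canonical slug. A hypothetical sketch (the real lookup lives in crates/lime-provider-openai/ and may differ):

```rust
// Hypothetical alias table; unknown names pass through unchanged so
// canonical slugs like "gpt-5.4" still resolve to themselves.
fn resolve_model(name: &str) -> &str {
    match name {
        "gpt4" => "gpt-5.4",
        "mini" => "gpt-5.4-mini",
        "o3-mini" => "o3",
        other => other,
    }
}

fn main() {
    println!("{}", resolve_model("gpt4"));
}
```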

What’s special

  • Reasoning carries across turns. ZDR-encrypted reasoning items (encrypted_content) are captured, persisted in the session log, and echoed back on subsequent iterations so the model’s thinking continues across the conversation.
  • WebSocket streaming. When supported, Lime upgrades to the WebSocket variant of the Responses API for lower latency.
  • --reasoning-effort maps directly to the API field. Acceptable values: none, low, medium, high, xhigh.
  • --service-tier forwards to the API. Use priority for priority routing on supported plans.
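
The first point above is a round trip: encrypted reasoning items captured from one turn are persisted in the session log and replayed into the next request. The types and method names below are a sketch; only the encrypted_content field name comes from the Responses API:

```rust
// Sketch of carrying ZDR-encrypted reasoning across turns.
// `ReasoningItem` and `SessionLog` are illustrative stand-ins.
#[derive(Clone, PartialEq, Debug)]
struct ReasoningItem {
    encrypted_content: String, // opaque blob; never decrypted client-side
}

#[derive(Default)]
struct SessionLog {
    reasoning: Vec<ReasoningItem>,
}

impl SessionLog {
    // Capture reasoning items emitted during a turn.
    fn persist(&mut self, items: &[ReasoningItem]) {
        self.reasoning.extend_from_slice(items);
    }
    // Echo everything captured so far back into the next request,
    // so the model's thinking continues across the conversation.
    fn replay(&self) -> Vec<ReasoningItem> {
        self.reasoning.clone()
    }
}

fn main() {
    let mut log = SessionLog::default();
    log.persist(&[ReasoningItem { encrypted_content: "opaque-blob-1".into() }]);
    println!("replaying {} item(s)", log.replay().len());
}
```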

Common usage

```sh
lime --model gpt-5.4-mini --reasoning-effort high
lime --model o3 --service-tier priority
lime --model gpt-5.4 --output-schema ./schema.json
```
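
The --output-schema flag takes a JSON Schema file that constrains the model's structured output. A minimal example of what ./schema.json might contain (the shape is illustrative, not a required format):

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "confidence": { "type": "number" }
  },
  "required": ["summary"]
}
```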

Custom OpenAI deployments

For Azure, an OpenAI proxy, or any drop-in compatible service that implements the Responses API, set OPENAI_BASE_URL:

```sh
export OPENAI_API_KEY="<your-key>"
export OPENAI_BASE_URL="https://my-openai-proxy.example.com/v1"
```
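
Base-URL resolution reduces to "use the override when set, otherwise the default endpoint". A minimal sketch, assuming a hypothetical resolve_base_url helper:

```rust
// Sketch of base-URL resolution: OPENAI_BASE_URL, when set, overrides
// the default Responses API endpoint. The function name is illustrative.
const DEFAULT_BASE_URL: &str = "https://api.openai.com/v1";

fn resolve_base_url(env_override: Option<&str>) -> String {
    env_override.unwrap_or(DEFAULT_BASE_URL).to_string()
}

fn main() {
    println!("{}", resolve_base_url(std::env::var("OPENAI_BASE_URL").ok().as_deref()));
}
```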

For services that implement only the Chat Completions API, use the OpenAI-compatible provider instead.