Your AI Models. Managed.

The open-source dashboard for managing LLM providers.
Auto-detect, configure, and optimize your AI stack in one place.

npm i -g ondeckllm
View on GitHub

Works with your stack

OpenAI
Anthropic
Google
Ollama
Groq
Mistral
DeepSeek

How It Works

From zero to optimized in three steps.

1

Install

npm i -g ondeckllm && ondeckllm

One command. Opens a local dashboard on port 3900.

2

Auto-Detect

Finds your existing API keys, Ollama models, and OpenClaw config automatically.

3

Optimize

Set batting order, profiles, and fallbacks for every task type.
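A per-task batting order with fallbacks might look roughly like this in config form (a hypothetical sketch for illustration only; OnDeckLLM's actual schema and field names may differ):

```
{
  "taskTypes": {
    "coding": {
      "order": ["anthropic", "openai", "ollama"],
      "fallback": "ollama"
    },
    "summarization": {
      "order": ["groq", "mistral"],
      "fallback": "ollama"
    }
  }
}
```

The idea: each task type gets an ordered lineup of providers, with a designated fallback if everything ahead of it is unavailable.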

Features

Everything you need to manage your AI lineup.

Provider Hub

Manage all your LLM API keys in one place. One-click validation, balance checks, and status indicators.

Batting Order

Drag-and-drop model priority per task type. Set your starting lineup, pinch hitters, and bullpen.

Smart Profiles

Budget, Quality First, Local Only, Privacy Mode, Speed Demon. One-click switching.

Config Sync

Reads and writes your OpenClaw config. Atomic writes with automatic rollback on failure.
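The atomic-write-with-rollback pattern works by writing to a temp file and renaming it over the target, keeping a backup to restore on failure. Here is a minimal sketch of that technique (illustrative only, not OnDeckLLM's actual implementation; the function name and backup path are made up):

```typescript
import { promises as fs } from "fs";
import * as path from "path";

// Write a config file atomically: write the new contents to a temp file
// in the same directory, then rename() it over the target. rename() is
// atomic on POSIX filesystems, so readers never see a half-written file.
// A .bak copy of the previous config allows rollback if anything fails.
async function writeConfigAtomic(target: string, data: object): Promise<void> {
  const dir = path.dirname(target);
  const tmp = path.join(dir, `.${path.basename(target)}.${process.pid}.tmp`);
  const backup = `${target}.bak`;
  try {
    // Back up the current config; the target may not exist on first write.
    await fs.copyFile(target, backup).catch(() => {});
    await fs.writeFile(tmp, JSON.stringify(data, null, 2), "utf8");
    await fs.rename(tmp, target); // atomic replace
  } catch (err) {
    // Restore the backup (if one exists) and clean up the temp file.
    await fs.copyFile(backup, target).catch(() => {});
    await fs.unlink(tmp).catch(() => {});
    throw err;
  }
}
```

Because the rename is all-or-nothing, a running agent reading the config concurrently sees either the old version or the new one, never a torn write.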

Ollama Wizard

One-click local model setup. Browse, pull, and configure models with guided starter packs.

CloakClaw Integration

Privacy proxy integration. Enable Privacy Mode and all cloud calls route through CloakClaw automatically.

Stop editing JSON configs.
Start managing your AI lineup.

Works With

Part of the Canonflip ecosystem.

OpenClaw

Direct config sync. Changes in OnDeckLLM reflect instantly in your running OpenClaw agent.

openclaw.com →

CloakClaw

Privacy proxy for your LLM calls. Strip PII before it hits the cloud. Automatic integration.

cloakclaw.com →