OpenCode is an open-source, terminal-based AI coding assistant, in the vein of Claude Code or Cursor, except you can hook it up to your own models. By default it uses cloud providers, but the interesting bit is pairing it with LM Studio to run everything locally.

install opencode

curl -fsSL https://opencode.ai/install | bash
opencode --version

Config lives in ~/.config/opencode/.

point it at LM Studio

Fire up LM Studio and load a code-focused model; something like Qwen3 Coder 30B works well if your hardware can handle it. Make sure the local server is running: by default it listens on http://localhost:1234.
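
Before touching any config, it's worth confirming the server is actually up. LM Studio's OpenAI-compatible API includes a /v1/models endpoint that lists whatever is loaded (assuming the default port):

```shell
# List the models LM Studio's local server currently exposes.
# Assumes the server is running on the default port 1234.
curl -s http://localhost:1234/v1/models
```

If this hangs or errors out, the server isn't running yet and no amount of opencode config will help.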

Then edit ~/.config/opencode/opencode.json:

{
    "$schema": "https://opencode.ai/config.json",
    "provider": {
        "lmstudio": {
            "npm": "@ai-sdk/openai-compatible",
            "name": "LM Studio (local)",
            "options": {
                "baseURL": "http://localhost:1234/v1"
            },
            "models": {
                "zai-org/glm-4.7-flash": {
                    "name": "glm-4.7-flash"
                }
            }
        }
    }
}
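
A stray comma or missing brace in this file tends to make the provider silently fail to appear, so it's worth validating the JSON before launching. One quick sketch, using Python's stdlib json.tool:

```shell
# Validate the config's JSON syntax; prints "config OK" only if it parses.
python3 -m json.tool ~/.config/opencode/opencode.json > /dev/null && echo "config OK"
```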

The key bit is @ai-sdk/openai-compatible – LM Studio exposes an OpenAI-compatible API, so this just works.
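
Because the API is OpenAI-compatible, you can also talk to it directly with curl, which is handy for debugging when opencode can't see the model. A minimal chat completion, using the model id from the config above:

```shell
# Minimal chat completion against LM Studio's local server.
# The model id must match one loaded in LM Studio (here, the id from the config above).
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zai-org/glm-4.7-flash",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```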

Launch opencode and run /model to verify. You should see “LM Studio (local)” as an available provider.

why this matters

The nice thing about running everything locally is that your code never leaves your machine, there are no token limits or API costs, and it works offline. The tradeoff is output quality: local models are getting better fast but still lag behind frontier cloud models for complex reasoning. For code review, refactoring suggestions, and boilerplate generation, though, they work surprisingly well.
