Configuring Models
Explore how to configure AI language models in OpenCode, including selecting defaults, using variants, and running local models. Understand the trade-offs of cloud versus local options, manage performance and privacy, and gain flexibility by switching models for different coding tasks.
We'll cover the following...
- How do we select a model?
- What models work well with OpenCode?
- How do we set a default model?
- How do we configure model options?
- What are model variants?
- How do we run local models?
- How do we set up a local model with LM Studio?
- How do we use Ollama for local models?
- What about other local model options?
- How does OpenCode choose which model to use?
- When should we use local vs. cloud models?
- How do we switch models quickly?
- What’s next?
Up to this point, we’ve been using OpenCode with the default model. But OpenCode supports 75+ language models from various providers, including the ability to run them locally on our own machines. Understanding how to configure and switch between models gives us control over cost, performance, and privacy.
Different models have different strengths. Some excel at code generation, others at reasoning or planning. Some are fast and cheap, others are slower but more capable. Learning how to configure models lets us choose the right tool for each task.
How do we select a model?
The simplest way to change models is to use the /models command in OpenCode. This shows all available models from configured providers and lets us select one interactively.
Type /models and OpenCode displays a list of models we can use. Select one, and OpenCode switches to it immediately for the current session.
This is useful for quick experiments or for trying different models without permanently changing our configuration.
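If we want a choice to persist across sessions rather than apply only to the current one, the selection can also be written to OpenCode's JSON config file. The snippet below is a minimal sketch: it assumes the conventional `opencode.json` filename and a top-level `model` key in `provider/model` form, and the model identifier shown is a placeholder; the exact identifiers to use are the ones the `/models` picker displays for our configured providers.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5"
}
```

With this in place, OpenCode starts with the configured model by default, and `/models` remains available for switching within a session.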
What models work well with OpenCode?
Not all language models are equally good at coding tasks. OpenCode requires models that excel at both code generation and tool calling: the ability to decide which tools to use and when.
Models that work particularly well include:
- GPT 5.2 and GPT 5.1 Codex from OpenAI
- Claude Opus 4.5 and Claude Sonnet 4.5 from Anthropic
- Gemini 3 Pro from Google
- Minimax M2.1
- Kimi-K2.5
These models understand code context, can reason about complex requirements, and reliably call tools when needed. Models that struggle with tool calling or code generation will make OpenCode less effective.
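To make "tool calling" concrete, here is an illustrative model response in the widely used OpenAI-style function-calling shape. This is not OpenCode's internal wire format, and the `read_file` tool name and its arguments are hypothetical; the point is that an agent like OpenCode depends on the model reliably emitting structured calls like this instead of plain prose.

```json
{
  "role": "assistant",
  "tool_calls": [
    {
      "id": "call_1",
      "type": "function",
      "function": {
        "name": "read_file",
        "arguments": "{\"path\": \"src/app.ts\"}"
      }
    }
  ]
}
```

A model that instead describes the action in free text ("I would now open src/app.ts...") forces the agent to guess, which is why tool-calling reliability matters as much as raw code quality.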
The ...