AI Features

Configuring Models

Explore how to configure AI language models in OpenCode, including selecting defaults, using variants, and running local models. Understand the trade-offs of cloud versus local options, manage performance and privacy, and gain flexibility by switching models for different coding tasks.

Up to this point, we’ve been using OpenCode with the default model. But OpenCode supports 75+ language models from various providers, including the ability to run them locally on our own machines. Understanding how to configure and switch between models gives us control over cost, performance, and privacy.

Different models have different strengths. Some excel at code generation, others at reasoning or planning. Some are fast and cheap, others are slower but more capable. Learning how to configure models lets us choose the right tool for each task.

How do we select a model?

The simplest way to change models is the /models command. It lists every available model from the configured providers and lets us select one interactively.

Type /models, pick a model from the list, and OpenCode switches to it immediately for the current session.

This is useful for quick experiments or for trying different models without permanently changing our configuration.
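To make a model choice persistent across sessions rather than per-session, it can be set in OpenCode's JSON configuration file. A minimal sketch is shown below; it assumes a `model` field in `opencode.json` that takes a `provider/model-id` string, so check your OpenCode version's docs for the exact field name and model identifiers:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5"
}
```

With a default set here, /models still works for temporary switches; the config value is simply what each new session starts with.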

What models work well with OpenCode?

Not all language models are equally good at coding tasks. OpenCode requires models that excel at both code generation and tool calling: the ability to decide which tools to use and when.

Models that work particularly well include:

  • GPT 5.2 and GPT 5.1 Codex from OpenAI

  • Claude Opus 4.5 and Claude Sonnet 4.5 from Anthropic

  • Gemini 3 Pro from Google

  • Minimax M2.1

  • Kimi-K2.5

These models understand code context, can reason about complex requirements, and reliably call tools when needed. Models that struggle with tool calling or code generation will make OpenCode noticeably less effective, since most of its work happens through tool use.

The ...