
Maximize AI credits

Get the most out of your Dyad Pro AI credits

If you want to learn how to make the most of your Dyad Pro AI credits, keep reading.

If you only have 30 seconds, here's the short version: use Smart Auto (or another cost-efficient model), leave Smart Context and Turbo Edits enabled, and start a new chat when you switch tasks. The rest of this page explains why.

How AI credits work

Each time you use AI with your Dyad Pro key, you consume part of your monthly allotment of AI credits.

AI credit usage boils down to two factors:

Number of tokens x Cost per token = AI credits used

You can think of a token as a unit of text, like a word, that an AI processes. If you're working on a large codebase, it takes many tokens for the AI to process that codebase. Likewise, if you're using an expensive model, each of those tokens costs more.

So the key to maximizing your AI credits is to use the most cost-efficient models and minimize the number of tokens you use.
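As a rough sketch, the formula above can be written in a few lines of Python. The model names and per-token rates here are made-up placeholders for illustration, not Dyad's actual internal pricing:

```python
# Hypothetical credit rates per million tokens (placeholders, not real Dyad rates).
CREDITS_PER_MILLION_TOKENS = {
    "frontier-model": 2000,
    "budget-model": 500,
}

def credits_used(tokens: int, model: str) -> float:
    """Number of tokens x cost per token = AI credits used."""
    return tokens * CREDITS_PER_MILLION_TOKENS[model] / 1_000_000

# The same 50,000-token request costs 4x more on the pricier model.
print(credits_used(50_000, "frontier-model"))  # 100.0 credits
print(credits_used(50_000, "budget-model"))    # 25.0 credits
```

Both levers matter: shrinking the token count or switching to a cheaper model reduces the product, and doing both compounds the savings.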

Choose the right model

In this section, we'll cover how to pick the most cost-effective model.

Internally, Dyad Pro has a cost per token for each model, based on the cost AI providers charge us. You can check OpenRouter to look up the cost of various AI models.

The best models have significantly different prices

This means some models like Claude Sonnet 4 are significantly more expensive than other models like Gemini Pro 2.5. For example, Claude Sonnet 4 is roughly twice as expensive as Gemini Pro 2.5 even though both are frontier models with similar levels of performance. For this reason, we recommend using Gemini Pro 2.5 over Claude Sonnet 4 for most use cases.

Open-weight models deliver excellent value

Under the OpenRouter AI provider, you can find several of the best open-weight AI models like DeepSeek v3, DeepSeek R1, and Kimi K2. These models are almost as good as the leading proprietary AI models but at a fraction of the price.

For example, Kimi K2 is less than half the price of Gemini Pro 2.5 with comparable benchmark performance!

Use smaller models for simple changes

Oftentimes, if you're making a straightforward change like adjusting the UI (e.g. changing colors and layouts), you can use a smaller and cheaper model like Gemini Flash 2.5. Gemini Flash 2.5 delivers ~80% of the performance of its big brother Gemini Pro 2.5 for a quarter of the cost.
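To make the price gaps above concrete, here's a small comparison sketch. The multipliers follow the ratios described in this section (Claude Sonnet 4 roughly twice Gemini Pro 2.5, Kimi K2 less than half, Gemini Flash 2.5 a quarter), but the baseline rate itself is a made-up placeholder:

```python
# Hypothetical baseline: credits per 1,000 tokens for Gemini Pro 2.5.
BASELINE = 1.0

# Relative cost multipliers based on the rough ratios described above.
RELATIVE_COST = {
    "Claude Sonnet 4": 2.0,    # roughly twice Gemini Pro 2.5
    "Gemini Pro 2.5": 1.0,
    "Kimi K2": 0.4,            # less than half the price
    "Gemini Flash 2.5": 0.25,  # a quarter of the cost
}

def estimate_credits(tokens: int, model: str) -> float:
    return (tokens / 1_000) * BASELINE * RELATIVE_COST[model]

# The same 20,000-token request, priced across models.
for model, multiplier in RELATIVE_COST.items():
    print(f"{model}: {estimate_credits(20_000, model):.1f} credits")
```

The takeaway: for the same request, model choice alone can change your credit spend by up to 8x between the most and least expensive options here.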

Use Smart Auto

If all the guidance above seems a bit overwhelming, don't worry—this is why Dyad Pro offers a simple option: Smart Auto, which automatically chooses the best, most cost-efficient model for you.

Optimize your token usage

The biggest factor in your token usage for most projects is codebase size. This is why Dyad Pro provides Smart Context.

Use Pro modes

Smart Context optimizes your input tokens so that only the relevant files in your codebase are sent to the main AI model. Smart Context itself uses a smaller AI model to preprocess your AI request to understand which parts of the codebase are relevant.

Turbo Edits optimizes your output tokens by having the main AI model only write out the changes. This way it doesn't need to waste tokens re-outputting the code that's unmodified. Turbo Edits uses a smaller model optimized for code edits to apply the changes to the existing file.
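A rough sketch of why these two modes save tokens. The file names, token counts, and selections below are invented purely for illustration:

```python
# Invented example: approximate token counts per file in a small codebase.
codebase = {"App.tsx": 3_000, "Button.tsx": 800, "api.ts": 1_200, "utils.ts": 600}

# Without Smart Context, every file is sent to the main model as input tokens.
full_input = sum(codebase.values())

# With Smart Context, a smaller model first picks only the relevant files,
# e.g. a request that only touches button styling.
relevant = ["Button.tsx"]
smart_input = sum(codebase[f] for f in relevant)

# Without Turbo Edits, the main model re-outputs the whole modified file;
# with Turbo Edits it writes only the changed lines, and a smaller
# edit-optimized model applies them to the existing file.
full_output = codebase["Button.tsx"]
turbo_output = 50  # just the edited lines

print(f"Input tokens:  {full_input} -> {smart_input}")
print(f"Output tokens: {full_output} -> {turbo_output}")
```

In this toy scenario, input tokens drop from 5,600 to 800 and output tokens from 800 to 50; real savings depend on your codebase and the change being made.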

Both Smart Context and Turbo Edits are enabled by default, so there's nothing you need to do if you haven't disabled them before.

Start new chats

Your chat history also uses up your context window, so we recommend starting a new chat when possible. This keeps your interactions with the AI focused and efficient. You can also use the "Summarize chat" suggestion above the chat input box to condense your existing conversation into a short summary and use it as the starting point for a new chat.

Use manual context management

Manual context management lets you select the files you want to work on. It can help even if you're using Smart Context, because Smart Context defaults to the whole codebase when it isn't sure which files are relevant to a given AI request. This is an advanced feature, recommended only for power users who are familiar with their codebase's structure.
