notewise supports a wide range of LLM providers via LiteLLM. You can use any model LiteLLM supports by setting the --model flag or the DEFAULT_MODEL config key.

Model string format

provider/model-name
gemini/gemini-2.5-flash
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama3-70b-8192
Some models can be specified without a provider prefix when the name is unambiguous:
gpt-4o                        # routes to OpenAI
claude-3-5-sonnet-20241022    # routes to Anthropic
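The split is simple: everything before the first `/` names the provider, everything after is passed through as the model id. A minimal illustration in Python (not notewise's actual code):

```python
# "provider/model-name": the prefix before the first "/" selects the
# backend; the remainder is the model id sent to that backend.
model = "gemini/gemini-2.5-flash"
provider, _, name = model.partition("/")
# provider == "gemini", name == "gemini-2.5-flash"
```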

Supported providers

| Provider | Config Key | Default Model |
|---|---|---|
| Google Gemini | GEMINI_API_KEY | gemini/gemini-2.5-flash |
| OpenAI | OPENAI_API_KEY | openai/gpt-4o |
| Anthropic | ANTHROPIC_API_KEY | anthropic/claude-3-5-sonnet-20241022 |
| Groq | GROQ_API_KEY | groq/llama3-70b-8192 |
| xAI | XAI_API_KEY | xai/grok-4 |
| Mistral | MISTRAL_API_KEY | mistral/mistral-large-latest |
| Cohere | COHERE_API_KEY | cohere/command-r-plus |
| DeepSeek | DEEPSEEK_API_KEY | deepseek/deepseek-chat |
Gemini is the default provider — no billing required to start. Get your free API key at AI Studio.

Provider routing

If the model string contains a `/`, the prefix before the slash is taken as the provider. Otherwise, the provider is inferred from heuristics on the model name's prefix:
| Model Prefix | Provider | Required Env Var |
|---|---|---|
| gemini/* | Google Gemini | GEMINI_API_KEY |
| vertex/*, vertex_ai/* | Google Vertex | GEMINI_API_KEY |
| openai/*, gpt*, o1*, o3*, o4* | OpenAI | OPENAI_API_KEY |
| anthropic/*, claude* | Anthropic | ANTHROPIC_API_KEY |
| groq/* | Groq | GROQ_API_KEY |
| xai/*, grok* | xAI | XAI_API_KEY |
| mistral/* | Mistral | MISTRAL_API_KEY |
| cohere/*, command* | Cohere | COHERE_API_KEY |
| deepseek/* | DeepSeek | DEEPSEEK_API_KEY |
If the API key for the resolved provider is missing, the pipeline logs an error and the video fails.
Gateway prefixes (azure, openrouter, vercel_ai_gateway) are not natively supported: the key lookup returns no API key for them, and their credentials must be supplied through their own environment variables separately.
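The routing rules above can be sketched as follows. This is an illustrative reconstruction from the tables, not notewise's actual implementation; the function and mapping names are assumptions.

```python
# Explicit "provider/" prefixes mapped to their required env var
# (from the provider-routing table above).
PROVIDER_ENV = {
    "gemini": "GEMINI_API_KEY",
    "vertex": "GEMINI_API_KEY",
    "vertex_ai": "GEMINI_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY",
    "xai": "XAI_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "cohere": "COHERE_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}

# Bare-name heuristics for model strings without a provider prefix.
NAME_PREFIXES = [
    ("gpt", "openai"), ("o1", "openai"), ("o3", "openai"), ("o4", "openai"),
    ("claude", "anthropic"),
    ("grok", "xai"),
    ("command", "cohere"),
]

# Gateways are recognized but have no native API-key lookup.
GATEWAY_PREFIXES = {"azure", "openrouter", "vercel_ai_gateway"}

def resolve_provider(model: str):
    """Return (provider, env_var); env_var is None for gateways."""
    if "/" in model:
        provider = model.split("/", 1)[0]
        if provider in GATEWAY_PREFIXES:
            return provider, None          # configure via env vars yourself
        return provider, PROVIDER_ENV.get(provider)
    for prefix, provider in NAME_PREFIXES:
        if model.startswith(prefix):
            return provider, PROVIDER_ENV[provider]
    return None, None                      # unrecognized model string
```

For example, `resolve_provider("gpt-4o")` routes to OpenAI via the bare-name heuristic, while `resolve_provider("azure/gpt-4o")` returns no key, matching the gateway behavior described above.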

Quickstart: Changing the default model

Edit ~/.notewise/config.env:
DEFAULT_MODEL=gpt-4o
OPENAI_API_KEY=sk-...
Or override for a single run:
notewise process "URL" --model gpt-4o

Usage tracking

After each video, notewise records token usage and estimated cost in SQLite.
notewise stats                                       # all-time totals
notewise stats --model gemini/gemini-2.5-flash       # filter by model
notewise stats --since 7d                            # last 7 days
Cost estimates come from LiteLLM’s completion_cost() and are approximate.
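The per-video record might look like the following sketch. The table name and columns here are assumptions for illustration, not notewise's actual schema.

```python
import sqlite3

# Hypothetical usage table: one row per processed video.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS usage (
        ts TEXT DEFAULT CURRENT_TIMESTAMP,
        model TEXT NOT NULL,
        prompt_tokens INTEGER,
        completion_tokens INTEGER,
        cost_usd REAL            -- approximate, from completion_cost()
    )
""")
conn.execute(
    "INSERT INTO usage (model, prompt_tokens, completion_tokens, cost_usd)"
    " VALUES (?, ?, ?, ?)",
    ("gemini/gemini-2.5-flash", 12000, 900, 0.004),
)

# `notewise stats` style aggregate: total tokens across all rows.
total_tokens = conn.execute(
    "SELECT SUM(prompt_tokens + completion_tokens) FROM usage"
).fetchone()[0]
```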
See also: Configuration reference for all model and API key options.
Last modified on March 28, 2026