LLM Providers

Errand works with any LLM provider that offers an OpenAI-compatible API — which, through LiteLLM, means virtually all of them. This page surveys the major providers and services to help you decide where to get your models.

All of the providers listed here connect to Errand through LiteLLM. You configure your provider credentials and models in LiteLLM once, and then select the models you want to use from within Errand’s settings.
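As a sketch of what that one-time setup looks like (the aliases, model IDs, and environment-variable names below are illustrative, not recommendations), a minimal LiteLLM `config.yaml` might be:

```yaml
# Minimal LiteLLM proxy config — aliases and model IDs are illustrative.
model_list:
  - model_name: claude-sonnet            # the alias Errand will see
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
```

The `os.environ/` prefix tells LiteLLM to read the key from an environment variable at runtime rather than storing it in the file.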

Direct providers are the companies that build and host their own AI models. You sign up for an account, get an API key, and pay per use. This is the simplest way to get started: no hardware required, and you get access to the most capable models available.

Anthropic builds the Claude family of models, which are known for strong reasoning, reliable tool use, and careful instruction following. Claude models are a popular choice for agentic workloads like Errand because of their consistency in multi-step tasks.

| Tier | Model | Notes |
| --- | --- | --- |
| Frontier | Claude Opus 4 | Best reasoning and tool use |
| Balanced | Claude Sonnet 4 | Excellent all-rounder, great value |
| Efficient | Claude Haiku 4.5 | Fast and affordable for simple tasks |

Strengths: Reliable tool calling, strong instruction following, long context windows, thoughtful safety design.

Sign up at anthropic.com.

OpenAI produces the GPT family of models, the most widely used commercial AI models. The ecosystem is mature, with extensive documentation and broad tool support.

| Tier | Model | Notes |
| --- | --- | --- |
| Frontier | GPT-4.1 | Latest flagship model |
| Balanced | GPT-4o | Strong general-purpose model |
| Efficient | GPT-4o Mini | Compact and cost-effective |

OpenAI also provides the Whisper speech-to-text API, which is the standard for Errand’s transcription feature.

Strengths: Mature ecosystem, extensive documentation, broad compatibility, Whisper API for transcription.

Sign up at platform.openai.com.

Google’s Gemini models offer very large context windows and competitive performance, particularly in the Balanced and Efficient tiers. Google’s pricing is often attractive, especially for the Flash models.

| Tier | Model | Notes |
| --- | --- | --- |
| Frontier | Gemini 2.5 Pro | Large context window, strong reasoning |
| Balanced | Gemini 2.5 Flash | Excellent performance-to-cost ratio |
| Efficient | Gemini 2.0 Flash | Very fast, very affordable |

Strengths: Large context windows, competitive pricing, strong performance in the Balanced tier.

Sign up at ai.google.dev.

xAI is a newer entrant whose Grok models offer strong reasoning capabilities. The Grok family has been improving rapidly and is worth considering, particularly if you are looking for alternatives to the established providers.

| Tier | Model | Notes |
| --- | --- | --- |
| Frontier | Grok 3 | Strong reasoning, large context |
| Balanced | Grok 3 Mini | Compact, capable model |

Strengths: Competitive reasoning performance, growing model lineup.

Sign up at console.x.ai.

Several other providers offer capable models that work well with Errand through LiteLLM:

  • Mistral — European AI company offering strong open-weight models. Models like Mistral Large sit in the Balanced tier. A good choice if data residency in the EU matters to you. Sign up at mistral.ai.

  • DeepSeek — Offers highly capable models at competitive price points. DeepSeek-R1 is known for strong reasoning. Worth considering for cost-conscious deployments. Sign up at platform.deepseek.com.

  • Cohere — Focuses on enterprise use cases with models designed for search, retrieval, and business applications. Sign up at cohere.com.

LiteLLM supports over 100 providers. If your preferred provider is not listed here, check the LiteLLM documentation to see if it is supported.
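LiteLLM's routing convention is a `provider/model` prefix on the model string. As a simplified sketch (the model names below are examples, and real LiteLLM also infers providers for well-known bare names like `gpt-4o`):

```python
# Sketch of LiteLLM's "provider/model" routing convention.
# Simplified: bare names are treated as OpenAI here, whereas LiteLLM
# also infers providers for other well-known model names.
def provider_of(model: str) -> str:
    """Return the provider prefix a model string would route to."""
    return model.split("/", 1)[0] if "/" in model else "openai"

for m in [
    "anthropic/claude-sonnet-4-20250514",
    "gemini/gemini-2.5-flash",
    "openrouter/meta-llama/llama-3-70b-instruct",
    "gpt-4o",
]:
    print(f"{m} -> {provider_of(m)}")
```

The same convention is what lets you swap providers by editing only the model string, leaving the rest of your configuration untouched.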

Aggregators and cloud platforms give you access to models from multiple providers through a single account. They sit between LiteLLM and the model providers, handling authentication and billing in one place.

OpenRouter provides access to models from Anthropic, OpenAI, Google, Meta, Mistral, and many others — all through a single API key. It is a good choice if you want to experiment with models from different providers without signing up for separate accounts with each one.

OpenRouter also offers access to open-source and community models that might not be available directly from the original providers.

Best for: Individuals and small teams who want flexibility and easy access to a wide range of models.
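In LiteLLM, an OpenRouter model is configured like any other; only the prefix changes. A sketch, where the alias and model ID are illustrative:

```yaml
# Illustrative LiteLLM entry for a model accessed via OpenRouter.
model_list:
  - model_name: claude-via-openrouter    # hypothetical alias
    litellm_params:
      model: openrouter/anthropic/claude-sonnet-4
      api_key: os.environ/OPENROUTER_API_KEY
```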

Amazon Bedrock is Amazon’s managed AI service. It provides access to models from Anthropic (Claude), Meta (Llama), Mistral, and others, all within the AWS ecosystem. Your data stays within your AWS account and is covered by your existing AWS agreements.

Best for: Teams and organisations already on AWS who need enterprise compliance, billing through their existing AWS account, or data residency guarantees.
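In LiteLLM, Bedrock models use the `bedrock/` prefix and pick up AWS credentials from your environment or instance role rather than a provider API key. A sketch, with an illustrative model ID and region:

```yaml
# Illustrative LiteLLM entry for a Claude model hosted on Bedrock.
model_list:
  - model_name: claude-on-bedrock        # hypothetical alias
    litellm_params:
      model: bedrock/anthropic.claude-sonnet-4-20250514-v1:0
      aws_region_name: eu-west-1
```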

Azure OpenAI Service provides access to OpenAI’s GPT models within the Microsoft Azure cloud. Like Bedrock, it is designed for enterprise deployments with existing cloud commitments.

Best for: Teams and organisations already on Azure who need enterprise compliance and want to use GPT models through their existing Azure agreement.

Vertex AI is Google Cloud’s AI platform. It provides access to Gemini models and other Google AI capabilities within the GCP ecosystem.

Best for: Teams and organisations already on Google Cloud who want to use Gemini models through their existing GCP account.

There is no single “best” provider — the right choice depends on your situation:

  • Getting started as an individual? Sign up directly with one of the major cloud providers (Anthropic, OpenAI, or Google) and use their API key in LiteLLM. Alternatively, use OpenRouter for easy access to models from multiple providers with a single account.

  • Working in a team or enterprise? Use the cloud platform your organisation already has a relationship with. If you are on AWS, use Bedrock. On Azure, use Azure OpenAI. On GCP, use Vertex AI. This simplifies billing, compliance, and access management.

  • Want to compare models? Start with OpenRouter or set up multiple providers in LiteLLM. You can easily switch between models to find what works best for your tasks.

  • Privacy or data residency concerns? Consider a European provider like Mistral, a cloud platform with data residency options (Bedrock, Azure, Vertex), or run models locally.

Whichever provider you choose, the experience in Errand is the same — LiteLLM handles the translation between providers, so you can always change your mind later without reconfiguring Errand itself.