
Docker Install

Docker Compose is the quickest way to run Errand AI on any machine with Docker installed. It sets up all the required services in containers and gets you to a working system in just a few steps.

Before you begin, you will need:

  • Docker installed and running
  • Docker Compose (included with Docker Desktop, or install separately on Linux)
  • An API key from at least one LLM provider (e.g. Anthropic, OpenAI)
  • (Optional) An API key for an LLM provider that offers transcription models (e.g. Groq’s whisper-large-v3)
  • (Optional) API keys or credentials for any integrations you want to use (e.g. Google Drive, OneDrive, Slack)

These instructions assume that you will use LiteLLM to manage your connections to LLM providers. If you want to connect Errand directly to an LLM provider without using LiteLLM, see the Advanced Configuration section below.

Clone the repository and change into the deployment directory:

Terminal window
git clone https://github.com/errand-ai/errand.git
cd errand/deploy
  1. Copy the example environment file:

    Terminal window
    cp .env.example .env
  2. Open .env in a text editor and set the following values:

    | Variable | Description | Default |
    | --- | --- | --- |
    | ADMIN_USERNAME | Username for the admin account | admin |
    | ADMIN_PASSWORD | Password for the admin account | changeme |
    | CREDENTIAL_ENCRYPTION_KEY | Encryption key for stored credentials (see below) | |
    | LITELLM_MASTER_KEY | Master key for LiteLLM proxy authentication | sk-12345678 |
    | OPENAI_BASE_URL | Base URL for your LLM provider (or a LiteLLM proxy) | http://litellm:4000 |
    | OPENAI_API_KEY | Your LLM provider API key | |
  3. Generate an encryption key by running this command in your terminal:

    Terminal window
    python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

    Copy the output and paste it as the value for CREDENTIAL_ENCRYPTION_KEY in your .env file.
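After these steps, the LLM-related section of your .env file might look like the following. All values here are illustrative placeholders, not working credentials:

```sh
# Illustrative .env values -- replace each with your own
ADMIN_USERNAME=admin
ADMIN_PASSWORD=a-strong-password
CREDENTIAL_ENCRYPTION_KEY=<paste the output of the Fernet command here>
LITELLM_MASTER_KEY=sk-12345678
OPENAI_BASE_URL=http://litellm:4000
OPENAI_API_KEY=<filled in later with a LiteLLM virtual key>
```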

The easiest way to connect Errand to an LLM provider is through LiteLLM, which acts as a proxy and unified interface for multiple providers. To set this up:

Terminal window
docker compose up litellm

This will start the PostgreSQL and LiteLLM services. You can then access the LiteLLM dashboard at http://localhost:3000 to add your LLM provider API keys and configure models.

The LiteLLM dashboard login uses the username admin with the default password sk-12345678; you can change the password by setting a different value for LITELLM_MASTER_KEY in your .env file.
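Before continuing, you can check that the proxy itself is responding by hitting LiteLLM’s liveliness endpoint. This sketch assumes the proxy is published on port 4000 of the host; adjust the port to match your docker-compose.yml mapping:

```sh
# Returns a short "alive" message if the LiteLLM proxy is up
curl http://localhost:4000/health/liveliness
```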

There are two steps to complete in the LiteLLM dashboard: adding your provider models and creating virtual keys.

First, add a credential for your LLM provider, then create a model entry that uses that credential:

  1. Go to the “Models + Endpoints” page, select the “LLM Credentials” tab, and click “Add Credential”

  2. Choose your provider from the dropdown (e.g. OpenAI, Anthropic, Groq, etc.)

  3. Enter a name for this credential (e.g. “OpenAI Account”) and paste your API key

  4. Click “Add Credential”

  5. Select the “Add Model” tab.

  6. Choose the provider you just added the credential for.

  7. Choose the credential you just created from the dropdown.

  8. Select a model to add (e.g. gpt-4, gemini-2.5-flash, groq-whisper-large-v3, etc.)

  9. Click “Test Connection” to verify that LiteLLM can connect to the provider with the provided API key. You should see a success message if everything is correct.

  10. Click “Add Model” to save.

  11. Copy the “Model Name” value (e.g. gpt-4) and set it as the value for HINDSIGHT_API_LLM_MODEL in your .env file.

Repeat steps 5-10 for any additional models you want to use. For help deciding which models to add, see the AI Models guide.
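Once a model is added, you can optionally smoke-test LiteLLM’s OpenAI-compatible endpoint from the command line, using your LITELLM_MASTER_KEY as the bearer token and the Model Name you configured. The port, key, and model name below are the defaults from this guide; substitute your own values:

```sh
# Send a one-off chat completion through the LiteLLM proxy
curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-12345678" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Say hello"}]}'
```

A JSON response containing a `choices` array indicates the proxy can reach your provider.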

The second step is to create “virtual keys” that the Errand and Hindsight services will use to access the LLM provider through LiteLLM. This allows you to rotate or change your actual API keys in LiteLLM without needing to update the Errand configuration.

  1. Go to the “Virtual Keys” page in the LiteLLM dashboard.
  2. Click “Create New Key”.
  3. Select “Service Account” as the key owner.
  4. Enter “errand” as the service account ID.
  5. In the Models section, you can either select specific models that this key should have access to, or select “All Team Models”.
  6. Click “Create Key” to generate the virtual key.
  7. Copy the generated virtual key value (it will look like sk-xxxxxx) and paste it into the OPENAI_API_KEY variable in your .env file.
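At this point, the LLM-related values in .env should point at LiteLLM rather than directly at a provider. For example (the virtual key and model name shown are placeholders):

```sh
OPENAI_BASE_URL=http://litellm:4000
OPENAI_API_KEY=sk-xxxxxx          # the virtual key generated above
HINDSIGHT_API_LLM_MODEL=gpt-4     # the Model Name copied from LiteLLM
```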

Run the following command from the errand/deploy directory to start all the remaining services:

Terminal window
docker compose up

Docker Compose will start the following services:

| Service | Purpose |
| --- | --- |
| PostgreSQL | Database for tasks, users, and configuration |
| LiteLLM | Proxy for connecting to LLM providers |
| Hindsight | Persistent memory for AI agents |
| Valkey | In-memory cache for real-time coordination |
| Errand Server | API server and web UI (port 8000) |
| Google Drive MCP | File access for Google Drive integration |
| OneDrive MCP | File access for OneDrive integration |

Wait until you see log messages indicating that the server is ready.

  1. Open your web browser
  2. Navigate to http://localhost:8000
  3. Log in with the admin credentials you set in your .env file (default: admin / changeme)
  4. Open the “Settings” page and select the “Task Management” tab.
  5. You should see LiteLLM listed as the LLM provider. Select the model to use for task description parsing and initial processing.
  6. Select the “Default Model” to use for task execution. This can be the same model as the one used for task management, or a different one.
  7. (Optional) If you added a transcription model in LiteLLM, select that model in the “Transcription Model” dropdown to enable audio transcription capabilities.
  8. (Optional) If you want to use any integrations that require the Google Drive MCP or OneDrive MCP, go to the “Integrations” tab and enable those services by providing the necessary credentials.

You are now ready to create and run tasks with Errand AI.

To stop all services, press Ctrl+C in the terminal where Docker Compose is running, or run:

Terminal window
docker compose down

To stop and also remove stored data (database, cache), add the -v flag:

Terminal window
docker compose down -v

Errand supports horizontal scaling — you can add more worker replicas to execute tasks in parallel. For example, to run 3 workers:

Terminal window
docker compose up --build --scale worker=3

Each worker picks up tasks independently, so more workers means more tasks can run at the same time.

| Issue | Solution |
| --- | --- |
| port is already allocated error | Another application is using port 8000. Change the port mapping in docker-compose.yml or stop the conflicting application |
| Services restart repeatedly | Check logs with docker compose logs <service-name> to identify the failing service. Common causes are missing environment variables or invalid API keys |
| Cannot log in with default credentials | Confirm that ADMIN_USERNAME and ADMIN_PASSWORD are set correctly in your .env file and restart with docker compose up |
| LLM errors during task execution | Verify that OPENAI_API_KEY and OPENAI_BASE_URL are correct. Check that your account has available credits with your LLM provider |
| CREDENTIAL_ENCRYPTION_KEY error | Make sure you generated a valid Fernet key and pasted the full value into .env with no extra spaces or line breaks |
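If you suspect the encryption key itself is malformed, you can check it offline: a Fernet key is simply the URL-safe base64 encoding of 32 bytes. The helper below is a sketch of our own (not part of Errand) that validates the key’s shape before you restart the stack:

```python
import base64
import binascii


def is_valid_fernet_key(key: str) -> bool:
    """Return True if `key` decodes to exactly 32 bytes of URL-safe base64,
    which is the shape the Fernet format requires."""
    try:
        return len(base64.urlsafe_b64decode(key.strip().encode())) == 32
    except (binascii.Error, ValueError):
        return False


# A well-formed key is 44 characters ending in '='
print(is_valid_fernet_key("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="))  # True
```

Run it against the exact string stored in .env; stray whitespace or a truncated paste is the usual culprit.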