# Docker Install
Docker Compose is the quickest way to run Errand AI on any machine with Docker installed. It sets up all the required services in containers and gets you to a working system in just a few steps.
## Prerequisites

- Docker installed and running
- Docker Compose (included with Docker Desktop, or install separately on Linux)
- An API key from at least one LLM provider (e.g. Anthropic, OpenAI)
- (Optional) An API key for a provider offering transcription models (e.g. Groq’s whisper-large-v3)
- (Optional) API keys or credentials for any integrations you want to use (e.g. Google Drive, OneDrive, Slack)
These instructions assume that you will use LiteLLM to manage your connections to LLM providers. If you want to connect Errand directly to an LLM provider without using LiteLLM, see the Advanced Configuration section below.
## Step 1: Clone the repository

```shell
git clone https://github.com/errand-ai/errand.git
cd errand/deploy
```

## Step 2: Configure environment variables
1. Copy the example environment file:

   ```shell
   cp .env.example .env
   ```

2. Open `.env` in a text editor and set the following values:

   | Variable | Description | Default |
   |---|---|---|
   | `ADMIN_USERNAME` | Username for the admin account | `admin` |
   | `ADMIN_PASSWORD` | Password for the admin account | `changeme` |
   | `CREDENTIAL_ENCRYPTION_KEY` | Encryption key for stored credentials (see below) | — |
   | `LITELLM_MASTER_KEY` | Master key for LiteLLM proxy authentication | `sk-12345678` |
   | `OPENAI_BASE_URL` | Base URL for your LLM provider (or a LiteLLM proxy) | `http://litellm:4000` |
   | `OPENAI_API_KEY` | Your LLM provider API key | — |

3. Generate an encryption key by running this command in your terminal:

   ```shell
   python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
   ```

   Copy the output and paste it as the value for `CREDENTIAL_ENCRYPTION_KEY` in your `.env` file.
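After these edits, a minimal `.env` might look like the following. This is a sketch with illustrative values only: substitute your own password, your generated Fernet key, and (once you reach Step 3) a LiteLLM virtual key for `OPENAI_API_KEY`.

```
# Example .env (illustrative values only)
ADMIN_USERNAME=admin
ADMIN_PASSWORD=changeme
CREDENTIAL_ENCRYPTION_KEY=REPLACE_WITH_GENERATED_FERNET_KEY
LITELLM_MASTER_KEY=sk-12345678
OPENAI_BASE_URL=http://litellm:4000
OPENAI_API_KEY=REPLACE_WITH_VIRTUAL_KEY
```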
## Step 3: Configure LLM provider (LiteLLM)

The easiest way to connect Errand to an LLM provider is through LiteLLM, which acts as a proxy and unified interface for multiple providers. To set it up, start the LiteLLM service:

```shell
docker compose up litellm
```

This starts the PostgreSQL and LiteLLM services. You can then access the LiteLLM dashboard at http://localhost:3000 to add your LLM provider API keys and configure models.

The default password for the `admin` user of the LiteLLM dashboard is `sk-12345678`; you can change it by setting a different value for `LITELLM_MASTER_KEY` in your `.env` file.
There are two steps to complete in the LiteLLM dashboard:
### Step A: Add an LLM provider

The first step is to add a credential for your LLM provider, then create a model entry that uses that credential.
1. Go to the “Models + Endpoints” page, select the “LLM Credentials” tab, and click “Add Credential”.
2. Choose your provider from the dropdown (e.g. OpenAI, Anthropic, Groq).
3. Enter a name for this credential (e.g. “OpenAI Account”) and paste your API key.
4. Click “Add Credential”.
5. Select the “Add Model” tab.
6. Choose the provider you just added the credential for.
7. Choose the credential you just created from the dropdown, then select a model to add (e.g. `gpt-4`, `gemini-2.5-flash`, `groq-whisper-large-v3`).
8. Click “Test Connection” to verify that LiteLLM can connect to the provider with the provided API key. You should see a success message if everything is correct.
9. Click “Add Model” to save.
10. Copy the “Model Name” value (e.g. `gpt-4`) and set it as the value for `HINDSIGHT_API_LLM_MODEL` in your `.env` file.
Repeat steps 5-10 for any additional models you want to use. For help deciding which models to add, see the AI Models guide.
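As an alternative to clicking through the dashboard, LiteLLM can also load models from a `config.yaml` passed to the proxy. The sketch below is an assumption about your setup, not part of the default deployment; the exact schema can vary by LiteLLM version, so check the LiteLLM documentation before using it:

```yaml
model_list:
  - model_name: gpt-4                       # the name Errand will reference
    litellm_params:
      model: openai/gpt-4                   # provider/model identifier
      api_key: os.environ/OPENAI_API_KEY    # read the key from the environment
```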
### Step B: Create Virtual Keys

The second step is to create “virtual keys” that the Errand and Hindsight services will use to access the LLM provider through LiteLLM. This allows you to rotate or change your actual API keys in LiteLLM without needing to update the Errand configuration.
- Go to the “Virtual Keys” page in the LiteLLM dashboard.
- Click “Create New Key”.
- Select “Service Account” as the key owner.
- Enter “errand” as the service account ID.
- In the Models section, you can either select specific models that this key should have access to, or select “All Team Models”.
- Click “Create Key” to generate the virtual key.
- Copy the generated virtual key value (it will look like `sk-xxxxxx`) and paste it into the `OPENAI_API_KEY` variable in your `.env` file.
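If you want to sanity-check the proxy from your host before starting the rest of the stack, you can build an OpenAI-compatible chat request against it with the virtual key. The sketch below uses only the Python standard library and assumes the proxy is exposed on localhost port 4000 (matching the in-network default `http://litellm:4000`); the URL, model name, and key are placeholders for your own values.

```python
import json
import urllib.request

LITELLM_URL = "http://localhost:4000/v1/chat/completions"  # adjust to your setup
VIRTUAL_KEY = "sk-xxxxxx"  # the virtual key generated in Step B

# Build an OpenAI-compatible chat completion request.
req = urllib.request.Request(
    LITELLM_URL,
    data=json.dumps(
        {
            "model": "gpt-4",  # a model name you registered in Step A
            "messages": [{"role": "user", "content": "Say hello"}],
        }
    ).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {VIRTUAL_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request once the proxy is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

A `200` response with a `choices` array confirms that the virtual key, model name, and proxy address all line up before Errand ever makes a call.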
## Step 4: Start Errand

Run the following command from the project directory to start all the remaining services:

```shell
docker compose up
```

Docker Compose will start the following services:
| Service | Purpose |
|---|---|
| PostgreSQL | Database for tasks, users, and configuration |
| LiteLLM | Proxy for connecting to LLM providers |
| Hindsight | Persistent memory for AI agents |
| Valkey | In-memory cache for real-time coordination |
| Errand Server | API server and web UI (port 8000) |
| Google Drive MCP | File access for Google Drive integration |
| OneDrive MCP | File access for OneDrive integration |
Wait until you see log messages indicating that the server is ready.
## Step 5: Open the Errand UI

- Open your web browser.
- Navigate to http://localhost:8000.
- Log in with the admin credentials you set in your `.env` file (default: `admin` / `changeme`).
- Select the “Settings” page and the “Task Management” tab.
- You should see LiteLLM listed as the LLM provider. Select the model to use for task description parsing and initial processing.
- Select the “Default Model” to use for task execution. This can be the same model as the one used for task management, or a different one.
- (Optional) If you added a transcription model in LiteLLM, select it in the “Transcription Model” dropdown to enable audio transcription.
- (Optional) If you want to use integrations that require the Google Drive MCP or OneDrive MCP, go to the “Integrations” tab and enable those services by providing the necessary credentials.
You are now ready to create and run tasks with Errand AI.
## Stopping Errand

To stop all services, press Ctrl+C in the terminal where Docker Compose is running, or run:

```shell
docker compose down
```

To stop and also remove stored data (database, cache), add the `-v` flag:

```shell
docker compose down -v
```

## Scaling workers
Errand supports horizontal scaling: you can add more worker replicas to execute tasks in parallel. For example, to run 3 workers:

```shell
docker compose up --build --scale worker=3
```

Each worker picks up tasks independently, so more workers means more tasks can run at the same time.
## Troubleshooting

| Issue | Solution |
|---|---|
| `port is already allocated` error | Another application is using port 8000. Change the port mapping in `docker-compose.yml` or stop the conflicting application |
| Services restart repeatedly | Check logs with `docker compose logs <service-name>` to identify the failing service. Common causes are missing environment variables or invalid API keys |
| Cannot log in with default credentials | Confirm that `ADMIN_USERNAME` and `ADMIN_PASSWORD` are set correctly in your `.env` file and restart with `docker compose up` |
| LLM errors during task execution | Verify that `OPENAI_API_KEY` and `OPENAI_BASE_URL` are correct. Check that your account has available credits with your LLM provider |
| `CREDENTIAL_ENCRYPTION_KEY` error | Make sure you generated a valid Fernet key and pasted the full value into `.env` with no extra spaces or line breaks |
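If you suspect the key itself is the problem, you can check it with the same `cryptography` package used to generate it. A small sketch (assumes `cryptography` is installed; it generates a fresh key so the snippet is self-contained, but you would substitute your `CREDENTIAL_ENCRYPTION_KEY` value):

```python
from cryptography.fernet import Fernet

# Substitute the key from your .env; a freshly generated one is used here.
key = Fernet.generate_key()

# Fernet(...) raises ValueError if the key is not 32 url-safe
# base64-encoded bytes, e.g. when whitespace or characters were lost.
f = Fernet(key)

# A successful encrypt/decrypt round trip confirms the key works.
token = f.encrypt(b"test")
assert f.decrypt(token) == b"test"
print("key is valid")
```

If the constructor raises `ValueError`, regenerate the key and paste it again in one piece.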