
Building Qumio

How a morning routine problem led to a Telegram AI assistant running in Docker.

3 min read · Qumio · OpenClaw · Telegram · Docker

I was opening five apps every morning before I'd made coffee. Gmail, Google Calendar, a task manager, Obsidian, a news feed. Not because each one was bad. They're all fine on their own. But none of them talked to the others. The context was spread across five places.

I wanted one place I could type "what's on today?" and get back my emails, meetings, and tasks in a single message. Not a new app. Not another dashboard. Just a chat I already had open.

Telegram was obvious. I have it on my phone and check it more than email. If I could add a bot to my own chat, I'd have something that lives in a place I'm already looking.

Finding OpenClaw

I looked at building on grammY first. It's a good library. Well documented, sensible API. But once I had the bot responding, I still needed to write the Gmail integration, the Calendar integration, and figure out how the model would decide which one to call. That's a small framework, not a simple weekend project.

Then I found OpenClaw. It's an open-source AI agent framework that runs in Docker. The pitch that sold me: capabilities are "skill" files. A skill is a markdown file with a prompt and metadata describing what it does and which APIs it can access. Drop a skill in the folder, the system hot-reloads in under a second. No code, no deploys. You iterate by editing a text file.
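To make that concrete, a skill file might look something like this. This is a hypothetical sketch, not OpenClaw's actual schema; the frontmatter field names (`name`, `description`, `apis`) are assumptions for illustration:

```markdown
---
name: gmail-summary
description: Summarize unread email from the last 24 hours
apis:
  - gmail.readonly
---

When the user asks about their email, fetch unread messages from
the last day. For each one, return the sender, the subject, and a
one-line summary. Group newsletters together at the end.
```

The point is the shape: metadata telling the agent when the skill applies and what it may touch, then a prompt describing the behavior. Editing the prompt is the whole development loop.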

Phase 1 was just: does the bot respond on Telegram? Once that worked, I added Gmail as its own skill, then Calendar as another. Each one testable independently before adding the next.

How the configuration looks

Here's a simplified version of the Docker Compose setup that gets the whole thing running:

services:
  qumio:
    image: openclaw/agent:latest
    container_name: qumio
    restart: unless-stopped
    volumes:
      - ./skills:/app/skills
      - ./data:/app/data
    environment:
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GOOGLE_CLIENT_ID=${GOOGLE_CLIENT_ID}
      - GOOGLE_CLIENT_SECRET=${GOOGLE_CLIENT_SECRET}
    ports:
      - "3001:3001"

One container, one compose file, no second service. The skills volume is the main working directory. New capabilities land there.
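The `${...}` placeholders above are read from a `.env` file sitting next to the compose file. A sketch, with placeholder values only:

```
TELEGRAM_BOT_TOKEN=123456:your-bot-token
OPENAI_API_KEY=sk-your-key
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret
```

Keeping secrets in `.env` (and out of version control) means the compose file itself can be committed and shared as-is.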

The OAuth problem

Getting Google OAuth working locally took longer than everything else combined. The browser consent flow needs to redirect somewhere. When you're running in Docker on WSL2, "somewhere" is surprisingly hard to define.

The fix was straightforward once I understood it: proxy the redirect from the container's port back to localhost on the host machine, then let the OAuth flow complete there. Small config change. Hours of debugging to find it.
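The host side of that flow is roughly a throwaway HTTP server listening on the port the redirect points at, capturing the `code` query parameter that Google appends. A minimal sketch, assuming a redirect URI like `http://localhost:3001/oauth/callback` (the path and port are my assumptions, not OpenClaw's actual handler):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


def extract_auth_code(path):
    """Pull the `code` query parameter out of an OAuth redirect path."""
    params = parse_qs(urlparse(path).query)
    return params.get("code", [None])[0]


class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Google redirects the browser here with ?code=...&state=...
        code = extract_auth_code(self.path)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab." if code else b"No code received.")
        self.server.auth_code = code  # hand off to the token-exchange step


if __name__ == "__main__":
    # Handle exactly one request: the consent-screen redirect.
    HTTPServer(("localhost", 3001), CallbackHandler).handle_request()
```

Once the code lands on localhost, the container can exchange it for tokens; the proxying is only needed to get the browser's redirect across the WSL2 boundary.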

The lesson: OAuth docs assume a public HTTPS endpoint. When you're building a local-only personal tool, nothing in the docs applies directly. You're on your own.

What works now

As of March 2026, the Telegram bot is running. I can ask for my emails, today's calendar, and search my Obsidian notes. Each skill is a separate file I can edit and test in isolation. The daily brief and task integration are next.

It's not a product. It's a personal tool. That distinction matters for how you build it. Single user, flat files, polling instead of webhooks, no scaling considerations. The simplest thing that works.
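"Polling instead of webhooks" means a long-poll loop against Telegram's `getUpdates` endpoint rather than exposing a public URL for Telegram to call. OpenClaw presumably handles this internally, but the mechanic is simple enough to sketch; the offset bookkeeping is the only subtle part (you acknowledge processed updates by asking for IDs above the highest one seen):

```python
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"


def next_offset(updates, current):
    """Advance the getUpdates offset past every update already handled."""
    if not updates:
        return current
    return max(u["update_id"] for u in updates) + 1


def poll(token, offset):
    """One long-poll request; blocks up to 30s waiting for new updates."""
    url = API.format(token=token, method="getUpdates") + f"?timeout=30&offset={offset}"
    with urllib.request.urlopen(url, timeout=40) as resp:
        return json.load(resp)["result"]


if __name__ == "__main__":
    offset = 0
    while True:
        updates = poll("YOUR_BOT_TOKEN", offset)  # placeholder token
        for u in updates:
            print(u.get("message", {}).get("text"))
        offset = next_offset(updates, offset)
```

For a single user this is strictly simpler than webhooks: no public endpoint, no TLS certificate, no inbound firewall rule. The cost is a little latency, which doesn't matter here.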