Prompts as Software
Qumio's entire capability system is text files. No code, no deploys, no builds. Just prompts that hot-reload in under a second.

Every feature in Qumio is a "skill." A skill is a markdown file that describes what the feature does and which APIs it can call. No Python. No JavaScript. No compiled code at all.
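A skill file might look something like this. This is a hypothetical sketch of the pattern described here, not Qumio's actual format; the section names and the `calendar.list_events` API are illustrative:

```markdown
# Skill: Today's Calendar

## Purpose
Answer questions about the user's schedule for the current day.

## Triggers
Messages like "what's on my calendar today" or "am I free this afternoon".

## Allowed APIs
- calendar.list_events(date)

## Instructions
Fetch today's events, sort them chronologically, and reply with a short
bulleted list: start time, title, and location if present.
```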
When Qumio gets a message like "what's on my calendar today," the framework reads the skill files, picks the one that matches, and lets the model execute it. Adding a new feature means creating a new text file and saving it. The framework watches the folder and picks up changes in under a second.
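The read-match-reload loop can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Qumio's code: it polls file mtimes instead of using OS file-watching, and it dispatches with naive word overlap where the real framework lets the model choose:

```python
import os

class SkillLoader:
    """Hot-reloads markdown skill files from a folder by polling mtimes.
    A sketch of the pattern described above, not Qumio's implementation."""

    def __init__(self, folder):
        self.folder = folder
        self.skills = {}   # filename -> skill text
        self.mtimes = {}   # filename -> last seen modification time

    def refresh(self):
        """Re-read any skill file that is new or has changed on disk."""
        for name in os.listdir(self.folder):
            if not name.endswith(".md"):
                continue
            path = os.path.join(self.folder, name)
            mtime = os.path.getmtime(path)
            if self.mtimes.get(name) != mtime:   # new or edited since last look
                with open(path) as f:
                    self.skills[name] = f.read()
                self.mtimes[name] = mtime

    def pick(self, message):
        """Naive dispatch: return the skill sharing the most words with
        the message (the real system would ask the model instead)."""
        words = set(message.lower().split())
        best, best_score = None, 0
        for name, text in self.skills.items():
            score = len(words & set(text.lower().split()))
            if score > best_score:
                best, best_score = name, score
        return best
```

Calling `refresh()` before each incoming message is what makes edits live immediately: a saved file shows a new mtime and gets re-read, with no restart in between.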
I built five skills this way: email digest, today's calendar, quick notes to Obsidian, email-to-Obsidian, and news summaries. Each one is a separate file I can open in any text editor.
What iteration looks like
When the email digest wasn't grouping threads correctly, I opened the skill file, changed two sentences in the prompt, saved, and tested again immediately. No restart, no rebuild, no redeploy. The change was live before I switched back to the Telegram window.
The tradeoff is that debugging is different. There's no stack trace when something goes wrong. The model just does something unexpected and you have to figure out which part of the prompt it misread. Sometimes the fix is adding a word. Sometimes it's restructuring a whole paragraph.
Where it breaks down
Prompt-only architecture has a ceiling. The 27B-parameter local model I use starts ignoring parts of any skill longer than about 30 lines of instructions. The news skill needed three rewrites before the prompt was short enough for the model to follow reliably.
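Given that ceiling, a tiny pre-flight check can flag oversized skills before the model silently drops instructions. A hypothetical helper; the ~30-line budget comes from the observation above:

```python
def check_skill_length(text, max_lines=30):
    """Return (ok, line_count) for a skill's non-blank lines, against the
    ~30-line budget the local model follows reliably (assumed, per above)."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    return len(lines) <= max_lines, len(lines)
```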
For a personal tool with five features, it works well. For something larger, you'd probably want the critical path in actual code with prompts handling the fuzzy parts.