# TazPod Nomadic Workflow
TazPod is designed so the operator can rebuild the execution environment on another host without carrying plaintext credentials or a long manual checklist.
## What Actually Has to Exist
A fresh host only needs:
- Docker installed and usable
- a checkout of the project repository containing `.tazpod/`
- a valid AWS login path for S3-backed vault recovery
The durable anchor is not the container. The durable anchor is the encrypted vault object.
## Local Project Initialization
`initProject()` in `cmd/tazpod/init.go` creates:

- `.tazpod/`
- `.tazpod/vault/`
- `.tazpod/config.yaml`
Defaults include:
- image: `tazzo/tazpod-ai:latest`
- container name: `<current-folder>-lab`
- user: `tazpod`
- `ghost_mode: true`
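Assembled into a config file, those defaults might look like the sketch below. Only the values come from the list above; the exact key names in the real `config.yaml` are assumptions.

```yaml
# .tazpod/config.yaml — sketch of the generated defaults.
# Key names are illustrative; values match the documented defaults.
image: tazzo/tazpod-ai:latest
container_name: my-project-lab   # derived from <current-folder>-lab
user: tazpod
ghost_mode: true
```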
## Smart Recovery Path
The real nomadic recovery path is encoded in `smartEntry()` in `cmd/tazpod/lifecycle.go`.
### Case 1: Local encrypted vault exists
If `.tazpod/vault/vault.tar.aes` is present:

- `ensureContainerUp()` creates or starts the container
- the CLI offers `tazpod unlock`
- after unlock, it enters the shell
- on shell exit, it auto-runs `lock()`
### Case 2: Local encrypted vault is missing
If the local vault is absent, `smartEntry()` offers the bootstrap path:

1. `tazpod login`
2. `tazpod pull vault`
3. `tazpod unlock`
4. enter the shell
This is why TazPod is actually nomadic in practice: recovery is built into the default entry path instead of living only in a separate manual runbook.
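The two-way decision above can be sketched as a small Go program. This is a minimal illustration, not the actual `smartEntry()` implementation: the `recoverySteps` function name and the step strings are assumptions standing in for the real calls to `ensureContainerUp()`, `lock()`, and the `tazpod` subcommands.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// recoverySteps mirrors the smartEntry() decision described above:
// a present local vault goes straight to unlock; a missing one
// triggers the login → pull → unlock bootstrap path.
func recoverySteps(projectDir string) []string {
	vault := filepath.Join(projectDir, ".tazpod", "vault", "vault.tar.aes")
	if _, err := os.Stat(vault); err == nil {
		// Case 1: local encrypted vault exists.
		return []string{"ensureContainerUp", "tazpod unlock", "shell", "lock"}
	}
	// Case 2: local vault missing — bootstrap from S3.
	return []string{"tazpod login", "tazpod pull vault", "tazpod unlock", "shell"}
}

func main() {
	dir, err := os.MkdirTemp("", "tazpod-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Fresh host: no local vault yet, so the bootstrap path is offered.
	fmt.Println(recoverySteps(dir))

	// Simulate a recovered vault and re-check.
	vaultDir := filepath.Join(dir, ".tazpod", "vault")
	if err := os.MkdirAll(vaultDir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(vaultDir, "vault.tar.aes"), []byte("ciphertext"), 0o600); err != nil {
		panic(err)
	}
	fmt.Println(recoverySteps(dir))
}
```

The point of the single entry function is that both the happy path and the disaster-recovery path come out of the same check, which is what makes the workflow nomadic by default.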
## S3 Contract
From `internal/utils/s3.go` and `cmd/tazpod/sync.go`:
- default bucket: `tazlab-storage`
- default region: `eu-central-1`
- object key: `tazpod/vault/vault.tar.aes`
`pullVault()` downloads that object into the project-local `.tazpod/vault/vault.tar.aes`.
`pushVaultInternal()` uploads the same file back to S3.
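Composed together, the defaults above identify exactly one S3 object. The constant names in this sketch are illustrative, not the actual identifiers in `internal/utils/s3.go`:

```go
package main

import "fmt"

// Defaults as documented; names here are assumptions.
const (
	defaultBucket = "tazlab-storage"
	defaultRegion = "eu-central-1"
	vaultKey      = "tazpod/vault/vault.tar.aes"
)

// vaultObjectURI builds the single S3 URI that pullVault()
// downloads from and pushVaultInternal() uploads to.
func vaultObjectURI() string {
	return fmt.Sprintf("s3://%s/%s", defaultBucket, vaultKey)
}

func main() {
	fmt.Println(vaultObjectURI())
}
```

Because both pull and push target the same key, the bucket holds one authoritative encrypted vault per project rather than a history of copies.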
## Why This Matters
The container is disposable, the host is replaceable, but the operator identity can still survive because the encrypted vault remains portable and the recovery sequence is already encoded in the CLI.
## See Also
- Runtime: TazPod Architecture
- Vault: TazPod Vault Security
- Reference: TazPod CLI Reference
- Hub: TazPod Entity