Migrate another batch of services to NFS.

CLAUDE.md
@@ -59,13 +59,19 @@ NixOS cluster configuration using flakes. Homelab infrastructure with Nomad/Cons
 
 ## Migration Status
 
-**Phase**: 2 complete, ready for Phase 3
-**Current**: Migrating GlusterFS → NFS
-**Next**: Copy data, update Nomad jobs, remove GlusterFS
+**Phase**: 4 in progress (20/35 services migrated)
+**Current**: Migrating services from GlusterFS → NFS
+**Next**: Finish migrating remaining services, update host volumes, remove GlusterFS
 **Later**: Convert fractal to NixOS (deferred)
 
 See `docs/MIGRATION_TODO.md` for detailed checklist.
 
+**IMPORTANT**: When working on migration tasks:
+1. Always update `docs/MIGRATION_TODO.md` after completing each service migration
+2. Update both the individual service checklist AND the summary counts at the bottom
+3. Pattern: `/data/compute/appdata/foo` → `/data/services/foo` (NOT `/data/services/appdata/foo`!)
+4. Migration workflow per service: stop → copy data → edit config → start → update MIGRATION_TODO.md
 
 ## Common Tasks
 
 **Deploy a host**: `deploy -s '.#hostname'`
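The four migration rules added above can be sketched as one shell pass per service. This is a hedged sketch: the service name `foo` and the `jobs/foo.hcl` path are placeholders, not names from the repo, and the cluster-touching commands are left as comments since they need a live Nomad cluster and the NFS mount.

```shell
# Per-service migration sketch (placeholders: service "foo", jobs/ layout).
# Cluster steps, commented out because they need live nomad/rsync targets:
#   nomad job stop foo
#   rsync -a /data/compute/appdata/foo/ /data/services/foo/
#   nomad job run jobs/foo.hcl
# The path rewrite from rule 3 -- note `appdata` is dropped, not kept:
echo "/data/compute/appdata/foo" | sed 's|/data/compute/appdata/|/data/services/|'
# -> /data/services/foo
```

The same `sed` expression works in-place on a job spec (`sed -i ... jobs/foo.hcl`) before `nomad job run`.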
docs/MIGRATION_TODO.md
@@ -60,7 +60,7 @@ See [CLUSTER_REVAMP.md](./CLUSTER_REVAMP.md) for detailed procedures.
 
 ### Monitoring Stack (HIGH)
 - [x] prometheus.hcl - migrated to `/data/services`
-- [ ] grafana.hcl - still using `/data/compute`
+- [x] grafana.hcl - migrated to `/data/services` (2025-10-23)
 - [x] loki.hcl - migrated to `/data/services`
 - [ ] vector.hcl - needs update to remove glusterfs log collection (line 26, 101-109)
 
@@ -70,32 +70,32 @@ See [CLUSTER_REVAMP.md](./CLUSTER_REVAMP.md) for detailed procedures.
 
 ### Web Applications (HIGH-MEDIUM)
 - [x] wordpress.hcl - migrated to `/data/services`
-- [ ] gitea.hcl - still using `/data/compute`
+- [x] gitea.hcl - migrated to `/data/services` (2025-10-23)
 - [ ] wiki.hcl - uses `appdata` volume (points to `/data/compute/appdata`)
 - [x] plausible.hcl - stateless, no changes needed
 - [ ] tiddlywiki.hcl - uses `appdata` volume (points to `/data/compute/appdata`)
 
 ### Web Applications (LOW, may be deprecated)
-- [ ] vikunja.hcl - still using `/data/compute` (check if still needed)
+- [x] vikunja.hcl - migrated to `/data/services` (2025-10-23, not running)
 
 ### Media Stack (MEDIUM)
 - [x] media.hcl - migrated to `/data/services`
 
 ### Utility Services (MEDIUM-LOW)
 - [x] evcc.hcl - migrated to `/data/services`
-- [ ] weewx.hcl - still using `/data/compute`
+- [x] weewx.hcl - migrated to `/data/services` (2025-10-23)
 - [x] code-server.hcl - migrated to `/data/services`
 - [x] beancount.hcl - migrated to `/data/services`
 - [x] adminer.hcl - stateless, no changes needed
 - [x] maps.hcl - migrated to `/data/services`
 - [x] netbox.hcl - migrated to `/data/services`
-- [ ] farmos.hcl - still using `/data/compute`
+- [x] farmos.hcl - migrated to `/data/services` (2025-10-23)
 - [x] urbit.hcl - migrated to `/data/services`
-- [ ] webodm.hcl - still using `/data/compute`
+- [x] webodm.hcl - migrated to `/data/services` (2025-10-23, not running)
 - [x] velutrack.hcl - migrated to `/data/services`
 - [ ] resol-gateway.hcl - uses `code` volume (points to `/data/compute/code`)
 - [ ] igsync.hcl - uses `appdata` volume (points to `/data/compute/appdata`)
-- [ ] jupyter.hcl - still using `/data/compute`
+- [x] jupyter.hcl - migrated to `/data/services` (2025-10-23, not running)
 - [x] whoami.hcl - stateless test service, no changes needed
 
 ### Backup Jobs (HIGH)
@@ -144,18 +144,18 @@ See [CLUSTER_REVAMP.md](./CLUSTER_REVAMP.md) for detailed procedures.
 
 ---
 
-**Last updated**: 2025-10-23
-**Current phase**: Phase 4 in progress (19/35 services migrated, 11 still need migration, 4 stateless)
+**Last updated**: 2025-10-23 21:16
+**Current phase**: Phase 4 in progress (26/35 services migrated, 4 host-volume services + config updates remaining, 4 stateless)
 **Note**: Phase 1 (fractal NixOS conversion) deferred until after GlusterFS migration is complete
 
 ## Migration Summary
 
-**Already migrated to `/data/services` (19 services):**
-mysql, mysql-backup, postgres, postgres-backup, redis, clickhouse, prometheus, loki, unifi, wordpress, traefik, evcc, netbox, urbit, code-server, beancount, velutrack, maps, media
+**Already migrated to `/data/services` (26 services):**
+mysql, mysql-backup, postgres, postgres-backup, redis, clickhouse, prometheus, grafana, loki, unifi, wordpress, gitea, traefik, evcc, weewx, netbox, farmos, webodm, jupyter, vikunja, urbit, code-server, beancount, velutrack, maps, media
 
-**Still need migration (11 services):**
-- Direct `/data/compute` references: grafana, gitea, vikunja, weewx, farmos, webodm, jupyter
-- Host volume references: wiki (appdata), tiddlywiki (appdata), igsync (appdata), resol-gateway (code)
+**Still need migration (4 services using host volumes):**
+- wiki (appdata), tiddlywiki (appdata), igsync (appdata), resol-gateway (code)
+- These require updating common/nomad.nix host_volume definitions first
 
 **Stateless/no changes needed (4 services):**
 authentik, adminer, plausible, whoami
farmos.hcl
@@ -19,8 +19,8 @@ job "farmos" {
        image = "gitea.v.paler.net/ppetru/farmos:latest"
        ports = ["http"]
        volumes = [
-         "/data/compute/appdata/farmos/sites:/opt/drupal/web/sites",
-         "/data/compute/appdata/farmos/keys:/opt/drupal/keys",
+         "/data/services/farmos/sites:/opt/drupal/web/sites",
+         "/data/services/farmos/keys:/opt/drupal/keys",
        ]
      }
 
gitea.hcl
@@ -25,8 +25,8 @@ job "gitea" {
          "ssh",
        ]
        volumes = [
-         "/data/compute/appdata/gitea/data:/var/lib/gitea",
-         "/data/compute/appdata/gitea/config:/etc/gitea",
+         "/data/services/gitea/data:/var/lib/gitea",
+         "/data/services/gitea/config:/etc/gitea",
          "/etc/timezone:/etc/timezone:ro",
          "/etc/localtime:/etc/localtime:ro",
        ]
grafana.hcl
@@ -14,7 +14,7 @@ job "grafana" {
      config {
        image = "grafana/grafana-enterprise:latest"
        ports = [ "http" ]
-       volumes = [ "/data/compute/appdata/grafana:/var/lib/grafana" ]
+       volumes = [ "/data/services/grafana:/var/lib/grafana" ]
      }
 
      env {
jupyter.hcl
@@ -20,7 +20,7 @@ job "jupyter" {
        ports = ["http"]
 
        volumes = [
-         "/data/compute/appdata/jupyter:/home/jovyan/work",
+         "/data/services/jupyter:/home/jovyan/work",
        ]
 
        command = "start-notebook.py"
vikunja.hcl
@@ -19,9 +19,9 @@ job "vikunja" {
        image = "vikunja/vikunja:latest"
        ports = ["http"]
        volumes = [
-         "/data/compute/appdata/vikunja/config.yml:/app/vikunja/config.yml:ro",
-         "/data/compute/appdata/vikunja/db:/db",
-         "/data/compute/appdata/vikunja/files:/app/vikunja/files",
+         "/data/services/vikunja/config.yml:/app/vikunja/config.yml:ro",
+         "/data/services/vikunja/db:/db",
+         "/data/services/vikunja/files:/app/vikunja/files",
        ]
      }
 
@@ -56,7 +56,7 @@ job "vikunja" {
        image = "typesense/typesense:27.1"
        ports = ["http"]
        volumes = [
-         "/data/compute/appdata/vikunja/typesense:/data",
+         "/data/services/vikunja/typesense:/data",
        ]
      }
 
webodm.hcl
@@ -33,7 +33,7 @@ job "odm" {
        ports = ["ui"]
        command = "/webodm/start.sh"
        volumes = [
-         "/data/compute/appdata/webodm:/webodm/app/media",
+         "/data/services/webodm:/webodm/app/media",
          "local/local_settings.py:/webodm/webodm/local_settings.py:ro",
        ]
      }
@@ -136,7 +136,7 @@ EOH
        command = "/webodm/worker.sh"
        args = ["start"]
        volumes = [
-         "/data/compute/appdata/webodm:/webodm/app/media",
+         "/data/services/webodm:/webodm/app/media",
          "local/local_settings.py:/webodm/webodm/local_settings.py:ro",
        ]
      }
weewx.hcl
@@ -23,8 +23,8 @@ job "weewx" {
        # to be able to receive UDP broadcast packets from the weatherlink
        network_mode = "host"
        volumes = [
-         "/data/compute/appdata/weewx/etc:/etc/weewx",
-         "/data/compute/appdata/weewx/html:/var/www/html",
+         "/data/services/weewx/etc:/etc/weewx",
+         "/data/services/weewx/html:/var/www/html",
        ]
      }
 
@@ -46,7 +46,7 @@ job "weewx" {
          "-enable-health",
        ]
 
-       volumes = [ "/data/compute/appdata/weewx/html:/srv/http" ]
+       volumes = [ "/data/services/weewx/html:/srv/http" ]
      }
    }
 
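After a batch like this one, a repo-wide audit confirms nothing still mounts the old GlusterFS path. A minimal sketch, assuming GNU grep (for `--include`) and that the `.hcl` job specs live under the current directory:

```shell
# List job specs still referencing the old GlusterFS mount; prints a note
# instead when none are left. Assumes GNU grep and .hcl files under the
# working directory.
grep -rl '/data/compute' --include='*.hcl' . || echo 'no /data/compute references left'
```

Running this before updating the MIGRATION_TODO.md summary counts keeps the "still need migration" list honest.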