Compare commits


8 Commits

SHA1 Message Date
7733a1be46 yet another replication fix. 2025-11-04 19:57:52 +00:00
a5df98bc5a Update docs. 2025-11-04 19:08:27 +00:00
fb9b0dd2f5 Move NFS server to sparky. 2025-11-04 19:00:18 +00:00
0dc214069c Fix curl-induced failures. 2025-11-04 18:59:50 +00:00
a6c4be9530 Use clone source for btrfs send. 2025-11-04 17:51:34 +00:00
6e338e6d65 Stop replicating to c1. 2025-11-04 14:03:49 +00:00
41f16fa0b8 Make sparky a standby again. 2025-11-04 12:58:34 +00:00
1b05728817 Switch to Pocket ID. 2025-11-04 12:58:15 +00:00
23 changed files with 95 additions and 90 deletions

View File

@@ -8,21 +8,15 @@ NixOS cluster configuration using flakes. Homelab infrastructure with Nomad/Cons
 ├── common/
 │ ├── global/ # Applied to all hosts (backup, sops, users, etc.)
 │ ├── minimal-node.nix # Base (ssh, user, boot, impermanence)
-│ ├── cluster-member.nix # Consul + storage clients (NFS/CIFS/GlusterFS)
+│ ├── cluster-member.nix # Consul agent + storage mounts (NFS/CIFS)
 │ ├── nomad-worker.nix # Nomad client (runs jobs) + Docker + NFS deps
 │ ├── nomad-server.nix # Enables Consul + Nomad server mode
 │ ├── cluster-tools.nix # Just CLI tools (nomad, wander, damon)
 │ ├── workstation-node.nix # Dev tools (wget, deploy-rs, docker, nix-ld)
 │ ├── desktop-node.nix # Hyprland + GUI environment
-│ ├── nfs-services-server.nix # NFS server + btrfs replication (zippy)
-│ └── nfs-services-standby.nix # NFS standby + receive replication (c1)
-├── hosts/
-│ ├── c1/, c2/, c3/ # Cattle nodes (quorum + workers)
-│ ├── zippy/ # Primary storage + NFS server + worker (not quorum)
-│ ├── chilly/ # Home Assistant VM + cluster member (Consul only)
-│ ├── sparky/ # Desktop + cluster member (Consul only)
-│ ├── fractal/ # (Proxmox, will become NixOS storage node)
-│ └── sunny/ # (Standalone ethereum node, not in cluster)
+│ ├── nfs-services-server.nix # NFS server + btrfs replication
+│ └── nfs-services-standby.nix # NFS standby + receive replication
+├── hosts/ # Host configs - check imports for roles
 ├── docs/
 │ ├── CLUSTER_REVAMP.md # Master plan for architecture changes
 │ ├── MIGRATION_TODO.md # Tracking checklist for migration
@@ -33,17 +27,15 @@ NixOS cluster configuration using flakes. Homelab infrastructure with Nomad/Cons
 ## Current Architecture
 ### Storage Mounts
-- `/data/services` - NFS from `data-services.service.consul` (zippy primary, c1 standby)
-- `/data/media` - CIFS from fractal (existing, unchanged)
-- `/data/shared` - CIFS from fractal (existing, unchanged)
+- `/data/services` - NFS from `data-services.service.consul` (check nfs-services-server.nix for primary)
+- `/data/media` - CIFS from fractal
+- `/data/shared` - CIFS from fractal
-### Hosts
-- **c1, c2, c3**: Cattle nodes, run most workloads, Nomad/Consul quorum members
-- **zippy**: Primary NFS server, runs workloads (affinity), NOT quorum, replicates to c1 every 5min
-- **chilly**: Home Assistant VM, cluster member (Consul agent + CLI tools), no workloads
-- **sparky**: Desktop/laptop, cluster member (Consul agent + CLI tools), no workloads
-- **fractal**: Storage node (Proxmox/ZFS), will join quorum after GlusterFS removed
-- **sunny**: Standalone ethereum staking node (not in cluster)
+### Cluster Roles (check hosts/*/default.nix for each host's imports)
+- **Quorum**: hosts importing `nomad-server.nix` (3 expected for consensus)
+- **Workers**: hosts importing `nomad-worker.nix` (run Nomad jobs)
+- **NFS server**: host importing `nfs-services-server.nix` (affinity for direct disk access like DBs)
+- **Standby**: hosts importing `nfs-services-standby.nix` (receive replication)
 ## Config Architecture
@@ -58,19 +50,22 @@ NixOS cluster configuration using flakes. Homelab infrastructure with Nomad/Cons
 - `workstation-node.nix` - Dev tools (deploy-rs, docker, nix-ld, emulation)
 - `desktop-node.nix` - Extends workstation + Hyprland/GUI
-**Host composition examples**:
-- c1/c2/c3: `cluster-member + nomad-worker + nomad-server` (quorum + runs jobs)
-- zippy: `cluster-member + nomad-worker` (runs jobs, not quorum)
-- chilly/sparky: `cluster-member + cluster-tools` (Consul + CLI only)
+**Composition patterns**:
+- Quorum member: `cluster-member + nomad-worker + nomad-server`
+- Worker only: `cluster-member + nomad-worker`
+- CLI only: `cluster-member + cluster-tools` (Consul agent, no Nomad service)
+- NFS primary: `cluster-member + nomad-worker + nfs-services-server`
+- Standalone: `minimal-node` only (no cluster membership)
-**Key insight**: Profiles (workstation/desktop) no longer imply cluster membership. Hosts explicitly declare roles via imports.
+**Key insight**: Profiles (workstation/desktop) don't imply cluster roles. Check imports for actual roles.
 ## Key Patterns
 **NFS Server/Standby**:
-- Primary (zippy): imports `nfs-services-server.nix`, sets `standbys = ["c1"]`
-- Standby (c1): imports `nfs-services-standby.nix`, sets `replicationKeys = [...]`
+- Primary: imports `nfs-services-server.nix`, sets `standbys = [...]`
+- Standby: imports `nfs-services-standby.nix`, sets `replicationKeys = [...]`
 - Replication: btrfs send/receive every 5min, incremental with fallback to full
+- Check host configs for current primary/standby assignments
 **Backups**:
 - Kopia client on all nodes → Kopia server on fractal
@@ -92,7 +87,7 @@ See `docs/MIGRATION_TODO.md` for detailed checklist.
 **Deploy a host**: `deploy -s '.#hostname'`
 **Deploy all**: `deploy`
-**Check replication**: `ssh zippy journalctl -u replicate-services-to-c1.service -f`
+**Check replication**: Check NFS primary host, then `ssh <primary> journalctl -u replicate-services-to-*.service -f`
 **NFS failover**: See `docs/NFS_FAILOVER.md`
 **Nomad jobs**: `services/*.hcl` - service data stored at `/data/services/<service-name>`
@@ -106,8 +101,8 @@ See `docs/MIGRATION_TODO.md` for detailed checklist.
 ## Important Files
 - `common/global/backup.nix` - Kopia backup configuration
-- `hosts/zippy/default.nix` - NFS server config, replication targets
-- `hosts/c1/default.nix` - NFS standby config, authorized replication keys
+- `common/nfs-services-server.nix` - NFS server role (check hosts for which imports this)
+- `common/nfs-services-standby.nix` - NFS standby role (check hosts for which imports this)
 - `flake.nix` - Host definitions, nixpkgs inputs
 ---
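
Since the docs above deliberately stop naming the primary, a small helper to resolve it from the repo can be handy. A minimal sketch (not part of this change), assuming the hosts/*/default.nix layout described above and that the directory name under hosts/ matches the SSH hostname:

#!/usr/bin/env bash
# Resolve the current NFS primary by finding which host imports nfs-services-server.nix,
# then follow its replication units (one replicate-services-to-<standby>.service per standby).
set -euo pipefail
primary=$(grep -l 'nfs-services-server.nix' hosts/*/default.nix | head -n1 | xargs dirname | xargs basename)
echo "NFS primary: ${primary}"
ssh "${primary}" journalctl -u 'replicate-services-to-*.service' -f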

View File

@@ -133,13 +133,15 @@ in
echo "Attempting incremental send from $(basename $PREV_LOCAL) to ${standby}" echo "Attempting incremental send from $(basename $PREV_LOCAL) to ${standby}"
# Try incremental send, if it fails (e.g., parent missing on receiver), fall back to full # Try incremental send, if it fails (e.g., parent missing on receiver), fall back to full
if btrfs send -p "$PREV_LOCAL" "$SNAPSHOT_PATH" | \ # Use -c to help with broken Received UUID chains
if btrfs send -p "$PREV_LOCAL" -c "$PREV_LOCAL" "$SNAPSHOT_PATH" | \
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \ ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \
"btrfs receive /persist/services-standby"; then "btrfs receive /persist/services-standby"; then
echo "Incremental send completed successfully" echo "Incremental send completed successfully"
REPLICATION_SUCCESS=1 REPLICATION_SUCCESS=1
else else
echo "Incremental send failed (likely missing parent on receiver), falling back to full send" echo "Incremental send failed (likely missing parent on receiver), falling back to full send"
# Plain full send without clone source (receiver may have no snapshots)
btrfs send "$SNAPSHOT_PATH" | \ btrfs send "$SNAPSHOT_PATH" | \
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \ ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \
"btrfs receive /persist/services-standby" "btrfs receive /persist/services-standby"
@@ -163,7 +165,7 @@ in
 SNAPSHOT_COUNT=$(ls -1d /persist/services@* 2>/dev/null | wc -l)
 # Push metrics to Prometheus pushgateway
-cat <<METRICS | curl --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_replication/instance/${standby}
+cat <<METRICS | curl -s --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_replication/instance/${standby} || true
 # TYPE nfs_replication_last_success_timestamp gauge
 nfs_replication_last_success_timestamp $END_TIME
 # TYPE nfs_replication_duration_seconds gauge
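
Unwrapped from the surrounding Nix module for readability, the send path after this change behaves roughly like the sketch below. $PREV_LOCAL, $SNAPSHOT_PATH and $SSH_KEY come from the real script, and $STANDBY stands in for the Nix-interpolated ${standby}; this is an illustration, not the literal module text:

# Incremental send: -p names the parent snapshot, and -c additionally offers it
# as a clone source, which helps when the Received UUID chain on the standby is broken.
if btrfs send -p "$PREV_LOCAL" -c "$PREV_LOCAL" "$SNAPSHOT_PATH" |
    ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new "root@$STANDBY" \
        "btrfs receive /persist/services-standby"; then
    echo "Incremental send completed successfully"
else
    # Fall back to a plain full send; the receiver may hold no snapshots at all,
    # so neither a parent nor a clone source can be assumed.
    btrfs send "$SNAPSHOT_PATH" |
        ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new "root@$STANDBY" \
            "btrfs receive /persist/services-standby"
fi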

View File

@@ -54,7 +54,7 @@ in
 SNAPSHOT_COUNT=$(ls -1d /persist/services-standby/services@* 2>/dev/null | wc -l)
 # Push metrics to Prometheus pushgateway
-cat <<METRICS | curl --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_standby_cleanup/instance/$(hostname)
+cat <<METRICS | curl -s --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_standby_cleanup/instance/$(hostname) || true
 # TYPE nfs_standby_snapshot_count gauge
 nfs_standby_snapshot_count $SNAPSHOT_COUNT
 # TYPE nfs_standby_cleanup_last_run_timestamp gauge
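
The same hardening from the "Fix curl-induced failures." commit appears in both metrics pushes above: `-s` drops curl's progress output and `|| true` keeps a pushgateway outage from aborting the unit. A standalone sketch of the pattern, with placeholder job and metric names, assuming the script otherwise runs with `set -e` (which is what makes an unguarded curl failure fatal):

# Best-effort metrics push: never let monitoring take down the job it monitors.
# "example_job" and the metric name below are placeholders, not the real ones.
cat <<METRICS | curl -s --data-binary @- "http://pushgateway.service.consul:9091/metrics/job/example_job/instance/$(hostname)" || true
# TYPE example_last_run_timestamp gauge
example_last_run_timestamp $(date +%s)
METRICS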

View File

@@ -1,5 +1,6 @@
 * remote docker images used, can't come up if internet is down
 * local docker images pulled from gitea, can't come up if gitea isn't up (yet)
+* traefik-oidc-auth plugin downloaded from GitHub at startup (cached in /data/services/traefik/plugins-storage)
 * renovate system of some kind
 * vector (or other log ingestion) everywhere, consider moving it off docker if possible
 * monitor backup-persist success/fail

View File

@@ -23,8 +23,8 @@
networking.hostName = "c1"; networking.hostName = "c1";
services.tailscaleAutoconnect.authkey = "tskey-auth-k2nQ771YHM11CNTRL-YVpoumL2mgR6nLPG51vNhRpEKMDN7gLAi"; services.tailscaleAutoconnect.authkey = "tskey-auth-k2nQ771YHM11CNTRL-YVpoumL2mgR6nLPG51vNhRpEKMDN7gLAi";
# NFS standby configuration: accept replication from zippy
nfsServicesStandby.replicationKeys = [ nfsServicesStandby.replicationKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHyTKsMCbwCIlMcC/aopgz5Yfx/Q9QdlWC9jzMLgYFAV root@zippy-replication" "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHyTKsMCbwCIlMcC/aopgz5Yfx/Q9QdlWC9jzMLgYFAV root@zippy-replication"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5s73FSUiysHijWRGYCJY8lCtZkX1DGKAqp2671REDq root@sparky-replication"
]; ];
} }

View File

@@ -5,6 +5,11 @@
 ../../common/global
 ../../common/cluster-member.nix
 ../../common/nomad-worker.nix
+../../common/nfs-services-server.nix
+# To move NFS server role to another host:
+# 1. Follow procedure in docs/NFS_FAILOVER.md
+# 2. Replace above line with: ../../common/nfs-services-standby.nix
+# 3. Add nfsServicesStandby.replicationKeys with the new server's public key
 ./hardware.nix
 ];
@@ -16,4 +21,6 @@
networking.hostName = "sparky"; networking.hostName = "sparky";
services.tailscaleAutoconnect.authkey = "tskey-auth-k6VC79UrzN11CNTRL-rvPmd4viyrQ261ifCrfTrQve7c2FesxrG"; services.tailscaleAutoconnect.authkey = "tskey-auth-k6VC79UrzN11CNTRL-rvPmd4viyrQ261ifCrfTrQve7c2FesxrG";
nfsServicesServer.standbys = [ "c1" ];
} }

View File

@@ -5,12 +5,6 @@
 ../../common/global
 ../../common/cluster-member.nix # Consul + storage clients
 ../../common/nomad-worker.nix # Nomad client (runs jobs)
-# NOTE: zippy is NOT a server - no nomad-server.nix import
-../../common/nfs-services-server.nix # NFS server for /data/services
-# To move NFS server role to another host:
-# 1. Follow procedure in docs/NFS_FAILOVER.md
-# 2. Replace above line with: ../../common/nfs-services-standby.nix
-# 3. Add nfsServicesStandby.replicationKeys with the new server's public key
 ./hardware.nix
 ];
@@ -22,8 +16,4 @@
networking.hostName = "zippy"; networking.hostName = "zippy";
services.tailscaleAutoconnect.authkey = "tskey-auth-ktKyQ59f2p11CNTRL-ut8E71dLWPXsVtb92hevNX9RTjmk4owBf"; services.tailscaleAutoconnect.authkey = "tskey-auth-ktKyQ59f2p11CNTRL-ut8E71dLWPXsVtb92hevNX9RTjmk4owBf";
nfsServicesServer.standbys = [
"c1"
];
} }

View File

@@ -27,7 +27,7 @@ job "adminer" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.adminer.entryPoints=websecure",
-"traefik.http.routers.adminer.middlewares=authentik@file",
+"traefik.http.routers.adminer.middlewares=oidc-auth@file",
 ]
 }
 }

View File

@@ -37,7 +37,7 @@ job "beancount" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.finances.entryPoints=websecure",
-"traefik.http.routers.finances.middlewares=authentik@file",
+"traefik.http.routers.finances.middlewares=oidc-auth@file",
 ]
 }

View File

@@ -49,7 +49,7 @@ job "evcc" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.evcc.entryPoints=websecure",
-"traefik.http.routers.evcc.middlewares=authentik@file",
+"traefik.http.routers.evcc.middlewares=oidc-auth@file",
 ]
 }
 }

View File

@@ -25,19 +25,22 @@ job "grafana" {
 GF_SERVER_ROOT_URL = "https://grafana.v.paler.net"
 GF_AUTH_BASIC_ENABLED = "false"
 GF_AUTH_GENERIC_OAUTH_ENABLED = "true"
-GF_AUTH_GENERIC_OAUTH_NAME = "authentik"
-GF_AUTH_GENERIC_OAUTH_CLIENT_ID = "E78NG1AZeW6FaAox0mUhaTSrHeqFgNkWG12My2zx"
-GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET = "N7u2RfFZ5KVLdEkhlpUTzymGxeK5rLo9SYZLSGGBXJDr46p5g5uv1qZ4Jm2d1rP4aJX4PSzauZlxHhkG2byiBFMbdo6K742KXcEimZsOBFiNKeWOHxofYerBnPuoECQW"
-GF_AUTH_GENERIC_OAUTH_SCOPES = "openid profile email offline_access"
-GF_AUTH_GENERIC_OAUTH_AUTH_URL = "https://authentik.v.paler.net/application/o/authorize/"
-GF_AUTH_GENERIC_OAUTH_TOKEN_URL = "https://authentik.v.paler.net/application/o/token/"
-GF_AUTH_GENERIC_OAUTH_API_URL = "https://authentik.v.paler.net/application/o/userinfo/"
-GF_AUTH_SIGNOUT_REDIRECT_URL = "https://authentik.v.paler.net/application/o/grafana/end-session/"
+GF_AUTH_GENERIC_OAUTH_NAME = "Pocket ID"
+GF_AUTH_GENERIC_OAUTH_CLIENT_ID = "99e44cf2-ecc6-4e82-8882-129c017f8a4a"
+GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET = "NjJ9Uro4MK7siqLGSmkiQmjFuESulqQN"
+GF_AUTH_GENERIC_OAUTH_SCOPES = "openid profile email groups"
+GF_AUTH_GENERIC_OAUTH_AUTH_URL = "https://pocket-id.v.paler.net/authorize"
+GF_AUTH_GENERIC_OAUTH_TOKEN_URL = "https://pocket-id.v.paler.net/api/oidc/token"
+GF_AUTH_GENERIC_OAUTH_API_URL = "https://pocket-id.v.paler.net/api/oidc/userinfo"
+GF_AUTH_SIGNOUT_REDIRECT_URL = "https://pocket-id.v.paler.net/logout"
 # Optionally enable auto-login (bypasses Grafana login screen)
 GF_AUTH_OAUTH_AUTO_LOGIN = "true"
 # Optionally map user groups to Grafana roles
-GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH = "contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
+GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH = "contains(groups[*], 'admins') && 'Admin' || contains(groups[*], 'residents') && 'Editor' || 'Viewer'"
 GF_AUTH_GENERIC_OAUTH_USE_REFRESH_TOKEN = "true"
+GF_AUTH_GENERIC_OAUTH_EMAIL_ATTRIBUTE_PATH = "email"
+GF_AUTH_GENERIC_OAUTH_LOGIN_ATTRIBUTE_PATH = "preferred_username"
+GF_AUTH_GENERIC_OAUTH_NAME_ATTRIBUTE_PATH = "name"
 #GF_LOG_LEVEL = "debug"

View File

@@ -38,7 +38,7 @@ job "jupyter" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.jupyter.entryPoints=websecure",
-"traefik.http.routers.jupyter.middlewares=authentik@file",
+"traefik.http.routers.jupyter.middlewares=oidc-auth@file",
 ]
 }
 }

View File

@@ -126,7 +126,7 @@ EOH
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.loki.entryPoints=websecure",
-"traefik.http.routers.loki.middlewares=authentik@file",
+"traefik.http.routers.loki.middlewares=oidc-auth@file",
 "metrics",
 ]
 }

View File

@@ -44,7 +44,7 @@ job "media" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.radarr.entryPoints=websecure",
-"traefik.http.routers.radarr.middlewares=authentik@file",
+"traefik.http.routers.radarr.middlewares=oidc-auth@file",
 ]
 }
 }
@@ -78,7 +78,7 @@ job "media" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.sonarr.entryPoints=websecure",
-"traefik.http.routers.sonarr.middlewares=authentik@file",
+"traefik.http.routers.sonarr.middlewares=oidc-auth@file",
 ]
 }
 }
@@ -112,7 +112,7 @@ job "media" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.bazarr.entryPoints=websecure",
-"traefik.http.routers.bazarr.middlewares=authentik@file",
+"traefik.http.routers.bazarr.middlewares=oidc-auth@file",
 ]
 }
 }
@@ -148,7 +148,7 @@ job "media" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.plex.entryPoints=websecure",
-"traefik.http.routers.plex.middlewares=authentik@file",
+"traefik.http.routers.plex.middlewares=oidc-auth@file",
 ]
 }
 }
@@ -187,7 +187,7 @@ job "media" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.torrent.entryPoints=websecure",
-"traefik.http.routers.torrent.middlewares=authentik@file",
+"traefik.http.routers.torrent.middlewares=oidc-auth@file",
 ]
 }
 }

View File

@@ -39,10 +39,10 @@ job "netbox" {
 REMOTE_AUTH_ENABLED = "true"
 REMOTE_AUTH_BACKEND = "social_core.backends.open_id_connect.OpenIdConnectAuth"
-SOCIAL_AUTH_OIDC_ENDPOINT = "https://authentik.v.paler.net/application/o/netbox/"
-SOCIAL_AUTH_OIDC_KEY = "XiPhZmWy2mp8hQyHLXCwk7njRNPSLTp2vSHhvWYI"
-SOCIAL_AUTH_OIDC_SECRET = "Kkop2dStx0gN52V1LfPnoxcaemuur6zMsvRnqpWSDe2qSngJVcqWfvFXaNeTbdURRB6TPwjlaNJ5BXR2ChcSmokWGTGargu84Ox1D6M2zXTsfLFj9B149Mhblos4mJL1"
-LOGOUT_REDIRECT_URL = "https://authentik.v.paler.net/application/o/netbox/end-session/"
+SOCIAL_AUTH_OIDC_ENDPOINT = "https://pocket-id.v.paler.net/"
+SOCIAL_AUTH_OIDC_KEY = "6ce1f1bb-d5e8-4ba5-b136-2643dc8bcbcf"
+SOCIAL_AUTH_OIDC_SECRET = "Af7sJvCn9BuijoJXrB5aWv6fTmEqLCAf"
+LOGOUT_REDIRECT_URL = "https://pocket-id.v.paler.net/logout"
 }
 resources {

View File

@@ -91,15 +91,15 @@ job "postgres" {
 PGADMIN_CONFIG_OAUTH2_AUTO_CREATE_USER = "True"
 PGADMIN_CONFIG_OAUTH2_CONFIG = <<EOH
 [{
-'OAUTH2_NAME' : 'authentik',
+'OAUTH2_NAME' : 'pocket-id',
 'OAUTH2_DISPLAY_NAME' : 'SSO',
-'OAUTH2_CLIENT_ID' : 'o4p3B03ayTQ2kpwmM7GswbcfO78JHCTdoZqKJEut',
-'OAUTH2_CLIENT_SECRET' : '7UYHONOCVdjpRMK9Ojwds0qPPpxCiztbIRhK7FJ2IFBpUgN6tnmpEjlkPYimiGKfaHLhy4XE7kQm7Et1Jm0hgyia0iB1VIlp623ckppbwkM6IfpTE1LfEmTMtPrxSngx',
-'OAUTH2_TOKEN_URL' : 'https://authentik.v.paler.net/application/o/token/',
-'OAUTH2_AUTHORIZATION_URL' : 'https://authentik.v.paler.net/application/o/authorize/',
-'OAUTH2_API_BASE_URL' : 'https://authentik.v.paler.net/',
-'OAUTH2_USERINFO_ENDPOINT' : 'https://authentik.v.paler.net/application/o/userinfo/',
-'OAUTH2_SERVER_METADATA_URL' : 'https://authentik.v.paler.net/application/o/pgadmin/.well-known/openid-configuration',
+'OAUTH2_CLIENT_ID' : '180133da-1bd7-4cde-9c18-2f277e962dab',
+'OAUTH2_CLIENT_SECRET' : 'ELYNAfiWSGYJQUXUDOdpm7tTtyLbrs4E',
+'OAUTH2_TOKEN_URL' : 'https://pocket-id.v.paler.net/api/oidc/token',
+'OAUTH2_AUTHORIZATION_URL' : 'https://pocket-id.v.paler.net/authorize',
+'OAUTH2_API_BASE_URL' : 'https://pocket-id.v.paler.net/',
+'OAUTH2_USERINFO_ENDPOINT' : 'https://pocket-id.v.paler.net/api/oidc/userinfo',
+'OAUTH2_SERVER_METADATA_URL' : 'https://pocket-id.v.paler.net/.well-known/openid-configuration',
 'OAUTH2_SCOPE' : 'openid email profile',
 'OAUTH2_ICON' : 'fa-database',
 'OAUTH2_BUTTON_COLOR' : '#00ff00'

View File

@@ -54,7 +54,7 @@ job "prometheus" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.prometheus.entryPoints=websecure",
-"traefik.http.routers.prometheus.middlewares=authentik@file",
+"traefik.http.routers.prometheus.middlewares=oidc-auth@file",
 ]
 check {

View File

@@ -34,7 +34,7 @@ job "traefik" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.api.entryPoints=websecure",
-"traefik.http.routers.api.middlewares=authentik@file",
+"traefik.http.routers.api.middlewares=oidc-auth@file",
 "traefik.http.routers.api.rule=Host(`traefik.v.paler.net`)",
 "traefik.http.routers.api.service=api@internal",
 ]
@@ -63,6 +63,7 @@ job "traefik" {
 volumes = [
 "local/traefik.yml:/etc/traefik/traefik.yml",
 "/data/services/traefik:/config",
+"/data/services/traefik/plugins-storage:/plugins-storage",
 ]
 }
@@ -75,6 +76,12 @@ global:
 #log:
 # level: debug
+experimental:
+  plugins:
+    traefik-oidc-auth:
+      moduleName: "github.com/sevensolutions/traefik-oidc-auth"
+      version: "v0.16.0"
 api:
   dashboard: true

View File

@@ -69,7 +69,7 @@ job "unifi" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.unifi.entryPoints=websecure",
-"traefik.http.routers.unifi.middlewares=authentik@file",
+"traefik.http.routers.unifi.middlewares=oidc-auth@file",
 "traefik.http.services.unifi.loadbalancer.server.scheme=https",
 ]
 }

View File

@@ -39,7 +39,7 @@ job "urbit" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.urbit.entryPoints=websecure",
-"traefik.http.routers.urbit.middlewares=authentik@file",
+"traefik.http.routers.urbit.middlewares=oidc-auth@file",
 ]
 }

View File

@@ -73,7 +73,7 @@ EOH
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.webodm.entryPoints=websecure",
-"traefik.http.routers.webodm.middlewares=authentik@file",
+"traefik.http.routers.webodm.middlewares=oidc-auth@file",
 ]
 }
 }
@@ -97,7 +97,7 @@ EOH
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.clusterodm.entryPoints=websecure",
-"traefik.http.routers.clusterodm.middlewares=authentik@file",
+"traefik.http.routers.clusterodm.middlewares=oidc-auth@file",
 ]
 }

View File

@@ -22,7 +22,7 @@ job "whoami" {
"traefik.enable=true", "traefik.enable=true",
"traefik.http.routers.whoami.rule=Host(`test.alo.land`)", "traefik.http.routers.whoami.rule=Host(`test.alo.land`)",
"traefik.http.routers.whoami.entryPoints=websecure", "traefik.http.routers.whoami.entryPoints=websecure",
"traefik.http.routers.whoami.middlewares=authentik@file", "traefik.http.routers.whoami.middlewares=oidc-auth@file",
] ]
} }
} }

View File

@@ -36,7 +36,7 @@ job "wiki" {
"--listen", "--listen",
"host=0.0.0.0", "host=0.0.0.0",
"port=${NOMAD_PORT_captainslog}", "port=${NOMAD_PORT_captainslog}",
"authenticated-user-header=X-authentik-username", "authenticated-user-header=X-Oidc-Username",
"readers=ppetru", "readers=ppetru",
"writers=ppetru", "writers=ppetru",
"admin=ppetru", "admin=ppetru",
@@ -64,7 +64,7 @@ job "wiki" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.captainslog.entryPoints=websecure",
-"traefik.http.routers.captainslog.middlewares=authentik@file",
+"traefik.http.routers.captainslog.middlewares=oidc-auth@file",
 ]
 }
@@ -85,7 +85,7 @@ job "wiki" {
"--listen", "--listen",
"host=0.0.0.0", "host=0.0.0.0",
"port=${NOMAD_PORT_alo}", "port=${NOMAD_PORT_alo}",
"authenticated-user-header=X-authentik-username", "authenticated-user-header=X-Oidc-Username",
"readers=ppetru,ines", "readers=ppetru,ines",
"writers=ppetru,ines", "writers=ppetru,ines",
"admin=ppetru", "admin=ppetru",
@@ -112,7 +112,7 @@ job "wiki" {
"traefik.enable=true", "traefik.enable=true",
"traefik.http.routers.alowiki.rule=Host(`wiki.alo.land`)", "traefik.http.routers.alowiki.rule=Host(`wiki.alo.land`)",
"traefik.http.routers.alowiki.entryPoints=websecure", "traefik.http.routers.alowiki.entryPoints=websecure",
"traefik.http.routers.alowiki.middlewares=authentik@file", "traefik.http.routers.alowiki.middlewares=oidc-auth@file",
] ]
} }
@@ -133,7 +133,7 @@ job "wiki" {
"--listen", "--listen",
"host=0.0.0.0", "host=0.0.0.0",
"port=${NOMAD_PORT_pispace}", "port=${NOMAD_PORT_pispace}",
"authenticated-user-header=X-authentik-username", "authenticated-user-header=X-Oidc-Username",
"readers=ppetru,ines", "readers=ppetru,ines",
"writers=ppetru,ines", "writers=ppetru,ines",
"admin=ppetru", "admin=ppetru",
@@ -160,7 +160,7 @@ job "wiki" {
"traefik.enable=true", "traefik.enable=true",
"traefik.http.routers.pispace.rule=Host(`pi.paler.net`)", "traefik.http.routers.pispace.rule=Host(`pi.paler.net`)",
"traefik.http.routers.pispace.entryPoints=websecure", "traefik.http.routers.pispace.entryPoints=websecure",
"traefik.http.routers.pispace.middlewares=authentik@file", "traefik.http.routers.pispace.middlewares=oidc-auth@file",
] ]
} }
@@ -181,7 +181,7 @@ job "wiki" {
"--listen", "--listen",
"host=0.0.0.0", "host=0.0.0.0",
"port=${NOMAD_PORT_grok}", "port=${NOMAD_PORT_grok}",
"authenticated-user-header=X-authentik-username", "authenticated-user-header=X-Oidc-Username",
"readers=ppetru", "readers=ppetru",
"writers=ppetru", "writers=ppetru",
"admin=ppetru", "admin=ppetru",
@@ -207,7 +207,7 @@ job "wiki" {
 tags = [
 "traefik.enable=true",
 "traefik.http.routers.groktw.entryPoints=websecure",
-"traefik.http.routers.groktw.middlewares=authentik@file",
+"traefik.http.routers.groktw.middlewares=oidc-auth@file",
 ]
 }