Compare commits


312 Commits

Author SHA1 Message Date
b63abca296 Add MAILGUN_URL for EU region
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:29:08 +00:00
1311aadffb Remove phaseflow-cron batch job
No longer needed - cron scheduling now handled by instrumentation.ts
inside the main phaseflow app.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 22:16:20 +00:00
f903ddeee5 Update phaseflow secrets for Mailgun email provider
Switch from resend_api_key to mailgun_api_key and mailgun_domain.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 19:37:45 +00:00
33f3ddd7e9 Update flake. 2026-01-20 18:17:22 +00:00
1cdedf824c Meta and more RAM. Still not working. 2026-01-20 18:17:11 +00:00
beb856714e Install beads. 2026-01-18 07:34:15 +00:00
fcb2067059 Sync later in the mornings. 2026-01-17 16:28:20 +00:00
cebd236b1f Update flake. 2026-01-16 12:46:13 +00:00
8cc818f6b2 Rename deprecated options. 2026-01-16 10:54:09 +00:00
305a7a5115 Remove unknown option. 2026-01-16 10:41:29 +00:00
526888cd26 Improve phaseflow-cron logging on failure
Show the API response body in logs instead of silently failing
with curl exit code 22.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 07:40:28 +00:00
8d97d09b07 Add phaseflow-cron job and PocketBase admin credentials
- New periodic job for daily Garmin sync at 6 AM
- Added pocketbase_admin_email and pocketbase_admin_password to
  secrets template for cron job authentication

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 07:13:36 +00:00
3f481e0a16 Set the right vars. 2026-01-11 21:22:11 +00:00
15dea7a249 Make PocketBase admin UI accessible. 2026-01-11 17:22:42 +00:00
e1bace9044 Fix phaseflow PocketBase URL to use Nomad address
Docker containers in Nomad don't share network namespace by default.
Use NOMAD_ADDR_pocketbase interpolation instead of localhost.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 17:06:40 +00:00
09f2d2b013 More RAM. 2026-01-11 13:12:09 +00:00
d195efdb0e Add phaseflow service
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 10:20:40 +00:00
3277c810a5 Update flake. 2026-01-10 20:04:51 +00:00
f2baf3daf6 Move to new domain. 2026-01-09 06:17:20 +00:00
931470ee0a Remove farmOS service
Egg harvest data (849 logs, 6704 eggs, Nov 2023 - Jan 2026) exported
to CSV before shutdown. Database and user dropped from PostgreSQL.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 13:31:57 +00:00
41b30788fe Update flake. 2026-01-08 13:03:48 +00:00
01ebff3596 Migrate to alo organization
Update all registry paths from ppetru/* to alo/* and workflow
references from ppetru/alo-cluster to alo/alo-cluster.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 10:49:38 +00:00
ed2c899915 Add reusable CI/CD workflow and documentation
- .gitea/workflows/deploy-nomad.yaml: Shared workflow for build/push/deploy
- docs/CICD_SETUP.md: Guide for adding CI/CD to new services
- nix-runner/README.md: Document the custom Nix runner image

Services can now use a 10-line workflow that calls the shared one:
  uses: ppetru/alo-cluster/.gitea/workflows/deploy-nomad.yaml@master

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 07:47:01 +00:00
c548ead4f7 Add CI/CD infrastructure for animaltrack
New services:
- animaltrack.hcl: Python app with health checks and auto_revert
- act-runner.hcl: Gitea Actions runner on Nomad

New infrastructure:
- nix-runner/: Custom Nix Docker image for CI with modern Nix,
  local cache (c3), and bundled tools (skopeo, jq, etc.)

Modified:
- gitea.hcl: Enable Gitea Actions

The CI workflow (in animaltrack repo) builds Docker images with Nix,
pushes to Gitea registry, and triggers Nomad deployments with
automatic rollback on health check failure.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 07:17:31 +00:00
3b8cd7b742 AI ideas. 2026-01-03 10:38:47 +00:00
d71408b567 Incorporate omarchy-nix. 2026-01-01 16:44:09 +00:00
a8147d9ae5 Fix deprecated pkgs.system usage
Replace pkgs.system with pkgs.stdenv.hostPlatform.system to fix
NixOS evaluation warning about the deprecated attribute.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:59:51 +00:00
2b1950d4e3 Install tools for Claude. 2026-01-01 13:34:17 +00:00
322927e2b0 Update env vars. 2026-01-01 13:34:08 +00:00
4cae9fe706 Update flake. 2025-12-26 14:26:40 +00:00
b5b164b543 Switch to docker. 2025-12-24 15:08:10 +00:00
08db384f60 Set up MCP for alo wiki. 2025-12-22 13:03:27 +00:00
3b2cd0c3cf Allow encoded slashes to make Tiddlywikis work again. 2025-12-22 12:37:05 +00:00
13a4467166 Use hostname for alo-cloud-1 so deploys go over tailscale. 2025-12-21 13:36:33 +00:00
4c0b0fb780 Update flake. 2025-12-21 13:33:21 +00:00
a09d1b49c2 Upgrade OIDC plugin to 0.17 2025-12-19 22:37:44 +00:00
8d381ef9f4 Update flake. 2025-12-14 18:20:46 +00:00
79d51c3f58 Upgrade to 25.11. 2025-12-14 18:17:40 +00:00
83fb796a9f Fix netconsole: disable before reconfiguring.
Configfs params can't be modified while the target is enabled.
Disable first if already enabled, then reconfigure and re-enable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-12 13:22:27 +00:00
4efc44e964 Fix netconsole: configure via configfs after network up.
The modprobe.conf approach failed because the network interface
doesn't exist when the module loads at boot. Now using a systemd
service to configure netconsole via configfs after network-online.

Also raise console_loglevel to 8 so all kernel messages (not just
KERN_WARNING and above) are sent to netconsole.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-12 12:08:12 +00:00
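For context, a netconsole sender configured the way this commit describes might look roughly like the sketch below. This is a hedged illustration, not the actual beefy module (which isn't part of this comparison); the interface name is an assumption, while the remote IP and port come from the commits and the receiver module shown later in this diff.

```nix
{ pkgs, ... }:
{
  # Hedged sketch of a netconsole sender configured via configfs after network-online.
  boot.kernelModules = [ "netconsole" ];
  systemd.services.netconsole-sender = {
    description = "Configure netconsole via configfs once the network is up";
    wantedBy = [ "multi-user.target" ];
    after = [ "network-online.target" ];
    wants = [ "network-online.target" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
    };
    script = ''
      t=/sys/kernel/config/netconsole/target1
      mkdir -p "$t"
      # configfs params can't be changed while the target is enabled
      [ "$(cat "$t/enabled")" = "1" ] && echo 0 > "$t/enabled"
      echo eno1        > "$t/dev_name"    # assumed interface name
      echo 192.168.1.2 > "$t/remote_ip"   # zippy, per the commit below
      echo 6666        > "$t/remote_port" # matches the receiver module's default
      # (a real setup typically also sets local_ip and remote_mac)
      echo 1 > "$t/enabled"
      # send all kernel messages, not just KERN_WARNING and above
      echo 8 > /proc/sys/kernel/printk
    '';
  };
}
```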
3970c60016 More RAM. 2025-12-12 10:26:54 +00:00
a8b63e71c8 Remove unavailable crash analysis packages.
The crash and makedumpfile packages don't exist in nixpkgs.
Kdump will still capture crash dumps to /var/crash.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-12 07:29:15 +00:00
58c851004d Add stability debugging for beefy lockups.
- Add netconsole receiver on zippy to capture kernel messages
- Configure beefy as netconsole sender to zippy (192.168.1.2)
- Enable kdump with 256M reserved memory for crash analysis
- Add lockup detectors (softlockup_panic, hung_task_panic, nmi_watchdog)
- Add consoleblank=300 for greeter display sleep
- Persist crash dumps and add analysis tools (crash, makedumpfile)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-12 07:23:33 +00:00
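A rough sketch of how these knobs map onto NixOS options is shown below; the exact values and option choices in the committed beefy config are assumptions, not what this diff contains.

```nix
{
  # Hedged sketch of the lockup/crash debugging described above.
  boot.crashDump.enable = true;            # kdump: reserve memory for a crash kernel
  boot.crashDump.reservedMemory = "256M";
  boot.kernelParams = [ "consoleblank=300" ];
  boot.kernel.sysctl = {
    "kernel.softlockup_panic" = 1;   # panic (and dump) on soft lockups
    "kernel.hung_task_panic" = 1;    # panic on hung tasks
    "kernel.nmi_watchdog" = 1;       # hard lockup detector
  };
}
```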
bd889902be Update flake. 2025-12-09 14:48:56 +00:00
7fd79c9911 Enable sysrq for debugging. 2025-12-06 12:25:17 +00:00
41eacfec02 Typo fix. 2025-12-02 20:39:25 +00:00
0a0748b920 Disable byte range locking for smbfs. 2025-12-02 20:38:48 +00:00
d6e0e09e87 Update flake. 2025-11-28 13:00:17 +00:00
61c3020a5e Update flake. 2025-11-25 18:53:43 +00:00
972b973f58 Update flake. 2025-11-25 14:05:10 +00:00
8c5a7b78c6 Update flake. 2025-11-24 13:33:04 +00:00
675204816a Even more RAM for Plex. 2025-11-23 20:10:58 +00:00
3bb82dbc6b Initial config. 2025-11-23 08:55:38 +00:00
0f6233c3ec More RAM. 2025-11-23 07:54:04 +00:00
43fa56bf35 Bind on all addresses and rely on firewall for blocking public ssh.
Otherwise, sshd will try and fail to bind on the tailscale IP before
tailscale is up.
2025-11-23 07:24:09 +00:00
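A minimal sketch of that approach (sshd listening everywhere, firewall keeping public port 22 closed); option values here are assumptions, the committed config may differ:

```nix
{
  # sshd binds on all addresses (the default when listenAddresses is unset),
  # so it doesn't race the tailscale IP coming up. Only the tailscale
  # interface's firewall allows port 22.
  services.openssh.enable = true;
  services.openssh.openFirewall = false;  # don't open 22 on every interface
  networking.firewall.interfaces."tailscale0".allowedTCPPorts = [ 22 ];
}
```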
50c930eeaf Add flaresolverr, disable bazarr, tweak resources. 2025-11-22 19:27:37 +00:00
8dde15b8ef Add prowlarr, recyclarr, and jellyseerr. 2025-11-22 17:32:14 +00:00
6100d8dc69 Fix override. 2025-11-21 16:43:39 +00:00
a92f0fcb28 Tighten up security. 2025-11-21 16:39:45 +00:00
bd4604cdcc Auth docs. 2025-11-21 14:12:19 +00:00
31db372b43 Remove now unused authentik config. 2025-11-21 14:00:47 +00:00
360e776745 Set up ollama. 2025-11-17 22:33:44 +00:00
5a819f70bb Static port for claude code accessibility. 2025-11-17 19:05:17 +00:00
b2c055ffb2 MCP server for tiddlywiki. 2025-11-17 17:56:05 +00:00
6e0b34843b Allow claude-code to read&write. 2025-11-16 21:07:58 +00:00
e8485e3bb7 Update flake. 2025-11-12 15:10:11 +00:00
e8cd970960 Make it an exit node. 2025-11-05 16:50:05 +00:00
78b59cec4f Put PHP port back on 9000, where the rest of the stuff expects it. 2025-11-05 15:54:46 +00:00
e6d40a9f7e Set an actual password. 2025-11-04 20:26:50 +00:00
7733a1be46 yet another replication fix. 2025-11-04 19:57:52 +00:00
a5df98bc5a Update docs. 2025-11-04 19:08:27 +00:00
fb9b0dd2f5 Move NFS server to sparky. 2025-11-04 19:00:18 +00:00
0dc214069c Fix curl-induced failures. 2025-11-04 18:59:50 +00:00
a6c4be9530 Use clone source for btrfs send. 2025-11-04 17:51:34 +00:00
6e338e6d65 Stop replicating to c1. 2025-11-04 14:03:49 +00:00
41f16fa0b8 Make sparky a standby again. 2025-11-04 12:58:34 +00:00
1b05728817 Switch to Pocket ID. 2025-11-04 12:58:15 +00:00
520a417316 Pocket ID config. 2025-11-04 11:04:33 +00:00
88ed5360ca Keys for sparky reinstall. 2025-11-04 11:04:20 +00:00
392d40def3 Update flake. 2025-11-04 10:26:18 +00:00
5ef4d832fb Only keep 10 snapshots, and push metrics. 2025-11-04 10:22:11 +00:00
49afc0c084 Remove standby from sparky. 2025-11-04 09:39:45 +00:00
b2c82ceaa8 Don't replicate to sparky for now. 2025-11-04 09:39:23 +00:00
b9286d7243 More CPU. 2025-11-02 06:50:38 +00:00
22931e6747 Add some items. 2025-11-01 17:55:40 +00:00
ac030018c6 Install prusa slicer. 2025-10-31 17:54:27 +00:00
7386d3a5ee Don't try to run consul on the cloud. 2025-10-31 15:55:37 +00:00
2a5a9f2ee9 Actually make sparky a NFS replica. 2025-10-31 15:54:32 +00:00
963a7c10fa Fix include. 2025-10-31 15:45:32 +00:00
283cf9d614 Make sparky a NFS backup instead of desktop. 2025-10-31 15:41:12 +00:00
5b3b4ea2ed Make sure to keep some snapshots around even if they stop coming. 2025-10-31 15:40:19 +00:00
5a9d5de5c4 (try to) show better diffs 2025-10-31 15:40:08 +00:00
a5e3f613c2 Set correct interface name for beefy. 2025-10-30 07:46:37 +00:00
8b8fac2d89 Try to fix systemd pager errors. 2025-10-30 07:37:21 +00:00
31d79ba75b Typo fix. 2025-10-30 07:28:32 +00:00
6faf148fde Don't try to use the RSA SSH key, not supported by sops. 2025-10-30 07:24:48 +00:00
e88f1c93c5 Another attempt at thoroughly fixing tmux ssh agent. 2025-10-30 07:21:40 +00:00
51375db1e4 Passphrase from beefy. 2025-10-30 06:29:49 +00:00
9415a8ece2 Make ssh agent settings autoheal in tmux. 2025-10-29 20:55:46 +00:00
da85ee776d Post-install beefy updates. 2025-10-29 17:25:43 +00:00
e23dc7df5b Configs for beefy. 2025-10-29 17:13:23 +00:00
163b9e4c22 Fix ghostty terminfo on remote hosts. 2025-10-29 15:17:46 +00:00
d521c3b013 Fix WiFi for stinky. 2025-10-29 15:17:46 +00:00
d123400ea9 Less CPU. 2025-10-28 19:50:03 +00:00
9c64a8ec00 Fix ghostty termcap. 2025-10-28 19:06:47 +00:00
4907238726 stinky wifi 2025-10-28 17:25:15 +00:00
37aad7d951 More tmpfs impermanence fixes. 2025-10-28 16:49:39 +00:00
ac34f029ed Update flake. 2025-10-28 15:55:30 +00:00
8d04add7dc Remove code server. 2025-10-28 15:44:09 +00:00
d7a07cebf5 Cleanup old snapshots hourly. 2025-10-28 14:40:28 +00:00
2ba961bfa8 TS key. 2025-10-28 11:35:49 +00:00
765e92f9c7 Keys for stinky. 2025-10-28 11:30:57 +00:00
1bb202d017 Add nixos-hardware flake for stinky. 2025-10-28 10:59:16 +00:00
98769f59d6 Fix stinky build. 2025-10-27 16:17:26 +00:00
762037d17f (untested) config for stinky and diff script. 2025-10-27 12:21:57 +00:00
32a22c783d Set resource limits for user sessions. 2025-10-25 19:08:40 +01:00
8c29c18287 Fix. 2025-10-25 17:54:23 +01:00
092a8b3658 Only install memtest on x86 machines. 2025-10-25 17:53:56 +01:00
c7ff79d0c3 Stop doing home manager impermanence. 2025-10-25 17:44:26 +01:00
ac51f50ef5 Update keys for c2 again. 2025-10-25 16:12:07 +01:00
c5347b6eba Tailscale key for new c2. 2025-10-25 16:10:00 +01:00
d4525313bb Reinstall c2 after failed disk. 2025-10-25 15:58:33 +01:00
92a27ac92b Add memtest86 boot menu entry. 2025-10-25 14:06:00 +01:00
fabfeea1c2 Set browser to Chrome. 2025-10-25 14:05:51 +01:00
5ce0e0e1df Only install omarchy on desktop machines. 2025-10-25 11:45:41 +01:00
bd473d1ad2 Update key for sparky. 2025-10-25 11:32:56 +01:00
064d227344 Install omarchy-nix on sparky. 2025-10-25 11:32:56 +01:00
dd8fee0ecb Reduce NFS snapshot retention time to save disk space. 2025-10-25 11:32:13 +01:00
a2b54be875 Remove glusterfs references. 2025-10-25 08:51:50 +01:00
ccf6154ba0 Remove glusterfs. 2025-10-25 08:51:29 +01:00
bd5988dfbc Profiles only for home manager. 2025-10-25 08:34:21 +01:00
a57fc9107b Make sure DNS is up before mounting NFS. 2025-10-24 22:49:32 +01:00
a7dce7cfb9 Persist ~/.claude.json and ~/.cache everywhere 2025-10-24 22:30:16 +01:00
b608e110c9 Scale monitor to 150%. 2025-10-24 17:32:42 +01:00
78dee346e9 Scale up monitor. 2025-10-24 17:26:22 +01:00
66f26842c9 Forgotten update for new sparky. 2025-10-24 17:26:03 +01:00
9c504e0278 Disable fish sponge for now, keeps dropping stuff. 2025-10-24 17:22:47 +01:00
4035d38ab2 New keys for sparky reinstall. 2025-10-24 17:15:02 +01:00
53ef2f6293 Refactor common modules. 2025-10-24 15:34:31 +01:00
e5cd9bd98e Enable password login through ssh. 2025-10-24 15:19:35 +01:00
0b51b44856 Remove deprecated file. 2025-10-24 15:06:18 +01:00
f918ff5df2 Try to make cloud hosts work. 2025-10-24 14:58:00 +01:00
4921679140 Make impermanence reset work on unencrypted hosts. 2025-10-24 14:49:32 +01:00
ce7b3bbe16 Update install docs to preserve installer ssh keys. 2025-10-24 14:47:45 +01:00
cf2210ec77 Another attempt at fixing the NFS race. 2025-10-24 13:59:39 +01:00
1dc219d08f Install killall everywhere. 2025-10-24 11:56:47 +01:00
b7ef5f89b7 Fix NFS automount race. 2025-10-24 11:36:01 +01:00
974d10cbe2 Add todo list. 2025-10-24 07:21:31 +01:00
efb677fd00 Add key for c3 ncps 2025-10-23 22:34:48 +01:00
7eb11d7573 Switch to ncps. 2025-10-23 22:27:41 +01:00
53ecddb7aa Update flake. 2025-10-23 21:59:46 +01:00
94f71cc62e Setup binary cache on c3 and optimize nix settings. 2025-10-23 21:59:08 +01:00
58bb710cb9 Migrate host volumes to NFS & consolidate. 2025-10-23 21:58:44 +01:00
854f663fb0 Update flake and NixOS to 25.05 2025-10-23 21:38:13 +01:00
376b3cd7e4 Remove old config. 2025-10-23 21:22:27 +01:00
5d0880a789 Migrate another batch of services to NFS. 2025-10-23 21:20:11 +01:00
09603daf80 Update docs. 2025-10-23 20:52:31 +01:00
bbd072abf2 Migrate to NFS share. 2025-10-23 17:27:06 +01:00
75c60b29e8 Delete now unused Ghost config. 2025-10-23 17:26:54 +01:00
ef22227ca8 Forgotten comma. 2025-10-22 17:11:23 +01:00
8100aa7070 Wayland tweaks for size. 2025-10-22 17:10:00 +01:00
fe2c866115 Chrome instead of chromium for desktops. 2025-10-22 16:53:54 +01:00
35f68fb6e8 Cleanup syncthing reference. 2025-10-22 16:38:44 +01:00
f8aee0d438 Move wordpress to NFS.
This removes the need for the syncthing and rsync plumbing.
2025-10-22 15:01:01 +01:00
2437d46aa9 Move unifi to zippy. 2025-10-22 14:51:39 +01:00
d16ffd9c65 Upgrade to 2025.6. 2025-10-22 14:51:28 +01:00
49f159e2a6 Move loki to zippy. 2025-10-22 14:39:13 +01:00
17c0f2db2a Move prometheus to zippy. 2025-10-22 14:29:57 +01:00
c80a2c9a58 Remove unused leantime config. 2025-10-22 14:23:39 +01:00
706f46ae77 And another replication fix. 2025-10-22 14:22:39 +01:00
fa603e8aea Move clickhouse to zippy. 2025-10-22 14:19:50 +01:00
8032ad4d20 Move redis to zippy. 2025-10-22 14:11:37 +01:00
8ce5194ca9 YA replication fix. 2025-10-22 14:08:28 +01:00
a948f26ffb Move postgres to zippy. 2025-10-22 14:05:45 +01:00
f414ac0146 Fix path names. 2025-10-22 13:59:31 +01:00
17711da0b6 Fix replication again. 2025-10-22 13:59:25 +01:00
ed06f07116 More docs. 2025-10-22 13:50:03 +01:00
bffc09cbd6 Ignore NFS primary/standby snapshots for backup. 2025-10-22 13:45:03 +01:00
f488b710bf Fix incremental snapshot logic. 2025-10-22 13:41:28 +01:00
65835e1ed0 Run mysql on the primary storage machine. 2025-10-22 13:20:13 +01:00
967ff34a51 NFS server and client setup. 2025-10-22 13:06:21 +01:00
1262e03e21 Cluster changes writeup. 2025-10-21 00:05:44 +01:00
a949446d83 Clarify host roles. 2025-10-20 22:29:57 +01:00
99db96e449 Refactor. 2025-10-20 22:27:58 +01:00
fe51f1ac5b Fix breakage. 2025-10-20 17:29:26 +01:00
a1089a7cac Solarized dark colors on text consoles. 2025-10-20 17:14:21 +01:00
4d6d2b4d6f Hyprland tweaks to make it usable. 2025-10-20 17:14:03 +01:00
9d5a7994eb Don't install python env on servers. 2025-10-20 16:40:52 +01:00
1465213c90 Refactor home manager, and add desktop node on sparky. 2025-10-20 16:27:13 +01:00
bd15987f8d Replace workshop user key. 2025-10-19 20:29:18 +01:00
438d9a44d4 Fix key path. 2025-10-19 20:29:08 +01:00
19ba8e3286 Fix hostname for sparky. 2025-10-19 20:23:13 +01:00
0b17a32da5 Configs for sparky. 2025-10-19 20:15:56 +01:00
7cecf5bea6 Update flake. 2025-10-08 06:15:45 +01:00
28887b671a Update flake. 2025-10-07 06:22:29 +01:00
e23a791e61 Update flake. 2025-10-06 13:57:25 +01:00
8fde9b0e7c Upgrade to 25.9 2025-10-06 13:41:19 +01:00
c7a53a66a9 Update flake. 2025-10-06 13:36:29 +01:00
cc040ed876 code-server setup 2025-09-28 16:54:03 +01:00
0b0ba486a5 Upgrade to 9.4 2025-09-27 17:14:26 +01:00
f0fcea7645 Reduce RAM. 2025-09-27 17:13:09 +01:00
d4d8370682 Reduce CPU & RAM. 2025-09-27 17:12:18 +01:00
976f3f53c4 Update flake. 2025-09-26 18:12:55 +01:00
46a896cad4 Persist direnv state. 2025-09-26 18:12:48 +01:00
5391385a68 Persist Claude and Codex state. 2025-09-26 17:05:42 +01:00
e37b64036c Install Codex. 2025-09-22 14:22:59 +01:00
e26e481152 Update flake. 2025-09-22 14:22:44 +01:00
b9793835d4 Update flake. 2025-09-16 14:53:45 +01:00
64e9059a77 Match output dir. 2025-09-14 08:03:13 +01:00
a3b85f0088 Explicitly install chromium. 2025-09-09 06:47:05 +01:00
5cd32a1d93 Set playwright env. 2025-09-09 06:44:42 +01:00
dc2c1ecb00 Try to make playwright work. 2025-09-09 06:36:43 +01:00
fc3cefd1f0 Install playwright. 2025-09-09 06:30:58 +01:00
7daf285973 Update flake. 2025-09-09 06:16:50 +01:00
40ae35b255 Add service for velutrack. 2025-09-09 06:12:10 +01:00
dd186e0ebe Install Gemini CLI. 2025-09-09 06:11:55 +01:00
73ecc06845 Update mobile key. 2025-08-08 22:51:33 +01:00
567ed698fb Handle changes in upstream APIs. 2025-07-30 16:21:59 +01:00
f5b5ec9615 Update flake. 2025-07-30 14:10:34 +01:00
38db0f7207 Send wordpress.paler.net through varnish. 2025-07-30 14:03:46 +01:00
22921200a7 Move Lizmap to nomad. 2025-07-30 14:01:24 +01:00
d8ca3c27e2 QGIS 3.44 2025-07-06 14:05:29 +01:00
77299dd07a Unstable python, but stop installing aider. 2025-06-27 16:36:01 +01:00
b5339141df Stable python for home. 2025-06-27 16:34:26 +01:00
33bc772960 Update flake. 2025-06-27 16:28:12 +01:00
a7aa7e1946 claude-code is unfree, and only in unstable. 2025-05-29 08:28:47 +01:00
0e9e8c8bed Install claude-code. 2025-05-28 19:36:04 +01:00
05318c6255 Update flake. 2025-05-28 19:35:32 +01:00
aec74345d4 Fix directories and ports. 2025-05-19 17:34:07 +01:00
24ab04b098 2025.4 2025-05-19 17:33:10 +01:00
68a3339794 Removed ignored timeout and force uid. 2025-05-04 20:16:22 +01:00
9ef1cafc32 Update flake. 2025-05-04 19:37:37 +01:00
e4ca52b587 Media stack. 2025-05-04 16:51:35 +01:00
c5466d559d Update ID for c2. 2025-05-03 23:12:12 +01:00
c554069116 Post-reinstall updates for c2. 2025-05-03 22:35:31 +01:00
5cf9a110e8 Update flake. 2025-05-03 22:24:25 +01:00
61b0edb305 Replace failed disk. 2025-05-03 22:19:09 +01:00
1ca167d135 Make weather work again for alo.land 2025-04-27 15:15:45 +01:00
0098b66de3 Switch to the new unifi docker image. 2025-04-25 19:50:39 +01:00
11bf328239 Removed unused config. 2025-04-24 10:09:02 +01:00
2a7447088e Update flake. 2025-04-24 05:57:09 +01:00
2e84537a3f Upgrade to 25.4 2025-04-24 05:54:50 +01:00
e11cfdb1f8 WIP: try to make it show up on alo.land again 2025-04-24 05:51:31 +01:00
d8d73ed2d2 Upgrade to 9.3 2025-04-24 05:51:16 +01:00
8a56607163 Serve stale content if backend is down. 2025-04-22 06:04:15 +01:00
9b9f03fc20 Serve stale content if origin is down. 2025-04-21 20:29:02 +01:00
0dbf41d54c WIP 2025-04-21 20:28:54 +01:00
bded37656a Update flake. 2025-04-21 20:21:38 +01:00
d579d0b86b Add grok wiki. 2025-04-20 08:01:15 +01:00
c20c620198 Update flake. 2025-04-17 19:51:56 +01:00
046b6819fd qgis 3.42 2025-04-17 12:46:26 +01:00
27787f3a17 Move pispace to new wiki setup. 2025-04-17 11:01:42 +01:00
cd1f38229a alowiki -> wiki.alo.land
Move to generic folder name.
2025-04-17 10:41:29 +01:00
78b6a59160 Config for alo wiki. 2025-04-17 10:24:46 +01:00
5d744f394a Refactor for multi-wiki setup. 2025-04-17 10:18:50 +01:00
33b1981146 Update flake. 2025-04-14 20:35:10 +01:00
13c222f783 Initial config. 2025-04-11 22:26:27 +01:00
ad5cf2d44e Swap Ghost and Wordpress for alo.land 2025-04-06 18:03:54 +01:00
0c84c7fe4f 2025.2.3 2025-04-04 14:50:19 +01:00
a774bb6e3b 2024.12.4 2025-04-04 14:48:14 +01:00
f3f73a16aa Comment out ethereum stuff for now. 2025-04-04 14:46:52 +01:00
e140055ef3 WIP: lighthouse setup on zippy. 2025-04-04 11:49:05 +01:00
5367582155 chore: Ignore aider generated files 2025-04-04 09:38:03 +01:00
da0b60c2e1 Update flake. 2025-04-03 19:42:34 +01:00
1e6a246f5b Set up LLM and Aider.
Update flake deps.
2025-04-02 13:43:04 +01:00
82b4eabaa3 Actually use the correct MAC. 2025-03-29 10:33:35 +00:00
31411957ca Switch to real hassos MAC. 2025-03-29 10:24:58 +00:00
70c374c61a Update flake. 2025-03-29 09:24:49 +00:00
90bd3868a5 Working QEMU setup for hassos. 2025-03-29 06:38:17 +00:00
c29ef84d5e Add uuid. 2025-03-22 06:13:51 +00:00
a34713ec4b Disable LLMNR. 2025-03-20 06:59:49 +00:00
8454428f89 Switch to systemd-networkd for chilly.
It should hopefully fix the issue where the network doesn't come back after
a system config switch.
2025-03-20 06:32:44 +00:00
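A minimal sketch of such a switch, assuming a DHCP-configured interface (chilly's actual interface name and addressing aren't shown in this comparison):

```nix
{
  # Hand addressing over to systemd-networkd instead of the scripted networking.
  networking.useDHCP = false;
  networking.useNetworkd = true;
  systemd.network.enable = true;
  systemd.network.networks."10-lan" = {
    matchConfig.Name = "en*";   # assumed interface match
    networkConfig.DHCP = "yes";
  };
}
```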
4e1068ecbd Remove obsolete override. 2025-03-13 18:02:11 +00:00
585472e457 Update deps. 2025-03-13 14:57:45 +00:00
82f1f556e6 Upgrade & try to set a limited retention period. 2025-03-13 14:34:59 +00:00
46edfc5a29 Drop retention time further to reduce disk usage. 2025-03-13 14:22:49 +00:00
856118c5fe Fix secret names. 2025-03-13 08:12:52 +00:00
ae95ee7ca6 Fix per-host key config and set kopia passwords everywhere. 2025-03-12 11:56:26 +00:00
ae25e2e74d Fix c1 syncthing ID. 2025-03-12 10:38:53 +00:00
87d915d012 Kopia backup service for /persist. 2025-03-12 10:35:15 +00:00
b294dd2851 WIP: per-machine kopia secrets.
Cleanup unused kopia VM config.
2025-03-11 20:35:10 +00:00
6165d4a2af WIP: kopia backup script 2025-03-11 10:18:24 +00:00
bbdb2bf1ff Don't need deno anymore. 2025-03-10 18:38:03 +00:00
d85cd66cf3 Work around Paymattic silliness. 2025-03-10 15:05:20 +00:00
38186cdbb7 Upgrade to 9.2 2025-03-10 10:07:11 +00:00
a9fa588f1a More RAM. 2025-03-10 09:59:49 +00:00
32c708b980 Add temporary name for alo.land -> WP migration. 2025-03-09 21:46:38 +00:00
b0093eeec6 Move from TiddlyPWA to core tiddlywiki multiwikiserver 2025-03-09 15:47:44 +00:00
1c13a4f0e8 Re-enable auth and load alo wiki by default. 2025-03-08 19:35:06 +00:00
77ef777a3f TiddlyPWA setup. 2025-03-08 17:48:42 +00:00
611862d5e9 Resolve current system path at runtime for exec driver jobs. 2025-03-08 16:21:44 +00:00
7470a0e077 More RAM. 2025-03-08 04:48:41 +00:00
ee55ecb1eb Make default user auth insecure again. 2025-03-08 04:46:30 +00:00
f3307c8fdc Update to 25.2. 2025-03-08 04:42:19 +00:00
59c6fbbe62 Update flake. 2025-02-19 11:40:23 +00:00
791d5e66ae Update key for c1. 2025-02-02 14:03:07 +00:00
cb6b27f00c Reinstall c1 after failed disk. 2025-02-02 12:43:54 +00:00
3c3e96dc72 Upgrade. 2025-02-02 11:59:37 +00:00
f705164006 Upgrade to NixOS 24.11. 2025-01-27 15:49:08 +00:00
5658ffc15d Move github token to global nix options. 2025-01-19 14:42:46 +00:00
cd700ca5b5 Explicitly install nix so we can set options for it. 2025-01-19 14:35:25 +00:00
bfbbf7d9fa Install ipython. 2025-01-19 14:34:18 +00:00
eaa86ca1f2 Add github token for nix. 2025-01-19 14:32:48 +00:00
7212aa64f1 Change AGE keypair, lost previous key. 2025-01-19 14:28:30 +00:00
84ea544d41 UUID tag. 2025-01-03 09:07:36 +00:00
9120d91f7e More RAM for worker. 2025-01-03 09:07:25 +00:00
d4a784f362 Upgrade to 2024.12.1. 2025-01-03 08:55:08 +00:00
fe5b917480 Update to QGIS 3.40. 2024-11-13 13:26:20 +00:00
faeaaf2c97 Keep traefik backend latency histogram. 2024-11-09 09:47:36 +00:00
cb54d21c18 Further relax health check settings. 2024-11-09 07:00:12 +00:00
de6bcc9f4a Enable refresh tokens. 2024-11-08 08:12:16 +00:00
5ae1c217fe Relax health check and restart parameters. 2024-11-08 07:45:21 +00:00
171 changed files with 9708 additions and 1518 deletions

1
.envrc Normal file
View File

@@ -0,0 +1 @@
use flake

View File

@@ -0,0 +1,96 @@
# ABOUTME: Reusable workflow for building Nix Docker images and deploying to Nomad.
# ABOUTME: Called by service repos with: uses: alo/alo-cluster/.gitea/workflows/deploy-nomad.yaml@master
name: Deploy to Nomad
on:
  workflow_call:
    inputs:
      service_name:
        required: true
        type: string
        description: "Nomad job name (must match job ID in services/*.hcl)"
      flake_output:
        required: false
        type: string
        default: "dockerImage"
        description: "Flake output to build (default: dockerImage)"
      registry:
        required: false
        type: string
        default: "gitea.v.paler.net"
        description: "Container registry hostname"
    secrets:
      REGISTRY_USERNAME:
        required: true
      REGISTRY_PASSWORD:
        required: true
      NOMAD_ADDR:
        required: true
jobs:
  build-and-deploy:
    runs-on: nix
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: |
          echo "Building .#${{ inputs.flake_output }}..."
          nix build ".#${{ inputs.flake_output }}" --out-link result
      - name: Push to registry
        run: |
          echo "Pushing to ${{ inputs.registry }}/alo/${{ inputs.service_name }}:latest..."
          skopeo copy \
            --dest-creds "${{ secrets.REGISTRY_USERNAME }}:${{ secrets.REGISTRY_PASSWORD }}" \
            --insecure-policy \
            docker-archive:result \
            "docker://${{ inputs.registry }}/alo/${{ inputs.service_name }}:latest"
      - name: Deploy to Nomad
        env:
          NOMAD_ADDR: ${{ secrets.NOMAD_ADDR }}
          SERVICE: ${{ inputs.service_name }}
        run: |
          echo "Deploying $SERVICE to Nomad..."
          # Fetch current job, update UUID to force deployment
          JOB=$(curl -sS "$NOMAD_ADDR/v1/job/$SERVICE")
          NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
          echo "New deployment UUID: $NEW_UUID"
          UPDATED_JOB=$(echo "$JOB" | jq --arg uuid "$NEW_UUID" '.Meta.uuid = $uuid')
          # Submit updated job
          RESULT=$(echo "{\"Job\": $UPDATED_JOB}" | curl -sS -X POST "$NOMAD_ADDR/v1/jobs" \
            -H "Content-Type: application/json" -d @-)
          echo "Submit result: $RESULT"
          # Monitor deployment
          sleep 3
          DEPLOY_ID=$(curl -sS "$NOMAD_ADDR/v1/job/$SERVICE/deployments" | jq -r '.[0].ID')
          echo "Deployment ID: $DEPLOY_ID"
          if [ "$DEPLOY_ID" = "null" ]; then
            echo "ERROR: No deployment created. Ensure job has 'update' stanza with 'auto_revert = true'"
            exit 1
          fi
          echo "Monitoring deployment..."
          for i in $(seq 1 30); do
            STATUS=$(curl -sS "$NOMAD_ADDR/v1/deployment/$DEPLOY_ID" | jq -r '.Status')
            echo "[$i/30] Deployment status: $STATUS"
            case $STATUS in
              successful)
                echo "Deployment successful!"
                exit 0
                ;;
              failed|cancelled)
                echo "Deployment failed or cancelled"
                exit 1
                ;;
            esac
            sleep 10
          done
          echo "Timeout waiting for deployment"
          exit 1
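For context, each service repo has to expose the flake output this workflow builds (`dockerImage` by default). Below is a hypothetical sketch, not taken from a real service repo, and the exact dockerTools builder the services use isn't shown in this comparison:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix build .#dockerImage` produces a docker-archive tarball that the
      # workflow pushes with skopeo as <registry>/alo/<service_name>:latest.
      packages.x86_64-linux.dockerImage = pkgs.dockerTools.buildLayeredImage {
        name = "example-service";
        tag = "latest";
        config.Cmd = [ "${pkgs.hello}/bin/hello" ];
      };
    };
}
```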

3
.gitignore vendored
View File

@@ -1,3 +1,6 @@
*.swp
.tmp
result
.aider*
.claude
.direnv/

View File

@@ -1,20 +1,76 @@
keys:
  - &admin_ppetru age1kgkmean5tc0uwl4y8hpknfa2d7g5hka30gzrdnje9n6z2r733upqds0s4l
  - &admin_ppetru age1df9ukkmg9yn9cjeheq9m6wspa420su8qarmq570rdvf2de3rl38saqauwn
  - &server_zippy age1gtyw202hd07hddac9886as2cs8pm07e4exlnrgfm72lync75ng9qc5fjac
  - &server_chilly age16yqffw4yl5jqvsr7tyd883vn98zw0attuv9g5snc329juff6dy3qw2w5wp
  - &server_sparky age14aml5s3sxksa8qthnt6apl3pu6egxyn0cz7pdzzvp2yl6wncad0q56udyj
  - &server_stinky age1me78u46409q9ez6fj0qanrfffc5e9kuq7n7uuvlljfwwc2mdaezqmyzxhx
  - &server_beefy age1cs8uqj243lspyp042ueu5aes4t3azgyuaxl9au70ggrl2meulq4sgqpc7y
  - &server_alo_cloud_1 age1w5w4wfvtul3sge9mt205zvrkjaeh3qs9gsxhmq7df2g4dztnvv6qylup8z
  - &server_c1 age1e7ejamlagumpgjw56h82e9rsz2aplgzmll4np073a9lyvxw2gauqswpqwl
  - &server_c2 age1gekmz8kc8r2lc2x6d4u63s2lnpmres4hu9wulxh29ch74ud7wfksq56xam
  - &server_c1 age1wwufz86tm3auxn6pn27c47s8rvu7en58rk00nghtaxsdpw0gya6qj6qxdt
  - &server_c2 age1jy7pe4530s8w904wtvrmpxvteztqy5ewdt92a7y3lq87sg9jce5qxxuydt
  - &server_c3 age1zjgqu3zks5kvlw6hvy6ytyygq7n25lu0uj2435zlf30smpxuy4hshpmfer
creation_rules:
  - path_regex: secrets/[^/]+\.(yaml|json|env|ini)$
  - path_regex: secrets/common\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_zippy
          - *server_chilly
          - *server_sparky
          - *server_stinky
          - *server_beefy
          - *server_alo_cloud_1
          - *server_c1
          - *server_c2
          - *server_c3
  - path_regex: secrets/zippy\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_zippy
  - path_regex: secrets/chilly\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_chilly
  - path_regex: secrets/sparky\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_sparky
  - path_regex: secrets/stinky\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_stinky
  - path_regex: secrets/beefy\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_beefy
  - path_regex: secrets/wifi\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_stinky
  - path_regex: secrets/alo-cloud-1\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_alo_cloud_1
  - path_regex: secrets/c1\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_c1
  - path_regex: secrets/c2\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_c2
  - path_regex: secrets/c3\.yaml
    key_groups:
      - age:
          - *admin_ppetru
          - *server_c3

116
CLAUDE.md Normal file
View File

@@ -0,0 +1,116 @@
# Claude Code Quick Reference
NixOS cluster configuration using flakes. Homelab infrastructure with Nomad/Consul orchestration.
## Project Structure
```
├── common/
│ ├── global/ # Applied to all hosts (backup, sops, users, etc.)
│ ├── minimal-node.nix # Base (ssh, user, boot, impermanence)
│ ├── cluster-member.nix # Consul agent + storage mounts (NFS/CIFS)
│ ├── nomad-worker.nix # Nomad client (runs jobs) + Docker + NFS deps
│ ├── nomad-server.nix # Enables Consul + Nomad server mode
│ ├── cluster-tools.nix # Just CLI tools (nomad, wander, damon)
│ ├── workstation-node.nix # Dev tools (wget, deploy-rs, docker, nix-ld)
│ ├── desktop-node.nix # Hyprland + GUI environment
│ ├── nfs-services-server.nix # NFS server + btrfs replication
│ └── nfs-services-standby.nix # NFS standby + receive replication
├── hosts/ # Host configs - check imports for roles
├── docs/
│ ├── CLUSTER_REVAMP.md # Master plan for architecture changes
│ ├── MIGRATION_TODO.md # Tracking checklist for migration
│ ├── NFS_FAILOVER.md # NFS failover procedures
│ └── AUTH_SETUP.md # Authentication (Pocket ID + Traefik OIDC)
└── services/ # Nomad job specs (.hcl files)
```
## Current Architecture
### Storage Mounts
- `/data/services` - NFS from `data-services.service.consul` (check nfs-services-server.nix for primary)
- `/data/media` - CIFS from fractal
- `/data/shared` - CIFS from fractal
### Cluster Roles (check hosts/*/default.nix for each host's imports)
- **Quorum**: hosts importing `nomad-server.nix` (3 expected for consensus)
- **Workers**: hosts importing `nomad-worker.nix` (run Nomad jobs)
- **NFS server**: host importing `nfs-services-server.nix` (affinity for direct disk access like DBs)
- **Standby**: hosts importing `nfs-services-standby.nix` (receive replication)
## Config Architecture
**Modular role-based configs** (compose as needed):
- `minimal-node.nix` - Base for all systems (SSH, user, boot, impermanence)
- `cluster-member.nix` - Consul agent + shared storage mounts (no Nomad)
- `nomad-worker.nix` - Nomad client to run jobs (requires cluster-member)
- `nomad-server.nix` - Enables Consul + Nomad server mode (for quorum members)
- `cluster-tools.nix` - Just CLI tools (no services)
**Machine type configs** (via flake profile):
- `workstation-node.nix` - Dev tools (deploy-rs, docker, nix-ld, emulation)
- `desktop-node.nix` - Extends workstation + Hyprland/GUI
**Composition patterns** (see the sketch after this list):
- Quorum member: `cluster-member + nomad-worker + nomad-server`
- Worker only: `cluster-member + nomad-worker`
- CLI only: `cluster-member + cluster-tools` (Consul agent, no Nomad service)
- NFS primary: `cluster-member + nomad-worker + nfs-services-server`
- Standalone: `minimal-node` only (no cluster membership)
**Key insight**: Profiles (workstation/desktop) don't imply cluster roles. Check imports for actual roles.
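As an illustration, a hypothetical quorum member's `hosts/*/default.nix` would compose the roles like this (not one of the real host configs):

```nix
# hosts/example/default.nix (hypothetical)
{ ... }:
{
  imports = [
    ../../common/cluster-member.nix   # Consul agent + storage mounts
    ../../common/nomad-worker.nix     # runs Nomad jobs
    ../../common/nomad-server.nix     # Consul + Nomad server mode (quorum)
    ./hardware.nix
  ];
  networking.hostName = "example";
}
```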
## Key Patterns
**NFS Server/Standby** (see the sketch after this list):
- Primary: imports `nfs-services-server.nix`, sets `standbys = [...]`
- Standby: imports `nfs-services-standby.nix`, sets `replicationKeys = [...]`
- Replication: btrfs send/receive every 5min, incremental with fallback to full
- Check host configs for current primary/standby assignments
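A hypothetical primary/standby pair (hostnames and the public key are placeholders) would look roughly like:

```nix
# hosts/primary-host/default.nix (hypothetical)
{
  imports = [ ../../common/nfs-services-server.nix ];
  nfsServicesServer.standbys = [ "standby-host" ];
}
```

```nix
# hosts/standby-host/default.nix (hypothetical)
{
  imports = [ ../../common/nfs-services-standby.nix ];
  nfsServicesStandby.replicationKeys = [
    "ssh-ed25519 AAAA... root@primary-host-replication"
  ];
}
```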
**Backups**:
- Kopia client on all nodes → Kopia server on fractal
- Backs up `/persist` hourly via btrfs snapshot
- Excludes: `services@*` and `services-standby/services@*` (replication snapshots)
**Secrets**:
- SOPS for secrets, files in `secrets/`
- Keys managed per-host
**Authentication**:
- Pocket ID (OIDC provider) at `pocket-id.v.paler.net`
- Traefik uses `traefik-oidc-auth` plugin for SSO
- Services add `middlewares=oidc-auth@file` tag to protect
- See `docs/AUTH_SETUP.md` for details
## Migration Status
**Phase 3 & 4**: COMPLETE! GlusterFS removed, all services on NFS
**Next**: Convert fractal to NixOS (deferred)
See `docs/MIGRATION_TODO.md` for detailed checklist.
## Common Tasks
**Deploy a host**: `deploy -s '.#hostname'`
**Deploy all**: `deploy`
**Check replication**: Check NFS primary host, then `ssh <primary> journalctl -u replicate-services-to-*.service -f`
**NFS failover**: See `docs/NFS_FAILOVER.md`
**Nomad jobs**: `services/*.hcl` - service data stored at `/data/services/<service-name>`
## Troubleshooting Hints
- Replication errors with "empty stream": SSH key restricted to `btrfs receive`, can't run other commands
- NFS split-brain protection: nfs-server checks Consul before starting
- Btrfs snapshots: nested snapshots appear as empty dirs in parent snapshots
- Kopia: uses temporary snapshot for consistency, doesn't back up nested subvolumes
## Important Files
- `common/global/backup.nix` - Kopia backup configuration
- `common/nfs-services-server.nix` - NFS server role (check hosts for which imports this)
- `common/nfs-services-standby.nix` - NFS standby role (check hosts for which imports this)
- `flake.nix` - Host definitions, nixpkgs inputs
---
*Auto-generated reference for Claude Code. Keep concise. Update when architecture changes.*

196
README.md Normal file
View File

@@ -0,0 +1,196 @@
# alo-cluster NixOS Configuration
This repository contains the NixOS configuration for a distributed cluster of machines managed as a unified flake.
## Architecture Overview
The configuration uses a **layered profile system** that enables code reuse while maintaining clear separation of concerns:
```
minimal-node # Base system (SSH, users, boot, impermanence)
cluster-node # Cluster services (Consul, GlusterFS, CIFS, encryption)
server-node # Server workloads (future: MySQL, PostgreSQL)
workstation-node # Development tools (Docker, deploy-rs, emulation)
desktop-node # GUI environment (Hyprland, Pipewire, fonts)
```
Each layer extends the previous one, inheriting all configurations. Hosts select a profile level that matches their role.
### Special Node Types
- **compute-node**: Cluster + Nomad worker (container orchestration)
## Directory Structure
```
.
├── flake.nix # Main flake definition with all hosts
├── common/
│ ├── global/ # Global configs applied to all systems
│ │ ├── console.nix # Linux console colors (Solarized Dark)
│ │ ├── locale.nix # Timezone and locale settings
│ │ └── nix.nix # Nix daemon and flake configuration
│ ├── minimal-node.nix # Base layer: SSH, users, boot, impermanence
│ ├── cluster-node.nix # Cluster layer: Consul, GlusterFS, CIFS
│ ├── server-node.nix # Server layer: bare metal services (future)
│ ├── workstation-node.nix # Workstation layer: dev tools
│ ├── desktop-node.nix # Desktop layer: GUI environment
│ ├── compute-node.nix # Nomad worker profile
│ └── [feature modules] # Individual feature configs
├── hosts/
│ ├── c1/ # Compute node 1
│ ├── c2/ # Compute node 2
│ ├── c3/ # Compute node 3
│ ├── alo-cloud-1/ # Cloud VPS
│ ├── chilly/ # Server node
│ ├── zippy/ # Workstation node
│ └── sparky/ # Desktop node
├── home/
│ ├── default.nix # Home-manager entry point
│ ├── profiles/ # Per-profile package sets
│ │ ├── server.nix
│ │ ├── workstation.nix
│ │ └── desktop.nix
│ ├── programs/ # Per-profile program configurations
│ │ ├── server.nix # CLI tools (fish, tmux, git, nixvim)
│ │ ├── workstation.nix # + dev tools
│ │ └── desktop.nix # + Hyprland, wofi
│ └── common/ # Shared home-manager configs
└── services/ # Nomad job definitions (not NixOS)
```
## Profile System
### System Profiles
Profiles are automatically applied based on the `mkHost` call in `flake.nix`:
```nix
# Example: Desktop profile includes all layers up to desktop-node
mkHost "x86_64-linux" "desktop" [
./hosts/sparky
];
```
**Available profiles:**
- `"server"` → minimal + cluster + server
- `"workstation"` → minimal + cluster + server + workstation
- `"desktop"` → minimal + cluster + server + workstation + desktop
### Home-Manager Profiles
Home-manager automatically inherits the same profile as the system, configured in `home/default.nix`:
```nix
imports = [ ./programs/${profile}.nix ];
home.packages = profilePkgs.${profile};
```
This ensures system and user configurations stay synchronized.
## Host Definitions
### Current Hosts
| Host | Profile | Role | Hardware |
|------|---------|------|----------|
| **c1, c2, c3** | compute-node | Nomad workers | Bare metal servers |
| **alo-cloud-1** | minimal | Reverse proxy (Traefik) | Cloud VPS |
| **chilly** | server | Home Assistant in a VM | Bare metal server |
| **zippy** | workstation | Development machine, server | Bare metal server |
| **sparky** | desktop | Desktop environment | Bare metal desktop |
### Adding a New Host
1. Create host directory:
```bash
mkdir -p hosts/newhost
```
2. Create `hosts/newhost/default.nix`:
```nix
{ config, pkgs, ... }:
{
imports = [
../../common/encrypted-btrfs-layout.nix # or your layout
../../common/global
./hardware.nix
];
networking.hostName = "newhost";
# Host-specific configs here
}
```
3. Generate hardware config:
```bash
nixos-generate-config --show-hardware-config > hosts/newhost/hardware.nix
```
4. Add to `flake.nix`:
```nix
newhost = mkHost "x86_64-linux" "workstation" [
./hosts/newhost
];
```
## Deployment
### Using deploy-rs
Deploy to specific host:
```bash
deploy -s '.#sparky'
```
Deploy to all hosts:
```bash
deploy
```
Deploy with detailed logging:
```bash
deploy -s '.#sparky' -- --show-trace
```
### Manual Deployment
```bash
nixos-rebuild switch --flake .#sparky --target-host sparky
```
## Key Features
### Impermanence
All hosts use tmpfs root with selective persistence. Persistent paths configured per-host in `persistence.directories` and `persistence.files`.
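For example (illustrative snippet only; the real per-host lists live in `hosts/*/default.nix`, and the persist path depends on the impermanence variant):

```nix
{
  environment.persistence."/persist" = {
    directories = [
      "/var/lib/consul"
      "/var/lib/tailscale"
    ];
    files = [ "/etc/machine-id" ];
  };
}
```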
### Unattended Encryption
Cluster nodes support automatic unlocking via Tailscale network using `common/unattended-encryption.nix`.
### Cluster Services
- **Consul**: Service discovery and distributed KV store
- **GlusterFS**: Distributed filesystem client
- **CIFS/Samba**: Network file sharing
### Desktop Environment (sparky only)
- **Hyprland**: Wayland compositor with CapsLock→Super remapping
- **wofi**: Application launcher (Super+D)
- **foot**: Terminal emulator (Super+Q)
- **greetd/tuigreet**: Login manager with console option
### Development Tools (workstation/desktop)
- Docker with rootless mode
- deploy-rs for NixOS deployments
- ARM emulation via binfmt
- Full NixVim configuration
## Future Work
- Migrate Nomad services (MySQL, PostgreSQL) to bare NixOS services under `server-node.nix`
- Add monitoring stack (Prometheus, Grafana)
- Document Tailscale key rotation process
- Add automated testing for configuration changes

View File

@@ -1,13 +0,0 @@
{ pkgs, ... }:
{
imports = [
./cifs-client.nix
./consul.nix
./glusterfs-client.nix
./impermanence.nix
./sshd.nix
./user-ppetru.nix
./unattended-encryption.nix
./systemd-boot.nix
];
}

View File

@@ -0,0 +1,41 @@
{ config, pkgs, lib, ... }:
{
# Binary cache proxy using ncps (Nix Cache Proxy Server)
# Transparently caches packages from cache.nixos.org for faster LAN access
#
# How it works:
# - Acts as HTTP proxy for cache.nixos.org
# - Caches packages on first request
# - Subsequent requests served from local disk (LAN speed)
# - No signing needed (packages already signed by upstream)
# - Automatic fallback to cache.nixos.org if this host is down
#
# Setup:
# 1. Deploy this host
# 2. Deploy all other hosts (they're already configured to use this)
# 3. Cache warms up automatically on first use
services.ncps = {
enable = true;
cache = {
hostName = config.networking.hostName;
# NOTE: These paths are hardcoded to /persist (not using config.custom.impermanence.persistPath)
# This is acceptable since this service is only enabled on btrfs-based hosts
dataPath = "/persist/ncps/data";
tempPath = "/persist/ncps/tmp";
databaseURL = "sqlite:/persist/ncps/db/db.sqlite";
maxSize = "300G"; # Adjust based on available disk space
lru.schedule = "0 3 * * *"; # Clean up daily at 3 AM if over maxSize
};
server.addr = "0.0.0.0:8501";
upstream = {
caches = [ "https://cache.nixos.org" ];
publicKeys = [
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
];
};
};
# Open firewall for LAN access
networking.firewall.allowedTCPPorts = [ 8501 ];
}

View File

@@ -1,7 +1,7 @@
{ pkgs, ... }:
let
# this line prevents hanging on network split
automount_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.device-timeout=5s,x-systemd.mount-timeout=5s";
automount_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.mount-timeout=5s,nobrl";
in
{
environment.systemPackages = [ pkgs.cifs-utils ];
@@ -17,12 +17,12 @@ in
fileSystems."/data/media" = {
device = "//fractal/media";
fsType = "cifs";
options = [ "${automount_opts},credentials=/etc/nixos/smb-secrets" ];
options = [ "uid=1000,${automount_opts},credentials=/etc/nixos/smb-secrets" ];
};
fileSystems."/data/shared" = {
device = "//fractal/shared";
fsType = "cifs";
options = [ "${automount_opts},credentials=/etc/nixos/smb-secrets" ];
options = [ "uid=1000,${automount_opts},credentials=/etc/nixos/smb-secrets" ];
};
}

View File

@@ -1,10 +0,0 @@
{ pkgs, ... }:
{
imports = [
./consul.nix
./impermanence.nix
./sshd.nix
./user-ppetru.nix
./systemd-boot.nix
];
}

24
common/cluster-member.nix Normal file
View File

@@ -0,0 +1,24 @@
{ pkgs, lib, config, ... }:
{
# Cluster node configuration
# Extends minimal-node with cluster-specific services (Consul, GlusterFS, CIFS, NFS)
# Used by: compute nodes (c1, c2, c3)
imports = [
./minimal-node.nix
./unattended-encryption.nix
./cifs-client.nix
./consul.nix
./nfs-services-client.nix # New: NFS client for /data/services
];
options.networking.cluster.primaryInterface = lib.mkOption {
type = lib.types.str;
default = "eno1";
description = "Primary network interface for cluster communication (Consul, NFS, etc.)";
};
config = {
# Wait for primary interface to be routable before considering network online
systemd.network.wait-online.extraArgs = [ "--interface=${config.networking.cluster.primaryInterface}:routable" ];
};
}

View File

@@ -1,9 +0,0 @@
{ pkgs, ... }:
{
imports = [
./base-node.nix
./glusterfs.nix
./nomad.nix
./syncthing-data.nix
];
}

View File

@@ -1,22 +1,24 @@
{ pkgs, config, ... }:
{ pkgs, config, lib, ... }:
let
servers = [
"c1"
"c2"
"c3"
];
server_enabled = builtins.elem config.networking.hostName servers;
in
{
options.clusterRole.consulServer = lib.mkEnableOption "Consul server mode";
config = {
services.consul = {
enable = true;
webUi = true;
interface.advertise = "eno1";
interface.advertise = config.networking.cluster.primaryInterface;
extraConfig = {
client_addr = "0.0.0.0";
datacenter = "alo";
server = server_enabled;
bootstrap_expect = if server_enabled then (builtins.length servers + 2) / 2 else null;
server = config.clusterRole.consulServer;
bootstrap_expect = if config.clusterRole.consulServer then (builtins.length servers + 2) / 2 else null;
retry_join = builtins.filter (elem: elem != config.networking.hostName) servers;
telemetry = {
prometheus_retention_time = "24h";
@@ -25,7 +27,7 @@ in
};
};
environment.persistence."/persist".directories = [ "/var/lib/consul" ];
environment.persistence.${config.custom.impermanence.persistPath}.directories = [ "/var/lib/consul" ];
networking.firewall = {
allowedTCPPorts = [
@@ -41,4 +43,5 @@ in
8302
];
};
};
}

View File

@@ -1,11 +0,0 @@
{ lib, ... }:
{
imports = [
./impermanence.nix # TODO: find a way to avoid needing this here
];
boot.isContainer = true;
custom.impermanence.enable = false;
custom.tailscale.enable = false;
networking.useDHCP = lib.mkForce false;
}

Binary file not shown (new file, 5.7 MiB).

View File

@@ -0,0 +1,79 @@
# ABOUTME: NixOS desktop environment module for Hyprland
# ABOUTME: Configures greetd, audio, bluetooth, fonts, and system services
{ config, pkgs, lib, ... }:
{
imports = [
../workstation-node.nix
];
# Force NetworkManager off - we use useDHCP globally
networking.networkmanager.enable = lib.mkForce false;
# Hyprland window manager
programs.hyprland = {
enable = true;
xwayland.enable = true;
};
# greetd display manager with tuigreet
services.greetd = {
enable = true;
settings = {
default_session = {
command = "${pkgs.tuigreet}/bin/tuigreet --time --cmd Hyprland";
user = "greeter";
};
};
};
# Essential desktop services
services.dbus.enable = true;
# polkit for privilege escalation
security.polkit.enable = true;
# DNS resolution
services.resolved.enable = true;
# Bluetooth support
hardware.bluetooth = {
enable = true;
powerOnBoot = true;
};
services.blueman.enable = true;
# Audio with PipeWire
security.rtkit.enable = true;
services.pipewire = {
enable = true;
alsa.enable = true;
alsa.support32Bit = true;
pulse.enable = true;
jack.enable = true;
};
# direnv support
programs.direnv.enable = true;
# Fonts
fonts.packages = with pkgs; [
noto-fonts
noto-fonts-cjk-sans
noto-fonts-color-emoji
liberation_ttf
fira-code
fira-code-symbols
nerd-fonts.caskaydia-mono
];
# Environment variables for Wayland
environment.sessionVariables = {
NIXOS_OZONE_WL = "1";
};
# Additional desktop packages
environment.systemPackages = with pkgs; [
prusa-slicer
];
}

18
common/ethereum.nix Normal file
View File

@@ -0,0 +1,18 @@
{ config, pkgs, ... }:
{
sops.secrets.lighthouse_jwt = {
sopsFile = ./../secrets/${config.networking.hostName}.yaml;
};
services.ethereum.lighthouse-beacon.mainnet = {
enable = true;
#package = pkgs.unstable.lighthouse;
args = {
execution-endpoint = "http://eth1:8551";
execution-jwt = config.sops.secrets.lighthouse_jwt.path;
checkpoint-sync-url = "https://beaconstate.info";
};
};
environment.persistence.${config.custom.impermanence.persistPath}.directories = [
"/var/lib/private/lighthouse-mainnet"
];
}

View File

@@ -1,16 +1,69 @@
{ pkgs, ... }:
{
environment.systemPackages = [ pkgs.kopia ];
{ pkgs, config, ... }:
let
kopiaPkg = pkgs.unstable.kopia;
kopia = "${kopiaPkg}/bin/kopia";
btrfsPkg = pkgs.btrfs-progs;
btrfs = "${btrfsPkg}/bin/btrfs";
snapshotBackup = pkgs.writeScript "kopia-snapshot-backup" (builtins.readFile ./kopia-snapshot-backup.sh);
backupScript = pkgs.writeShellScript "backup-persist" ''
target_path="${config.custom.impermanence.persistPath}"
KOPIA_CHECK_FOR_UPDATES=false
# systemd = {
# services = {
# "backup-persist" = {
# };
# };
#
# timers = {
# "backup-persist" = {
# };
# };
# };
${kopia} repository connect server \
--url https://fractal:51515/ \
--server-cert-fingerprint=a79fce88b1d53ab9e58b8aab20fd8c82332492d501f3ce3efc5e2bb416140be5 \
-p "$(cat ${config.sops.secrets.kopia.path})" \
|| exit 1
# Check if target_path is on btrfs filesystem
fs_type=$(stat -f -c %T "$target_path")
if [ "$fs_type" = "btrfs" ]; then
# On btrfs: use snapshot for consistency
snapshot_path="$target_path/kopia-backup-snapshot"
[ -e "$snapshot_path" ] && ${btrfs} subvolume delete "$snapshot_path"
${btrfs} subvolume snapshot -r "$target_path" "$snapshot_path"
# --no-send-snapshot-path due to https://github.com/kopia/kopia/issues/4402
# Exclude btrfs replication snapshots (they appear as empty dirs in the snapshot anyway)
${kopia} snapshot create --no-send-snapshot-report --override-source "$target_path" \
--ignore "services@*" \
--ignore "services-standby/services@*" \
-- "$snapshot_path"
${btrfs} subvolume delete "$snapshot_path"
else
# On non-btrfs (e.g., ext4): backup directly without snapshot
${kopia} snapshot create --no-send-snapshot-report --override-source "$target_path" \
-- "$target_path"
fi
${kopia} repository disconnect
'';
in
{
environment.systemPackages = [
btrfsPkg
kopiaPkg
];
systemd = {
services."backup-persist" = {
description = "Backup persistent data with Kopia";
serviceConfig = {
Type = "oneshot";
User = "root";
ExecStart = "${backupScript}";
};
};
timers."backup-persist" = {
description = "Timer for Kopia persistent data backup";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "hourly";
RandomizedDelaySec = 300;
};
};
};
}

44
common/global/console.nix Normal file
View File

@@ -0,0 +1,44 @@
{
# Configure Linux console (VT/framebuffer) colors to use Solarized Dark theme
# This affects the text-mode console accessed via Ctrl+Alt+F1-F6 or when booting without graphics
#
# Solarized Dark color scheme by Ethan Schoonover
# https://ethanschoonover.com/solarized/
#
# Color mapping:
# 0 = black -> base02 (#073642)
# 1 = red -> red (#dc322f)
# 2 = green -> green (#859900)
# 3 = yellow -> yellow (#b58900)
# 4 = blue -> blue (#268bd2)
# 5 = magenta -> magenta (#d33682)
# 6 = cyan -> cyan (#2aa198)
# 7 = white -> base2 (#eee8d5)
# 8 = br_black -> base03 (#002b36) - background
# 9 = br_red -> orange (#cb4b16)
# 10 = br_green -> base01 (#586e75)
# 11 = br_yellow -> base00 (#657b83)
# 12 = br_blue -> base0 (#839496)
# 13 = br_magenta -> violet (#6c71c4)
# 14 = br_cyan -> base1 (#93a1a1)
# 15 = br_white -> base3 (#fdf6e3)
console.colors = [
"073642" # 0: black (base02)
"dc322f" # 1: red
"859900" # 2: green
"b58900" # 3: yellow
"268bd2" # 4: blue
"d33682" # 5: magenta
"2aa198" # 6: cyan
"eee8d5" # 7: white (base2)
"002b36" # 8: bright black (base03 - Solarized Dark background)
"cb4b16" # 9: bright red (orange)
"586e75" # 10: bright green (base01)
"657b83" # 11: bright yellow (base00)
"839496" # 12: bright blue (base0)
"6c71c4" # 13: bright magenta (violet)
"93a1a1" # 14: bright cyan (base1)
"fdf6e3" # 15: bright white (base3)
];
}

View File

@@ -2,8 +2,10 @@
{
imports = [
./backup.nix
./console.nix
./cpufreq.nix
./flakes.nix
./impermanence-options.nix
./kernel.nix
./locale.nix
./network.nix

View File

@@ -0,0 +1,14 @@
{
lib,
...
}:
{
# Define impermanence options that need to be available to all modules
# The actual impermanence implementation is in common/impermanence.nix or common/impermanence-tmpfs.nix
options.custom.impermanence.persistPath = lib.mkOption {
type = lib.types.str;
default = "/persist";
description = "Path where persistent data is stored (e.g., /persist for btrfs, /nix/persist for tmpfs)";
};
}

View File

@@ -1,3 +1,4 @@
{ lib, config, ... }:
{
networking = {
useDHCP = true;
@@ -9,7 +10,7 @@
'';
};
environment.persistence."/persist" = {
environment.persistence.${config.custom.impermanence.persistPath} = {
directories = [ "/var/db/dhcpcd" ];
};
}

View File

@@ -1,11 +1,38 @@
{
nix.settings.trusted-users = [
nix.settings = {
trusted-users = [
"root"
"@wheel"
];
# Binary cache configuration
# c3 runs ncps (Nix Cache Proxy Server) that caches cache.nixos.org
# Falls back to cache.nixos.org if c3 is unreachable
substituters = [
"http://c3.mule-stork.ts.net:8501" # Local ncps cache proxy on c3
"https://cache.nixos.org"
];
trusted-public-keys = [
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
"c3:sI3l1RN80xdehzXLA8u2P6352B0SyRPs2XiYy/YWYro="
];
# Performance tuning
max-jobs = "auto"; # Use all cores for parallel builds
cores = 0; # Each build can use all cores
max-substitution-jobs = 16; # Faster fetching from caches
http-connections = 25; # More parallel downloads
download-attempts = 3; # Retry failed downloads
};
nix.gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 30d";
};
# TODO: this should be a secret, maybe
nix.extraOptions = ''
access-tokens = github.com=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
'';
}

View File

@@ -3,6 +3,7 @@
environment.systemPackages = with pkgs; [
age
file
killall
lm_sensors # TODO: this shouldn't be installed on cloud nodes
nodejs_20 # TODO: this is for one job on nomad, it should just be a dependency there
neovim

View File

@@ -1,10 +1,15 @@
{ config, ... }:
{
sops = {
defaultSopsFile = ./../../secrets/secrets.yaml;
# sometimes the impermanence bind mount is stopped when sops needs these
age.sshKeyPaths = [
"/persist/etc/ssh/ssh_host_ed25519_key"
"/persist/etc/ssh/ssh_host_rsa_key"
"${config.custom.impermanence.persistPath}/etc/ssh/ssh_host_ed25519_key"
];
defaultSopsFile = ./../../secrets/common.yaml;
secrets = {
kopia = {
sopsFile = ./../../secrets/${config.networking.hostName}.yaml;
};
};
};
}

View File

@@ -22,6 +22,6 @@ in
config = mkIf cfg.enable {
services.tailscaleAutoconnect.enable = true;
services.tailscale.package = pkgs.unstable.tailscale;
environment.persistence."/persist".directories = [ "/var/lib/tailscale" ];
environment.persistence.${config.custom.impermanence.persistPath}.directories = [ "/var/lib/tailscale" ];
};
}

View File

@@ -1,13 +0,0 @@
{ pkgs, ... }:
{
environment.systemPackages = [ pkgs.glusterfs ];
fileSystems."/data/compute" = {
device = "192.168.1.71:/compute";
fsType = "glusterfs";
options = [
"backup-volfile-servers=192.168.1.72:192.168.1.73"
"_netdev"
];
};
}

View File

@@ -1,24 +0,0 @@
{
pkgs,
config,
lib,
...
}:
{
services.glusterfs = {
enable = true;
};
environment.persistence."/persist".directories = [ "/var/lib/glusterd" ];
# TODO: each volume needs its own port starting at 49152
networking.firewall.allowedTCPPorts = [
24007
24008
24009
49152
49153
49154
49155
];
}

View File

@@ -0,0 +1,30 @@
{
lib,
config,
...
}:
{
# Common impermanence configuration shared by both btrfs and tmpfs variants
# This module should be imported by impermanence.nix or impermanence-tmpfs.nix
# The option custom.impermanence.persistPath is defined in common/global/impermanence-options.nix
environment.persistence.${config.custom.impermanence.persistPath} = {
directories = [
"/var/lib/nixos"
"/home"
];
files = [
"/etc/machine-id"
"/etc/ssh/ssh_host_ed25519_key"
"/etc/ssh/ssh_host_ed25519_key.pub"
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_rsa_key.pub"
];
};
users.mutableUsers = false;
security.sudo.extraConfig = ''
Defaults lecture = never
'';
}

View File

@@ -0,0 +1,30 @@
{
lib,
config,
...
}:
{
# Impermanence configuration for tmpfs root filesystem
# Used for systems with tmpfs root (e.g., Raspberry Pi with SD card)
# Root is in-memory and wiped on every boot
# Persistent data is stored in /nix/persist (directory on the /nix partition)
# Import common impermanence configuration
imports = [ ./impermanence-common.nix ];
config = {
# Use /nix/persist for tmpfs-based impermanence
custom.impermanence.persistPath = "/nix/persist";
# tmpfs root filesystem
fileSystems."/" = {
device = "none";
fsType = "tmpfs";
options = [
"defaults"
"size=2G"
"mode=755"
];
};
};
}

View File

@@ -1,6 +1,5 @@
{
pkgs,
inputs,
lib,
config,
...
@@ -9,28 +8,22 @@ let
cfg = config.custom.impermanence;
in
{
# Import common impermanence configuration
imports = [ ./impermanence-common.nix ];
options.custom.impermanence = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Enable impermanent root fs";
description = "Enable impermanent root fs with btrfs subvolume rollback";
};
};
config = lib.mkIf cfg.enable {
environment.persistence = {
"/persist" = {
directories = [ "/var/lib/nixos" ];
files = [
"/etc/machine-id"
"/etc/ssh/ssh_host_ed25519_key"
"/etc/ssh/ssh_host_ed25519_key.pub"
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_rsa_key.pub"
];
};
};
# Use /persist for btrfs-based impermanence
custom.impermanence.persistPath = "/persist";
# Btrfs-specific filesystem options
fileSystems."/".options = [
"compress=zstd"
"noatime"
@@ -50,21 +43,11 @@ in
];
fileSystems."/var/log".neededForBoot = true;
users.mutableUsers = false;
# rollback results in sudo lectures after each reboot
security.sudo.extraConfig = ''
Defaults lecture = never
'';
# needed for allowOther in the home-manager impermanence config
programs.fuse.userAllowOther = true;
# reset / at each boot
# Btrfs subvolume rollback at each boot
# Note `lib.mkBefore` is used instead of `lib.mkAfter` here.
boot.initrd.postDeviceCommands = pkgs.lib.mkBefore ''
mkdir /mnt
mount /dev/mapper/luksroot /mnt
mount ${config.fileSystems."/".device} /mnt
if [[ -e /mnt/root ]]; then
mkdir -p /mnt/old_roots
timestamp=$(date --date="@$(stat -c %Y /mnt/root)" "+%Y-%m-%-d_%H:%M:%S")

13
common/minimal-node.nix Normal file
View File

@@ -0,0 +1,13 @@
{ pkgs, ... }:
{
# Minimal base configuration for all NixOS systems
# Provides: SSH access, user management, boot, impermanence
# Note: unattended-encryption is NOT included by default - add it explicitly where needed
imports = [
./impermanence.nix
./resource-limits.nix
./sshd.nix
./user-ppetru.nix
./systemd-boot.nix
];
}

View File

@@ -0,0 +1,32 @@
{
config,
lib,
pkgs,
...
}:
{
options.services.netconsoleReceiver = {
enable = lib.mkEnableOption "netconsole UDP receiver";
port = lib.mkOption {
type = lib.types.port;
default = 6666;
description = "UDP port to listen on for netconsole messages";
};
};
config = lib.mkIf config.services.netconsoleReceiver.enable {
systemd.services.netconsole-receiver = {
description = "Netconsole UDP receiver";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
ExecStart = "${pkgs.socat}/bin/socat -u UDP-LISTEN:${toString config.services.netconsoleReceiver.port},fork STDOUT";
StandardOutput = "journal";
StandardError = "journal";
SyslogIdentifier = "netconsole";
Restart = "always";
RestartSec = "5s";
};
};
};
}

View File

@@ -0,0 +1,29 @@
{ pkgs, ... }:
{
# NFS client for /data/services
# Mounts from data-services.service.consul (Consul DNS for automatic failover)
# The NFS server registers itself in Consul, so this will automatically
# point to whichever host is currently running the NFS server
#
# Uses persistent mount (not automount) with nofail to prevent blocking boot.
# The mount is established at boot time and persists - no auto-unmount.
# This prevents issues with Docker bind mounts seeing empty automount stubs.
imports = [
./wait-for-dns-ready.nix
];
fileSystems."/data/services" = {
device = "data-services.service.consul:/persist/services";
fsType = "nfs";
options = [
"nofail" # Don't block boot if mount fails
"x-systemd.mount-timeout=30s" # Timeout for mount attempts
"x-systemd.after=wait-for-dns-ready.service" # Wait for DNS to actually work
"_netdev" # Network filesystem (wait for network)
];
};
# Ensure NFS client packages are available
environment.systemPackages = [ pkgs.nfs-utils ];
}

View File

@@ -0,0 +1,201 @@
{ config, lib, pkgs, ... }:
let
cfg = config.nfsServicesServer;
in
{
options.nfsServicesServer = {
enable = lib.mkEnableOption "NFS services server" // { default = true; };
standbys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [];
description = ''
List of standby hostnames to replicate to (e.g. ["c1"]).
Requires one-time setup on the NFS server:
sudo mkdir -p /persist/root/.ssh
sudo ssh-keygen -t ed25519 -f /persist/root/.ssh/btrfs-replication -N "" -C "root@$(hostname)-replication"
Then add the public key to each standby's nfsServicesStandby.replicationKeys option.
'';
};
};
config = lib.mkIf cfg.enable {
# Persist root SSH directory for replication key
environment.persistence.${config.custom.impermanence.persistPath} = {
directories = [
"/root/.ssh"
];
};
# Bind mount /persist/services to /data/services for local access
# This makes the path consistent with NFS clients
# Use mkForce to override the NFS client mount from cluster-node.nix
fileSystems."/data/services" = lib.mkForce {
device = "/persist/services";
fsType = "none";
options = [ "bind" ];
};
# Nomad node metadata: mark this as the primary storage node
# Jobs can constrain to ${meta.storage_role} = "primary"
services.nomad.settings.client.meta = {
storage_role = "primary";
};
# NFS server configuration
services.nfs.server = {
enable = true;
exports = ''
/persist/services 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
'';
};
# Consul service registration for NFS
services.consul.extraConfig.services = [{
name = "data-services";
port = 2049;
checks = [{
tcp = "localhost:2049";
interval = "30s";
}];
}];
# Firewall for NFS
networking.firewall.allowedTCPPorts = [ 2049 111 20048 ];
networking.firewall.allowedUDPPorts = [ 2049 111 20048 ];
# systemd services: NFS server split-brain check + replication services
systemd.services = lib.mkMerge ([
# Safety check: prevent split-brain by ensuring no other NFS server is active
{
nfs-server = {
preStart = ''
# Wait for Consul to be available
for i in {1..30}; do
if ${pkgs.netcat}/bin/nc -z localhost 8600; then
break
fi
echo "Waiting for Consul DNS... ($i/30)"
sleep 1
done
# Check if another NFS server is already registered in Consul
CURRENT_SERVER=$(${pkgs.dnsutils}/bin/dig +short @localhost -p 8600 data-services.service.consul | head -1 || true)
MY_IP=$(${pkgs.iproute2}/bin/ip -4 addr show | ${pkgs.gnugrep}/bin/grep -oP '(?<=inet\s)\d+(\.\d+){3}' | ${pkgs.gnugrep}/bin/grep -v '^127\.' | head -1)
if [ -n "$CURRENT_SERVER" ] && [ "$CURRENT_SERVER" != "$MY_IP" ]; then
echo "ERROR: Another NFS server is already active at $CURRENT_SERVER"
echo "This host ($MY_IP) is configured as NFS server but should be standby."
echo "To fix:"
echo " 1. If this is intentional (failback), first demote the other server"
echo " 2. Update this host's config to use nfs-services-standby.nix instead"
echo " 3. Sync data from active server before promoting this host"
exit 1
fi
echo "NFS server startup check passed (no other active server found)"
'';
};
}
] ++ (lib.forEach cfg.standbys (standby: {
"replicate-services-to-${standby}" = {
description = "Replicate /persist/services to ${standby}";
path = [ pkgs.btrfs-progs pkgs.openssh pkgs.coreutils pkgs.findutils pkgs.gnugrep pkgs.curl ];
script = ''
set -euo pipefail
START_TIME=$(date +%s)
REPLICATION_SUCCESS=0
SSH_KEY="/persist/root/.ssh/btrfs-replication"
if [ ! -f "$SSH_KEY" ]; then
echo "ERROR: SSH key not found at $SSH_KEY"
echo "Run: sudo ssh-keygen -t ed25519 -f $SSH_KEY -N \"\" -C \"root@$(hostname)-replication\""
exit 1
fi
SNAPSHOT_NAME="services@$(date +%Y%m%d-%H%M%S)"
SNAPSHOT_PATH="/persist/$SNAPSHOT_NAME"
# Create readonly snapshot
btrfs subvolume snapshot -r /persist/services "$SNAPSHOT_PATH"
# Find previous snapshot on sender (sort by name since readonly snapshots have same mtime)
# Use -d to list directories only, not their contents
PREV_LOCAL=$(ls -1d /persist/services@* 2>/dev/null | grep -v "^$SNAPSHOT_PATH$" | sort -r | head -1 || true)
# Try incremental send if we have a parent, fall back to full send if it fails
if [ -n "$PREV_LOCAL" ]; then
echo "Attempting incremental send from $(basename $PREV_LOCAL) to ${standby}"
# Try incremental send, if it fails (e.g., parent missing on receiver), fall back to full
# Use -c to help with broken Received UUID chains
if btrfs send -p "$PREV_LOCAL" -c "$PREV_LOCAL" "$SNAPSHOT_PATH" | \
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \
"btrfs receive /persist/services-standby"; then
echo "Incremental send completed successfully"
REPLICATION_SUCCESS=1
else
echo "Incremental send failed (likely missing parent on receiver), falling back to full send"
# Plain full send without clone source (receiver may have no snapshots)
btrfs send "$SNAPSHOT_PATH" | \
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \
"btrfs receive /persist/services-standby"
REPLICATION_SUCCESS=1
fi
else
# First snapshot, do full send
echo "Full send to ${standby} (first snapshot)"
btrfs send "$SNAPSHOT_PATH" | \
ssh -i "$SSH_KEY" -o StrictHostKeyChecking=accept-new root@${standby} \
"btrfs receive /persist/services-standby"
REPLICATION_SUCCESS=1
fi
# Cleanup old snapshots on sender (keep last 10 snapshots, sorted by name/timestamp)
ls -1d /persist/services@* 2>/dev/null | sort | head -n -10 | xargs -r btrfs subvolume delete
# Calculate metrics
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
SNAPSHOT_COUNT=$(ls -1d /persist/services@* 2>/dev/null | wc -l)
# Push metrics to Prometheus pushgateway
cat <<METRICS | curl -s --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_replication/instance/${standby} || true
# TYPE nfs_replication_last_success_timestamp gauge
nfs_replication_last_success_timestamp $END_TIME
# TYPE nfs_replication_duration_seconds gauge
nfs_replication_duration_seconds $DURATION
# TYPE nfs_replication_snapshot_count gauge
nfs_replication_snapshot_count $SNAPSHOT_COUNT
# TYPE nfs_replication_success gauge
nfs_replication_success $REPLICATION_SUCCESS
METRICS
'';
serviceConfig = {
Type = "oneshot";
User = "root";
};
};
}))
);
systemd.timers = lib.mkMerge (
lib.forEach cfg.standbys (standby: {
"replicate-services-to-${standby}" = {
description = "Timer for replicating /persist/services to ${standby}";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/5"; # Every 5 minutes
Persistent = true;
};
};
})
);
};
}

View File

@@ -0,0 +1,79 @@
{ config, lib, pkgs, ... }:
let
cfg = config.nfsServicesStandby;
in
{
options.nfsServicesStandby = {
enable = lib.mkEnableOption "NFS services standby" // { default = true; };
replicationKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [];
description = ''
SSH public keys authorized to replicate btrfs snapshots to this standby.
These keys are restricted to only run 'btrfs receive /persist/services-standby'.
Get the public key from the NFS server:
ssh <nfs-server> sudo cat /persist/root/.ssh/btrfs-replication.pub
'';
};
};
config = lib.mkIf cfg.enable {
# Allow root SSH login for replication (restricted by command= in authorized_keys)
# This is configured in common/sshd.nix
# Restricted SSH keys for btrfs replication
users.users.root.openssh.authorizedKeys.keys =
map (key: ''command="btrfs receive /persist/services-standby",restrict ${key}'') cfg.replicationKeys;
# Mount point for services-standby subvolume
# This is just declarative documentation - the subvolume must be created manually once:
# sudo btrfs subvolume create /persist/services-standby
# After that, it will persist across reboots (it's under /persist)
fileSystems."/persist/services-standby" = {
device = "/persist/services-standby";
fsType = "none";
options = [ "bind" ];
noCheck = true;
};
# Cleanup old snapshots on standby (keep last 10 snapshots)
systemd.services.cleanup-services-standby-snapshots = {
description = "Cleanup old btrfs snapshots in services-standby";
path = [ pkgs.btrfs-progs pkgs.findutils pkgs.coreutils pkgs.curl ];
script = ''
set -euo pipefail
# Cleanup old snapshots on standby (keep last 10 snapshots, sorted by name/timestamp)
ls -1d /persist/services-standby/services@* 2>/dev/null | sort | head -n -10 | xargs -r btrfs subvolume delete || true
# Calculate metrics
CLEANUP_TIME=$(date +%s)
SNAPSHOT_COUNT=$(ls -1d /persist/services-standby/services@* 2>/dev/null | wc -l)
# Push metrics to Prometheus pushgateway
cat <<METRICS | curl -s --data-binary @- http://pushgateway.service.consul:9091/metrics/job/nfs_standby_cleanup/instance/$(hostname) || true
# TYPE nfs_standby_snapshot_count gauge
nfs_standby_snapshot_count $SNAPSHOT_COUNT
# TYPE nfs_standby_cleanup_last_run_timestamp gauge
nfs_standby_cleanup_last_run_timestamp $CLEANUP_TIME
METRICS
'';
serviceConfig = {
Type = "oneshot";
User = "root";
};
};
systemd.timers.cleanup-services-standby-snapshots = {
description = "Timer for cleaning up old snapshots on standby";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "hourly";
Persistent = true;
};
};
};
}

9
common/nomad-server.nix Normal file
View File

@@ -0,0 +1,9 @@
{ ... }:
{
# Enable server mode for both Consul and Nomad
# Used by: c1, c2, c3 (quorum members)
clusterRole = {
consulServer = true;
nomadServer = true;
};
}

9
common/nomad-worker.nix Normal file
View File

@@ -0,0 +1,9 @@
{ ... }:
{
# Enable Nomad client to run workloads
# Includes: Nomad client, Docker plugin, host volumes, NFS mount dependencies
# Used by: c1, c2, c3, zippy (all nodes that run Nomad jobs)
imports = [
./nomad.nix
];
}

View File

@@ -1,14 +1,16 @@
# inspiration: https://github.com/astro/skyflake/blob/main/nixos-modules/nomad.nix
{ pkgs, config, ... }:
{ pkgs, config, lib, ... }:
let
servers = [
"c1"
"c2"
"c3"
];
server_enabled = builtins.elem config.networking.hostName servers;
in
{
options.clusterRole.nomadServer = lib.mkEnableOption "Nomad server mode";
config = {
services.nomad = {
enable = true;
# true breaks at least CSI volumes
@@ -26,19 +28,23 @@ in
cidr = "100.64.0.0/10";
};
host_volume = {
code = {
path = "/data/compute/code";
read_only = true;
services = {
path = "/data/services";
read_only = false;
};
nix-store = {
path = "/nix/store";
read_only = true;
};
sw = {
path = "/run/current-system/sw";
read_only = true;
};
};
};
server = {
enabled = server_enabled;
enabled = config.clusterRole.nomadServer;
bootstrap_expect = (builtins.length servers + 2) / 2;
server_join.retry_join = servers;
};
@@ -55,7 +61,76 @@ in
extraSettingsPaths = [ "/etc/nomad-alo.json" ];
};
systemd.services.nomad.wants = [ "network-online.target" ];
# NFS mount dependency configuration for Nomad:
#
# Problem: Docker bind mounts need the real NFS mount, not an empty stub.
# If Nomad starts before NFS is mounted, containers get empty directories.
#
# Solution: Use soft dependencies (wants/after) with health-checking recovery.
# - wants: Nomad wants the mount, but won't be killed if it goes away
# - after: Nomad waits for mount to be attempted before starting
# - ExecStartPre with findmnt: Blocks Nomad start until mount is actually active
#
# This prevents Docker race conditions while allowing:
# - Boot to proceed if NFS unavailable (Nomad fails to start, systemd retries)
# - Nomad to keep running if NFS temporarily fails (containers may error)
# - Recovery service to auto-restart Nomad when NFS comes back or becomes stale
#
# Note: Mount uses Consul DNS which resolves at mount time. If NFS server
# moves to different IP, mount becomes stale and needs remount.
# The recovery service handles this by detecting stale mounts and restarting Nomad.
systemd.services.nomad = {
wants = [ "network-online.target" "data-services.mount" ];
after = [ "data-services.mount" ];
serviceConfig.ExecStartPre = "${pkgs.util-linux}/bin/findmnt --mountpoint /data/services";
};
# Recovery service: automatically restart Nomad when NFS mount needs attention
# This handles scenarios where:
# - NFS server was down during boot (mount failed, Nomad hit start-limit)
# - NFS server failed over to different host with new IP (mount went stale)
# - Network outage temporarily broke the mount
#
# The timer runs every 30s and checks:
# 1. Is mount healthy (exists and accessible)?
# 2. If mount is stale/inaccessible → restart Nomad (triggers remount)
# 3. If mount is healthy but Nomad failed → restart Nomad (normal recovery)
systemd.services.nomad-mount-watcher = {
description = "Restart Nomad when NFS mount needs attention";
serviceConfig = {
Type = "oneshot";
ExecStart = pkgs.writeShellScript "nomad-mount-watcher" ''
# Check if mount point exists
if ! ${pkgs.util-linux}/bin/findmnt --mountpoint /data/services >/dev/null 2>&1; then
exit 0 # Mount not present, nothing to do
fi
# Check if mount is actually accessible (not stale)
# Use timeout to avoid hanging on stale NFS mounts
if ! ${pkgs.coreutils}/bin/timeout 5s ${pkgs.coreutils}/bin/stat /data/services >/dev/null 2>&1; then
echo "NFS mount is stale or inaccessible. Restarting Nomad to trigger remount..."
${pkgs.systemd}/bin/systemctl restart nomad.service
exit 0
fi
# Mount is healthy - check if Nomad needs recovery
if ${pkgs.systemd}/bin/systemctl is-failed nomad.service >/dev/null 2>&1; then
echo "NFS mount is healthy but Nomad is failed. Restarting Nomad..."
${pkgs.systemd}/bin/systemctl restart nomad.service
fi
'';
};
};
systemd.timers.nomad-mount-watcher = {
description = "Timer for Nomad mount watcher";
wantedBy = [ "timers.target" ];
timerConfig = {
OnBootSec = "1min"; # First run 1min after boot
OnUnitActiveSec = "30s"; # Then every 30s
Unit = "nomad-mount-watcher.service";
};
};
environment.etc."nomad-alo.json".text = builtins.toJSON {
plugin.docker.config = {
@@ -75,7 +150,7 @@ in
plugin.raw_exec.config.enabled = true;
};
environment.persistence."/persist".directories = [
environment.persistence.${config.custom.impermanence.persistPath}.directories = [
"/var/lib/docker"
"/var/lib/nomad"
];
@@ -88,7 +163,7 @@ in
networking.firewall = {
allowedTCPPorts =
if server_enabled then
if config.clusterRole.nomadServer then
[
4646
4647
@@ -96,6 +171,7 @@ in
]
else
[ 4646 ];
allowedUDPPorts = if server_enabled then [ 4648 ] else [ ];
allowedUDPPorts = if config.clusterRole.nomadServer then [ 4648 ] else [ ];
};
};
}

View File

@@ -0,0 +1,44 @@
{ ... }:
{
# Resource limits for user sessions to prevent system wedging
#
# Modern systemd/cgroups v2 approach to resource control (replaces ulimits).
# Limits apply to all user sessions (SSH, GUI, etc.) but NOT to system services.
#
# Rationale:
# - Prevents runaway user processes (nix builds, compiles, etc.) from consuming
# all resources and making the system unresponsive
# - System services (Nomad jobs, Consul, NFS, etc.) run outside user.slice and
# are unaffected by these limits
# - Ensures SSH access remains responsive even under heavy load
#
# CPU: Uses CPUWeight (not CPUQuota) so user sessions can use 100% when idle,
# but system services get priority (1.25x) during contention
# Memory: Soft limit at 90% (triggers pressure/reclaim), hard limit at 95%
# Gives 5% warning buffer before OOM kills
systemd.slices.user = {
sliceConfig = {
# CPU weight: 80 vs default 100 for system services
# When idle: user sessions use all available CPU
# Under contention: system services get 1.25x CPU share
CPUWeight = "80";
# Memory soft limit: triggers reclaim and memory pressure
# User will notice slowdown but processes keep running
MemoryHigh = "90%";
# Memory hard limit: OOM killer targets user.slice
# 5% buffer between MemoryHigh and MemoryMax provides warning
MemoryMax = "95%";
# Limit number of tasks (processes/threads)
# Prevents fork bombs while still allowing nix builds
TasksMax = "4096";
# Lower I/O priority slightly
# System services get preference during I/O contention
IOWeight = "90";
};
};
}

View File

@@ -3,8 +3,7 @@
enable = true;
allowSFTP = true;
settings = {
PasswordAuthentication = false;
KbdInteractiveAuthentication = false;
PermitRootLogin = "prohibit-password"; # Allow root login with SSH keys only
};
};

View File

@@ -1,53 +0,0 @@
{
# TODO: when deploying this to a new machine for the first time, first
# comment this out to get /data/sync created with the right owner and
# permissions. then, do it again with persistence enabled.
# This could list the owner user but I'm not sure if it's already created at
# the time impermanence setup runs.
# Note: chown syncthing:syncthing /data/sync && chmod 700 /data/sync also seems to work
environment.persistence."/persist".directories = [ "/data/sync" ];
services.syncthing = {
enable = true;
dataDir = "/data/sync";
openDefaultPorts = true;
#guiAddress = "0.0.0.0:8384";
overrideDevices = true;
overrideFolders = true;
settings = {
devices = {
"c1" = {
id = "53JGRHQ-VGBYIGH-7IT6Z5S-3IMRY2I-LJZAE3B-QUDH3QF-4F4QKVC-VBWPJQ4";
};
"c2" = {
id = "Z3D476N-PUV6WAD-DSJWVBO-TWEOD4I-KDDMNRB-QEBOP6T-BYPGYTX-RAAYGAW";
};
"c3" = {
id = "D3C3YII-A3QGUNF-LHOGZNX-GJ4ZF3X-VVLMNY5-BBKF3BO-KNHKJMD-EA5QYQJ";
};
"zippy" = {
id = "WXDYZWN-JG2OBQH-CC42RMM-LPJGTS6-Y2BV37J-TYSLHL4-VHGYL5M-URI42QJ";
};
};
folders = {
"wordpress" = {
path = "/data/sync/wordpress";
devices = [
"c1"
"c2"
"c3"
"zippy"
];
ignorePerms = false;
versioning = {
type = "staggered";
params = {
cleanInterval = "3600";
maxAge = "15768000";
};
};
};
};
};
};
}

View File

@@ -1,7 +1,9 @@
{ pkgs, lib, ... }:
{
boot.loader.systemd-boot = {
enable = true;
configurationLimit = 5;
memtest86.enable = lib.mkIf (pkgs.stdenv.hostPlatform.system == "x86_64-linux") true;
};
boot.loader.efi.canTouchEfiVariables = true;
}

View File

@@ -15,8 +15,9 @@
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCdZ9dHN+DamoyRAIS8v7Ph85KyJ9zYdgwoqkp7F+smEJEdDKboHE5LA49IDQk4cgkR5xNEMtxANpJm+AXNAhQOPVl/w57vI/Z+TBtSvDoj8LuAvKjmmrPfok2iyD2IIlbctcw8ypn1revZwDb1rBFefpbbZdr5h+75tVqqmNebzxk6UQsfL++lU8HscWwYKzxrrom5aJL6wxNTfy7/Htkt4FHzoKAc5gcB2KM/q0s6NvZzX9WtdHHwAR1kib2EekssjDM9VLecX75Xhtbp+LrHOJKRnxbIanXos4UZUzaJctdNTcOYzEVLvV0BCYaktbI+uVvJcC0qo28bXbHdS3rTGRu8CsykFneJXnrrRIJw7mYWhJSTV9bf+6j/lnFNAurbiYmd4SzaTgbGjj2j38Gr/CTsyv8Rho7P3QUWbRRZnn4a7eVPtjGagqwIwS59YDxRcOy2Wdsw35ry/N2G802V7Cr3hUqeaAIev2adtn4FaG72C8enacYUeACPEhi7TYdsDzuuyt31W7AQa5Te4Uda20rTa0Y9N5Lw85uGB2ebbdYWlO2CqI/m+xNYcPkKqL7zZILz782jDw1sxWd/RUbEgJNrWjsKZ7ybiEMmhpw5vLiMGOeqQWIT6cBCNjocmW0ocU+FBLhhioyrvuZOyacoEZLoklatsL0DMkvvkbT0Ew== petru@paler.net"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH+QbeQG/gTPJ2sIMPgZ3ZPEirVo5qX/carbZMKt50YN petru@happy"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDZjL47pUIks2caErnbFYv+McJcWd+GSydzAXHZEtL8s JuiceSSH"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqULSU2VWUXSrHzFhs9pdXWZPtP/RS9gx7zz/zD/GDG petru@Workshop"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOOQ2EcJ+T+7BItZl89oDYhq7ZW4B9KuQVCy2DuQaPKR ppetru@sparky"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFRYVOfrqk2nFSyiu7TzU23ql8D6TfXICFpMIEvPbNsc JuiceSSH"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINBIqK6+aPIbmviJPWP8PI/k8GmaC7RO8v2ENnsK8sJx ppetru@beefy"
];
};
}

View File

@@ -0,0 +1,55 @@
{ pkgs, ... }:
{
# Service to wait for DNS resolution to be actually functional
# This is needed because network-online.target and wait-online.service
# don't guarantee DNS works - they only check that interfaces are configured.
#
# Problem: NFS mounts using Consul DNS names (data-services.service.consul)
# fail at boot because DNS resolution isn't ready even though network is "online"
#
# Solution: Actively test DNS resolution before considering network truly ready
systemd.services.wait-for-dns-ready = {
description = "Wait for DNS resolution to be functional";
after = [
"systemd-networkd-wait-online.service"
"systemd-resolved.service"
"network-online.target"
];
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = pkgs.writeShellScript "wait-for-dns-ready" ''
# Test DNS resolution by attempting to resolve data-services.service.consul
# This ensures the full DNS path works: interface → gateway → Consul DNS
echo "Waiting for DNS resolution to be ready..."
for i in {1..30}; do
# Use getent which respects /etc/nsswitch.conf and systemd-resolved
if ${pkgs.glibc.bin}/bin/getent hosts data-services.service.consul >/dev/null 2>&1; then
echo "DNS ready: data-services.service.consul resolved successfully"
exit 0
fi
# Also test a public DNS name to distinguish between general DNS failure
# vs Consul-specific issues (helpful for debugging)
if ! ${pkgs.glibc.bin}/bin/getent hosts www.google.com >/dev/null 2>&1; then
echo "Attempt $i/30: General DNS not working yet, waiting..."
else
echo "Attempt $i/30: General DNS works but Consul DNS not ready yet, waiting..."
fi
sleep 1
done
echo "Warning: DNS not fully ready after 30 seconds"
echo "NFS mounts with 'nofail' option will handle this gracefully"
exit 0 # Don't block boot - let nofail mounts handle DNS failures
'';
};
};
}

35
common/wifi.nix Normal file
View File

@@ -0,0 +1,35 @@
{ config, lib, ... }:
{
sops.secrets.wifi-password-pi = {
sopsFile = ./../secrets/wifi.yaml;
};
networking.wireless = {
enable = true;
secretsFile = config.sops.secrets.wifi-password-pi.path;
networks = {
"pi" = {
pskRaw = "ext:pi";
};
};
# Only enable on wireless interface, not ethernet
interfaces = [ "wlan0" ];
};
# Prefer wifi over ethernet, but keep ethernet as fallback
networking.dhcpcd.extraConfig = ''
# Prefer wlan0 over ethernet interfaces
interface wlan0
metric 100
interface eth0
metric 200
'';
# Persist wireless configuration across reboots (for impermanence)
environment.persistence.${config.custom.impermanence.persistPath} = {
files = [
"/etc/wpa_supplicant.conf"
];
};
}

View File

@@ -1,5 +1,12 @@
{ pkgs, inputs, ... }:
{
# Workstation profile: Development workstation configuration
# Adds development tools and emulation on top of minimal-node
imports = [
./minimal-node.nix
./unattended-encryption.nix
];
environment.systemPackages = with pkgs; [
wget
deploy-rs

55
docs/AUTH_SETUP.md Normal file
View File

@@ -0,0 +1,55 @@
# Authentication Setup
SSO for homelab services using OIDC.
## Architecture
**Pocket ID** (`pocket-id.v.paler.net`) - Lightweight OIDC provider, data in `/data/services/pocket-id`
**Traefik** - Uses `traefik-oidc-auth` plugin (v0.16.0) to protect services
- Plugin downloaded from GitHub at startup, cached in `/data/services/traefik/plugins-storage`
- Middleware config in `/data/services/traefik/rules/middlewares.yml`
- Protected services add tag: `traefik.http.routers.<name>.middlewares=oidc-auth@file`
## Flow
1. User hits protected service → Traefik intercepts
2. Redirects to Pocket ID for login
3. Pocket ID returns OIDC token
4. Traefik validates the token and forwards the request with the `X-Oidc-Username` header (quick check below)
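A minimal way to sanity-check this flow from the outside, assuming a service that already carries the `oidc-auth@file` middleware (the hostname below is only an example):
```bash
# An unauthenticated request to a protected service should not return content;
# it should come back as a redirect (302/307) whose Location header points at Pocket ID.
curl -sI https://whoami.v.paler.net | head -n 5
```
If the `Location` header points anywhere other than `pocket-id.v.paler.net`, the middleware is probably not attached to that router.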
## Protected Services
Use `oidc-auth@file` middleware (grep codebase for full list):
- Wikis (TiddlyWiki instances)
- Media stack (Radarr, Sonarr, Plex, etc.)
- Infrastructure (Traefik dashboard, Loki, Jupyter, Unifi)
## Key Files
- `services/pocket-id.hcl` - OIDC provider
- `services/traefik.hcl` - Plugin declaration
- `/data/services/traefik/rules/middlewares.yml` - Middleware definitions (oidc-auth, simple-auth fallback)
## Cold Start Notes
- Traefik needs internet to download plugin on first start
- Pocket ID needs `/data/services` NFS mounted
- Pocket ID down = all protected services inaccessible
## Troubleshooting
**Infinite redirects**: Check `TRUST_PROXY=true` on Pocket ID
**Plugin not loading**: Clear the cache in `/data/services/traefik/plugins-storage/` and restart Traefik (sketch below)
**401 after login**: Verify client ID/secret in middlewares.yml matches Pocket ID client config
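A rough sketch of the plugin-cache reset, assuming the Nomad job is named `traefik` (adjust path and job name to your deployment):
```bash
# Remove the cached plugin download so Traefik re-fetches it on startup
rm -rf /data/services/traefik/plugins-storage/*
# Restart the Traefik job so it starts with a clean plugin cache
nomad job restart traefik
# On older Nomad versions: nomad alloc restart <traefik-alloc-id>
```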
## Migration History
- Previous: Authentik with forwardAuth (removed Nov 2024)
- Current: Pocket ID + traefik-oidc-auth (simpler, lighter)
---
*Manage users/clients via Pocket ID UI. Basic auth fallback available via `simple-auth` middleware.*

206
docs/CICD_SETUP.md Normal file
View File

@@ -0,0 +1,206 @@
# CI/CD Setup for Nomad Services
Guide for adding automated builds and deployments to a service.
## Prerequisites
### 1. Service Repository
Your service needs a `flake.nix` that exports a Docker image:
```nix
{
outputs = { self, nixpkgs, ... }:
let pkgs = nixpkgs.legacyPackages.x86_64-linux; in
{
# The workflow looks for this output by default
dockerImage = pkgs.dockerTools.buildImage {
name = "gitea.v.paler.net/alo/<service>";
tag = "latest";
# ... image config
};
};
}
```
**Important**: Use `extraCommands` instead of `runAsRoot` in your Docker build - the CI runner doesn't have KVM.
### 2. Nomad Job
Your job in `services/<name>.hcl` needs:
```hcl
job "<service>" {
# Required: UUID changes trigger deployments
meta {
uuid = uuidv4()
}
# Required: enables deployment tracking and auto-rollback
update {
max_parallel = 1
health_check = "checks"
min_healthy_time = "30s"
healthy_deadline = "5m"
auto_revert = true
}
# Required: pulls new image on each deployment
task "app" {
config {
force_pull = true
}
# Recommended: health check for deployment validation
service {
check {
type = "http"
path = "/healthz"
interval = "10s"
timeout = "5s"
}
}
}
}
```
## Quick Start
### 1. Create Workflow
Add `.gitea/workflows/deploy.yaml` to your service repo:
```yaml
name: Deploy
on:
push:
branches: [master]
workflow_dispatch:
jobs:
deploy:
uses: alo/alo-cluster/.gitea/workflows/deploy-nomad.yaml@master
with:
service_name: <your-service> # Must match Nomad job ID
secrets: inherit
```
### 2. Add Secrets
In Gitea → Your Repo → Settings → Actions → Secrets, add:
| Secret | Value |
|--------|-------|
| `REGISTRY_USERNAME` | Your Gitea username |
| `REGISTRY_PASSWORD` | Gitea access token with `packages:write` |
| `NOMAD_ADDR` | `http://nomad.service.consul:4646` |
### 3. Push
Push to `master` branch. The workflow will:
1. Build your Docker image with Nix
2. Push to Gitea registry
3. Update the Nomad job to trigger deployment
4. Monitor until deployment succeeds or fails
## Workflow Options
The shared workflow accepts these inputs:
| Input | Default | Description |
|-------|---------|-------------|
| `service_name` | (required) | Nomad job ID |
| `flake_output` | `dockerImage` | Flake output to build |
| `registry` | `gitea.v.paler.net` | Container registry |
Example with custom flake output:
```yaml
jobs:
deploy:
uses: alo/alo-cluster/.gitea/workflows/deploy-nomad.yaml@master
with:
service_name: myservice
flake_output: packages.x86_64-linux.docker
secrets: inherit
```
## How It Works
```
Push to master
Build: nix build .#dockerImage
Push: skopeo → gitea.v.paler.net/alo/<service>:latest
Deploy: Update job meta.uuid → Nomad creates deployment
Monitor: Poll deployment status for up to 5 minutes
Success: Deployment healthy
OR
Failure: Nomad auto-reverts to previous version
```
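If you want to follow a deployment by hand while the workflow runs (or reproduce the monitor step after a manual `nomad run`), the standard Nomad CLI is enough; `<service>` is the job ID:
```bash
# List deployments for the job and note the ID of the latest one
nomad job deployments <service>
# Inspect that deployment until it reports "successful" or "failed"
nomad deployment status <deployment-id>
# Watch allocation/health state while the deployment progresses
watch -n 5 nomad job status <service>
```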
## Troubleshooting
### Build fails with KVM error
```
Required system: 'x86_64-linux' with features {kvm}
```
Use `extraCommands` instead of `runAsRoot` in your `docker.nix`:
```nix
# Bad - requires KVM
runAsRoot = ''
mkdir -p /tmp
'';
# Good - no KVM needed
extraCommands = ''
mkdir -p tmp
chmod 1777 tmp
'';
```
### No deployment created
Ensure your Nomad job has the `update` stanza with `auto_revert = true`.
### Image not updating
Check that `force_pull = true` is set in the Nomad job's Docker config.
### Deployment fails health checks
- Check your `/healthz` endpoint works
- Increase `healthy_deadline` if startup is slow
- Check `nomad alloc logs <alloc-id>` for errors
### Workflow can't access alo-cluster
If Gitea can't pull the reusable workflow, you may need to make alo-cluster public or use a token. As a fallback, copy the workflow content directly.
## Manual Deployment
If CI fails, you can deploy manually:
```bash
cd <service-repo>
nix build .#dockerImage
skopeo copy --dest-authfile ~/.docker/config.json \
docker-archive:result \
docker://gitea.v.paler.net/alo/<service>:latest
nomad run /path/to/alo-cluster/services/<service>.hcl
```
## Rollback
Nomad auto-reverts on health check failure. For manual rollback:
```bash
nomad job history <service> # List versions
nomad job revert <service> <version> # Revert to specific version
```

1717
docs/CLUSTER_REVAMP.md Normal file

File diff suppressed because it is too large

288
docs/DIFF_CONFIGS.md Normal file
View File

@@ -0,0 +1,288 @@
# Configuration Diff Tool
Tool to compare all NixOS host configurations between current working tree and HEAD commit.
## Purpose
Before committing changes (especially refactors), verify that you haven't accidentally broken existing host configurations. This tool:
- Builds all host configurations in current state (with uncommitted changes)
- Builds all host configurations at HEAD (last commit)
- Uses `nvd` to show readable diffs for each host
- Highlights which hosts changed and which didn't
## Usage
### Prerequisites
The script requires `nvd` to be in PATH. Use either:
**Option 1: direnv (recommended)**
```bash
# Allow direnv in the repository (one-time setup)
direnv allow
# direnv will automatically load the dev shell when you cd into the directory
cd /home/ppetru/projects/alo-cluster
# nvd is now in PATH
```
**Option 2: nix develop**
```bash
# Enter dev shell manually
nix develop
# Now run the script
./scripts/diff-configs.sh
```
### Quick Start
```bash
# Compare all hosts (summary)
./scripts/diff-configs.sh
# Compare with detailed path listing
./scripts/diff-configs.sh -v c1
# Compare with content diffs of changed files (deep mode)
./scripts/diff-configs.sh --deep c1
# Compare only x86_64 hosts (avoid slow ARM cross-compilation)
./scripts/diff-configs.sh c1 c2 c3 zippy chilly sparky
# Verbose mode with multiple hosts
./scripts/diff-configs.sh --verbose c1 c2 c3
# Via flake app
nix run .#diff-configs
# Show help
./scripts/diff-configs.sh --help
```
### Typical Workflow
```bash
# 1. Make changes to configurations
vim common/impermanence.nix
# 2. Stage changes (required for flake to see them)
git add common/impermanence.nix
# 3. Check what would change if you committed now
# For quick feedback, compare only x86_64 hosts first:
./scripts/diff-configs.sh c1 c2 c3 zippy chilly sparky
# 4. Review output, make adjustments if needed
# 5. If changes look good and affect ARM hosts, check those too:
./scripts/diff-configs.sh stinky alo-cloud-1
# 6. Commit when satisfied
git commit -m "Refactor impermanence config"
```
## Output Explanation
### No Changes
```
━━━ c1 ━━━
Building current... done
Building HEAD... done
✓ No changes
```
This host's configuration is identical between current and HEAD.
### Changes Detected
```
━━━ stinky ━━━
Building current... done
Building HEAD... done
⚠ Configuration changed
<<< /nix/store/abc-nixos-system-stinky-25.05 (HEAD)
>>> /nix/store/xyz-nixos-system-stinky-25.05 (current)
Version changes:
[C] octoprint: 1.9.3 -> 1.10.0
[A+] libcamera: ∅ -> 0.1.0
Closure size: 1500 -> 1520 (5 paths added, 2 paths removed, +3, +15.2 MB)
```
Legend:
- `[C]` - Changed package version
- `[A+]` - Added package
- `[R-]` - Removed package
- `[U.]` - Updated (same version, rebuilt)
### Verbose Mode (--verbose)
With `-v` or `--verbose`, also shows the actual store paths that changed:
```
━━━ c1 ━━━
Building current... done
Building HEAD... done
⚠ Configuration changed
[nvd summary as above]
Changed store paths:
Removed (17 paths):
- config.fish
- system-units
- home-manager-generation
- etc-fuse.conf
... and 13 more
Added (17 paths):
- config.fish
- system-units
- home-manager-generation
- etc-fuse.conf
... and 13 more
```
This is useful when nvd shows "No version changes" but paths still changed (e.g., refactors that rebuild config files).
### Deep Mode (--deep)
With `-d` or `--deep`, shows actual content diffs of changed files within store paths (implies verbose):
```
━━━ c1 ━━━
Building current... done
Building HEAD... done
⚠ Configuration changed
[nvd summary and path listing as above]
Content diffs of changed files:
▸ etc-fuse.conf
@@ -1,2 +1,2 @@
-user_allow_other
+#user_allow_other
mount_max = 1000
▸ nixos-system-c1-25.05
activate:
@@ -108,7 +108,7 @@
echo "setting up /etc..."
-/nix/store/...-perl/bin/perl /nix/store/...-setup-etc.pl /nix/store/abc-etc/etc
+/nix/store/...-perl/bin/perl /nix/store/...-setup-etc.pl /nix/store/xyz-etc/etc
▸ unit-dbus.service
dbus.service:
@@ -1,5 +1,5 @@
[Service]
+Environment="LD_LIBRARY_PATH=/nix/store/.../systemd/lib"
Environment="LOCALE_ARCHIVE=..."
```
**What it shows**:
- Matches changed paths by basename (e.g., both have "config.fish")
- Diffs important files: activate scripts, etc/*, *.conf, *.fish, *.service, *.nix
- Shows unified diff format (lines added/removed)
- Limits to first 50 lines per file
**When to use**:
- When you need to know **what exactly changed** in config files
- Debugging unexpected configuration changes
- Reviewing refactors that don't change package versions
- Understanding why a host rebuilt despite "No version changes"
### Build Failures
```
━━━ broken-host ━━━
Building current... FAILED
Error: attribute 'foo' missing
```
If a host fails to build, the error is shown and the script continues with other hosts.
## How It Works
1. **Discovers hosts**: Queries `deploy.nodes` from flake to get all configured hosts
2. **Creates worktree**: Uses `git worktree` to check out HEAD in a temporary directory
3. **Builds configurations**: Builds `config.system.build.toplevel` for each host in both locations
4. **Compares with nvd**: Runs `nvd diff` to show package-level changes
5. **Cleans up**: Removes the temporary worktree automatically (a simplified sketch of these steps is shown below)
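Conceptually, each host comparison reduces to something like this (a hand-written sketch of the steps above, not the script's exact code; `c1` is used as the example host):
```bash
# Build the host from HEAD via a throwaway worktree, build it from the working tree,
# then let nvd summarize the package-level differences.
git worktree add /tmp/head-checkout HEAD
nix build /tmp/head-checkout#nixosConfigurations.c1.config.system.build.toplevel -o /tmp/result-head
nix build .#nixosConfigurations.c1.config.system.build.toplevel -o /tmp/result-current
nvd diff /tmp/result-head /tmp/result-current
git worktree remove /tmp/head-checkout
```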
## Important Notes
### Git Staging Required
Flakes only evaluate files that are tracked by git. To make changes visible:
```bash
# Stage new files
git add new-file.nix
# Stage changes to existing files
git add modified-file.nix
# Or stage everything
git add .
```
Unstaged changes to tracked files **are** visible (flake uses working tree content).
### Performance
- First run may be slow (building all configurations)
- Subsequent runs benefit from Nix evaluation cache
- Typical runtime: 1-5 minutes depending on changes
- **ARM cross-compilation is slow**: Use host filtering to avoid building ARM hosts when not needed
- Example: `./scripts/diff-configs.sh c1 c2 c3` (x86_64 only, fast)
- vs `./scripts/diff-configs.sh` (includes stinky/alo-cloud-1, slow)
### When to Use
**Good use cases**:
- Refactoring shared modules (like impermanence)
- Updating common configurations
- Before committing significant changes
- Verifying deploy target consistency
**Not needed for**:
- Adding a single new host
- Trivial one-host changes
- Documentation updates
## Troubleshooting
### "Not in a git repository"
```bash
cd /home/ppetru/projects/alo-cluster
./scripts/diff-configs.sh
```
### "No changes detected"
All changes are already committed. Stage some changes first:
```bash
git add .
```
### Build failures for all hosts
Check flake syntax:
```bash
nix flake check
```
### nvd not found
Install nvd:
```bash
nix profile install nixpkgs#nvd
```
(Already included in workstation-node.nix packages)
## Related Tools
- `nvd` - Package diff tool (used internally)
- `nix diff-closures` - Low-level closure diff
- `nix store diff-closures` - Alternative diff command
- `deploy-rs` - Actual deployment tool
## See Also
- `common/global/show-changelog.nix` - Shows changes during system activation
- `docs/RASPBERRY_PI_SD_IMAGE.md` - SD image building process

354
docs/HOMELAB_AGENT.md Normal file
View File

@@ -0,0 +1,354 @@
# ABOUTME: Vision and design document for an AI agent that manages the homelab cluster.
# ABOUTME: Covers emergent capabilities, technical approach, and implementation strategy.
# Homelab Agent: Vision and Design
## The Core Idea
Not automation. Not "LLM-powered autocomplete for infrastructure." Emergent capabilities.
The same shift Claude Code brought to programming: you describe outcomes, it handles implementation. You become a "product manager" for your infrastructure instead of an "infrastructure engineer."
The cluster stops being infrastructure you manage and becomes an environment that responds to intent.
## What Makes This Different From Automation
**Automation**: "If disk > 90%, delete old logs"
**Emergent**: "Disk is 95% full. What's using space? ...Postgres WAL. Can I safely checkpoint? Last backup was 2h ago, load is low, yes. Running checkpoint... down to 60%. I should note that WAL retention might need tuning."
The difference:
- Novel problem-solving (not pattern matching)
- Contextual safety reasoning
- Adaptation to the specific situation
- Learning for the future
## Examples of Genuinely New Capabilities
### 1. Intent-Driven Infrastructure
> "I want to run Synapse for Matrix"
Agent figures out: Nomad job spec, storage location, Traefik routing, TLS, Consul registration, backup config. Creates it, deploys it, validates it.
You don't need to know Nomad job format or Traefik labels. You describe the outcome.
### 2. Proactive Evolution (The Best One)
The agent doesn't wait for problems or instructions:
- "Synapse 1.98 has a security fix. I've tested it in a local build, no config changes needed. Deploy?"
- "Your NFS server has been primary for 47 days. Want me to test failover to make sure it still works?"
- "I noticed arr services all have the same resource limits but Sonarr consistently uses more. Adjusted."
- "There's a new NixOS module for Traefik that simplifies your current setup. Here's the diff."
Not monitoring. Stewardship.
### 3. The Cluster Has Opinions
> You: "I want to add Plex"
>
> Agent: "You already have Jellyfin, which does the same thing. If you want Plex specifically for its mobile app, I can set it up to share Jellyfin's media library. Or if you want to switch entirely, I can migrate watch history. What's the actual goal?"
Not a command executor. A collaborator that understands your system.
### 4. "Bring This Into the Cluster"
You're running something in Docker on a random VM:
> "Bring this into the cluster"
Agent: connects, inspects, figures out dependencies, writes Nomad job, sets up storage, migrates data, routes traffic, validates, decommissions old instance.
You didn't need to know how.
### 5. Cross-Cutting Changes
> "Add authentication to all public-facing services"
Agent identifies which services are public, understands the auth setup (Pocket ID + traefik-oidc-auth), modifies each service's config, tests that auth works.
Single coherent change across everything, without knowing every service yourself.
### 6. Emergent Debugging
Not runbooks. Actual reasoning:
> "The blog is slow"
Agent checks service health (fine), node resources (fine), network latency (fine), database queries (ah, slow query), traces to missing index, adds index, validates performance improved.
Solved a problem nobody wrote a runbook for.
### 7. Architecture Exploration
> "What if we added a third Nomad server for better quorum?"
Agent reasons about current topology, generates the config, identifies what would change, shows blast radius. Thinking partner for infrastructure decisions.
## Why Nix Makes This Possible
Traditional infrastructure: state is scattered and implicit. Nix: everything is declared.
- **Full system understanding** - agent can read the flake and understand EVERYTHING
- **Safe experimentation** - build without deploying, rollback trivially
- **Reproducibility** - "what was the state 3 days ago?" can be rebuilt exactly
- **Composition** - agent can generate valid configs that compose correctly
- **The ecosystem** - 80k+ packages, thousands of modules the agent can navigate
> "I want a VPN that works with my phone"
Agent knows Nix, finds WireGuard module, configures it, generates QR codes, opens firewall. You didn't learn WireGuard.
## The Validation Pattern
Just like code has linting and tests, infrastructure actions need validation:
| Phase | Code | Infrastructure |
|-------|------|----------------|
| Static | Lint, typecheck | Config parses, secrets exist, no port conflicts |
| Pre-flight | — | Cluster healthy, dependencies up, quorum intact |
| Post-action | Unit tests | Service started, health checks pass, metrics flowing |
| Invariants | CI | NFS mounted, Consul quorum, replication current |
The agent can take actions confidently because it validates outcomes.
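As a concrete (if simplified) sketch, a post-action check for a freshly deployed Nomad service plus a couple of the invariants above could be as small as this; the job name and health URL are placeholders:
```bash
# Post-action: did the job converge and is its health endpoint answering?
nomad job status <service>
curl -fsS --max-time 5 https://<service>.v.paler.net/healthz && echo "health check OK"
# Invariant spot-checks
findmnt /data/services >/dev/null && echo "NFS mounted"
consul members >/dev/null && echo "Consul reachable"
```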
## The Reality Check
Some of this works today. Some would fail spectacularly. Some would fail silently and idiotically. Just like Claude Code for coding.
Therefore:
- Tight loop with the human operator
- Assume the human is competent and knowledgeable
- Agent amplifies expertise, doesn't replace it
- Escalate when uncertain
## Technical Approach
### Runtime: Claude Code (Not Agent SDK)
Two options were considered:
| Tool | Pro/Max Subscription | API Billing |
|------|---------------------|-------------|
| Claude Code CLI | Yes | Yes |
| Claude Agent SDK | No | Required |
Claude Code can use existing Max subscription. Agent SDK requires separate API billing.
For v1, use Claude Code as the runtime:
```bash
claude --print "prompt" \
--allowedTools "Bash,Read,Edit" \
--permission-mode acceptEdits
```
Graduate to Agent SDK later if limitations are hit.
### Trigger Architecture
On-demand Claude Code sessions, triggered by:
- **Timer** - periodic health/sanity check
- **Alert** - alertmanager webhook
- **Event** - systemd OnFailure, consul watch
- **Manual** - invoke with a goal
Each trigger provides context and a goal. Claude Code does the rest.
### Structure
```
agent/
├── triggers/
│ ├── scheduled-check # systemd timer
│ ├── on-alert # webhook handler
│ └── on-failure # systemd OnFailure target
├── gather-context.sh # snapshot of cluster state
└── goals/
├── health-check.md # verify health, fix if safe
├── incident.md # investigate alert, fix or escalate
└── proactive.md # look for improvements
```
### Example: Scheduled Health Check
```bash
#!/usr/bin/env bash
CONTEXT=$(./gather-context.sh)
GOAL=$(cat goals/health-check.md)
claude --print "
## Context
$CONTEXT
## Goal
$GOAL
## Constraints
- You can read any file in this repo
- You can run nomad/consul/systemctl commands
- You can edit Nix/HCL files and run deploy
- Before destructive actions, validate with nix build or nomad plan
- If uncertain about safety, output a summary and stop
"
```
### Context Gathering
```bash
#!/usr/bin/env bash
echo "=== Nomad Jobs ==="
nomad job status
echo "=== Consul Members ==="
consul members
echo "=== Failed Systemd Units ==="
systemctl --failed
echo "=== Recent Errors (last hour) ==="
journalctl --since "1 hour ago" -p err --no-pager | tail -100
```
## Edge Cases and the Nix Promise
The NixOS promise mostly works, but sometimes doesn't:
- Mount option changes that require reboot
- Transition states where switch fails even if end state is correct
- Partial application where switch "succeeds" but change didn't take effect
This is where the agent adds value: it can detect when a change needs special handling, apply the appropriate strategy, and verify the change actually took effect.
## Capturing Knowledge
Document edge cases as they're discovered:
```markdown
## CIFS/NFS mount option changes
Switch may fail or succeed without effect. Strategy:
1. Try normal deploy
2. If mount options don't match after, reboot required
3. If deploy fails with mount busy, local switch + reboot
```
The agent reads this, uses it as context, but can also reason about novel situations.
## Path to CI/CD
Eventually: push to main triggers deploy via agent.
```
push to main
|
build all configs (mechanical)
|
agent: "what changed? is this safe to auto-deploy?"
|
├─ clean change -> deploy, validate, done
├─ needs reboot -> deploy, schedule reboot, validate after
├─ risky change -> notify for manual approval
└─ failed -> diagnose, retry with different strategy, or escalate
|
post-deploy verification
|
notification
```
The agent is the intelligence layer on top of mechanical CI/CD.
## Research: What Others Are Doing (January 2026)
### Existing Projects & Approaches
**n8n + Ollama Stack**
The most common pattern is n8n (workflow orchestration) + Ollama (local LLM). Webhooks from
monitoring (Netdata/Prometheus) trigger AI-assisted diagnosis. Philosophy from one practitioner:
"train an employee, not a bot" — build trust, gradually grant autonomy.
Sources:
- [Virtualization Howto: Self-Healing Home Lab](https://www.virtualizationhowto.com/2025/10/how-i-built-a-self-healing-home-lab-that-fixes-itself/)
- [addROM: AI Agent for Homelab with n8n](https://addrom.com/unleashing-the-power-of-an-ai-agent-for-homelab-management-with-n8n/)
**Local Infrastructure Agent (Kelcode)**
Architecture: user question → tool router → query processor → LLM response. Connects to
Kubernetes, Prometheus, Harbor Registry.
Key insight: "The AI's output definition must be perfectly synchronized with the software
it's trying to use." Their K8s tool failed because the prompt generated kubectl commands
while the code expected structured data objects.
Uses phi4-mini via Ollama for routing decisions after testing multiple models.
Source: [Kelcode: Building a Homelab Agentic Ecosystem](https://kelcode.co.uk/building-a-homelab-agentic-ecosystem-part1/)
**nixai**
AI assistant specifically for NixOS. Searches NixOS Wiki, Nixpkgs Manual, nix.dev, Home Manager
docs. Diagnoses issues from piped logs/errors. Privacy-first: defaults to local Ollama.
Limited scope — helper tool, not autonomous agent. But shows NixOS-specific tooling is possible.
Source: [NixOS Discourse: Introducing nixai](https://discourse.nixos.org/t/introducing-nixai-your-ai-powered-nixos-companion/65168)
**AI-Friendly Infrastructure (The Merino Wolf)**
Key insight: make infrastructure "AI-friendly" through structured documentation. CLAUDE.md
provides comprehensive context — "structured knowledge transfer."
Lessons:
- "Context investment pays dividends" — comprehensive documentation is the most valuable asset
- Layered infrastructure design mirrors how both humans and AI think
- Rule-based guidance enforces safety practices automatically
Source: [The Merino Wolf: AI-Powered Homelab](https://themerinowolf.com/posts/ai-powered-homelab/)
**Claude Code Infrastructure Patterns**
Solves "skills don't activate automatically" problem using hooks (UserPromptSubmit, PostToolUse)
+ skill-rules.json for auto-activation.
500-line rule with progressive disclosure: main file for high-level guidance, resource files
for deep dives. Claude loads materials incrementally as needed.
Persistence pattern across context resets using three-file structures (plan, context, tasks).
Born from 6 months managing TypeScript microservices (50k+ lines).
Source: [diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)
### Patterns That Work
- Local LLMs (Ollama) + workflow orchestration (n8n) is the popular stack
- Start with read-only/diagnostic agents, gradually add write access
- Pre-approved command lists for safety (e.g., 50 validated bash commands max)
- Structured documentation as foundation — AI is only as good as its context
- Multi-step tool use: agent plans, then executes steps, observing results
### What's Missing in the Space
- Nobody's doing true "emergent capabilities" yet — mostly tool routing
- Most projects are Kubernetes/Docker focused, not NixOS
- Few examples of proactive stewardship (our example #2)
- Limited examples of agents that understand the whole system coherently
### Community Skepticism
From Reddit discussions: doubts exist about using LLM agents in production. Although LLMs can
automate specific tasks, they frequently need human involvement for intricate decision-making.
This validates our approach: tight loop with a competent human, not autonomous operation.
### The Gap We'd Fill
- NixOS-native agent leveraging declarative config as source of truth
- True emergence — not just tool routing, but reasoning about novel situations
- Proactive evolution, not just reactive troubleshooting
- Tight human loop with a competent operator
## Next Steps
1. Build trigger infrastructure (systemd timer, basic webhook handler)
2. Write context gathering scripts
3. Define goal prompts for common scenarios
4. Test with scheduled health checks
5. Iterate based on what works and what doesn't
6. Document edge cases as they're discovered
7. Gradually expand scope as confidence grows

160
docs/MIGRATION_TODO.md Normal file
View File

@@ -0,0 +1,160 @@
# Cluster Revamp Migration TODO
Track migration progress from GlusterFS to NFS-based architecture.
See [CLUSTER_REVAMP.md](./CLUSTER_REVAMP.md) for detailed procedures.
## Phase 0: Preparation
- [x] Review cluster revamp plan
- [ ] Backup everything (kopia snapshots current)
- [ ] Document current state (nomad jobs, consul services)
## Phase 1: Convert fractal to NixOS (DEFERRED - do after GlusterFS migration)
- [ ] Document fractal's current ZFS layout
- [ ] Install NixOS on fractal
- [ ] Import ZFS pools (double1, double2, double3)
- [ ] Create fractal NixOS configuration
- [ ] Configure Samba server for media/shared/homes
- [ ] Configure Kopia backup server
- [ ] Deploy and verify fractal base config
- [ ] Join fractal to cluster (5-server quorum)
- [ ] Update all cluster configs for 5-server quorum
- [ ] Verify fractal fully operational
## Phase 2: Setup zippy storage layer
- [x] Create btrfs subvolume `/persist/services` on zippy
- [x] Configure NFS server on zippy (nfs-services-server.nix)
- [x] Configure Consul service registration for NFS
- [x] Setup btrfs replication to c1 (incremental, 5min intervals)
- [x] Fix replication script to handle SSH command restrictions
- [x] Setup standby storage on c1 (`/persist/services-standby`)
- [x] Configure c1 as standby (nfs-services-standby.nix)
- [x] Configure Kopia to exclude replication snapshots
- [x] Deploy and verify NFS server on zippy
- [x] Verify replication working to c1
- [ ] Setup standby storage on c2 (if desired)
- [ ] Configure replication to c2 (if desired)
## Phase 3: Migrate from GlusterFS to NFS
- [x] Update all nodes to mount NFS at `/data/services`
- [x] Deploy updated configs (NFS client on all nodes)
- [x] Stop all Nomad jobs temporarily
- [x] Copy data from GlusterFS to zippy NFS
- [x] Copy `/data/compute/appdata/*` → `/persist/services/appdata/`
- [x] Copy `/data/compute/config/*` → `/persist/services/config/`
- [x] Copy `/data/sync/wordpress` → `/persist/services/appdata/wordpress`
- [x] Verify data integrity
- [x] Verify NFS mounts working on all nodes
- [x] Stop GlusterFS volume
- [x] Delete GlusterFS volume
- [x] Remove GlusterFS from NixOS configs
- [x] Remove syncthing wordpress sync configuration (no longer used)
## Phase 4: Update and redeploy Nomad jobs
### Core Infrastructure (CRITICAL)
- [x] mysql.hcl - moved to zippy, using `/data/services`
- [x] postgres.hcl - migrated to `/data/services`
- [x] redis.hcl - migrated to `/data/services`
- [x] traefik.hcl - migrated to `/data/services`
- [x] authentik.hcl - stateless, no changes needed
### Monitoring Stack (HIGH)
- [x] prometheus.hcl - migrated to `/data/services`
- [x] grafana.hcl - migrated to `/data/services` (2025-10-23)
- [x] loki.hcl - migrated to `/data/services`
- [x] vector.hcl - removed glusterfs log collection (2025-10-23)
### Databases (HIGH)
- [x] clickhouse.hcl - migrated to `/data/services`
- [x] unifi.hcl - migrated to `/data/services` (includes mongodb)
### Web Applications (HIGH-MEDIUM)
- [x] wordpress.hcl - migrated to `/data/services`
- [x] gitea.hcl - migrated to `/data/services` (2025-10-23)
- [x] wiki.hcl - migrated to `/data/services` (2025-10-23)
- [x] plausible.hcl - stateless, no changes needed
### Web Applications (LOW, may be deprecated)
- [x] vikunja.hcl - migrated to `/data/services` (2025-10-23, not running)
### Media Stack (MEDIUM)
- [x] media.hcl - migrated to `/data/services`
### Utility Services (MEDIUM-LOW)
- [x] evcc.hcl - migrated to `/data/services`
- [x] weewx.hcl - migrated to `/data/services` (2025-10-23)
- [x] code-server.hcl - migrated to `/data/services`
- [x] beancount.hcl - migrated to `/data/services`
- [x] adminer.hcl - stateless, no changes needed
- [x] maps.hcl - migrated to `/data/services`
- [x] netbox.hcl - migrated to `/data/services`
- [x] farmos.hcl - migrated to `/data/services` (2025-10-23)
- [x] urbit.hcl - migrated to `/data/services`
- [x] webodm.hcl - migrated to `/data/services` (2025-10-23, not running)
- [x] velutrack.hcl - migrated to `/data/services`
- [x] resol-gateway.hcl - migrated to `/data/services` (2025-10-23)
- [x] igsync.hcl - migrated to `/data/services` (2025-10-23)
- [x] jupyter.hcl - migrated to `/data/services` (2025-10-23, not running)
- [x] whoami.hcl - stateless test service, no changes needed
### Backup Jobs (HIGH)
- [x] mysql-backup - moved to zippy, verified
- [x] postgres-backup.hcl - migrated to `/data/services`
### Host Volume Definitions (CRITICAL)
- [x] common/nomad.nix - consolidated `appdata` and `code` volumes into single `services` volume (2025-10-23)
### Verification
- [ ] All services healthy in Nomad
- [ ] All services registered in Consul
- [ ] Traefik routes working
- [ ] Database jobs running on zippy (verify via nomad alloc status)
- [ ] Media jobs running on fractal (verify via nomad alloc status)
## Phase 5: Convert sunny to NixOS (OPTIONAL - can defer)
- [ ] Document current sunny setup (ethereum containers/VMs)
- [ ] Backup ethereum data
- [ ] Install NixOS on sunny
- [ ] Restore ethereum data to `/persist/ethereum`
- [ ] Create sunny container-based config (besu, lighthouse, rocketpool)
- [ ] Deploy and verify ethereum stack
- [ ] Monitor sync status and validation
## Phase 6: Verification and cleanup
- [ ] Test NFS failover procedure (zippy → c1)
- [ ] Verify backups include `/persist/services` data
- [ ] Verify backups exclude replication snapshots
- [ ] Update documentation (README.md, architecture diagrams)
- [x] Clean up old GlusterFS data (only after everything verified!)
- [x] Remove old glusterfs directories from all nodes
## Post-Migration Checklist
- [ ] All 5 servers in quorum (consul members)
- [ ] NFS mounts working on all nodes
- [ ] Btrfs replication running (check systemd timers on zippy)
- [ ] Critical services up (mysql, postgres, redis, traefik, authentik)
- [ ] Monitoring working (prometheus, grafana, loki)
- [ ] Media stack on fractal
- [ ] Database jobs on zippy
- [ ] Consul DNS working (dig @localhost -p 8600 data-services.service.consul)
- [ ] Backups running (kopia snapshots include /persist/services)
- [ ] GlusterFS removed (no processes, volumes deleted)
- [ ] Documentation updated
---
**Last updated**: 2025-10-25
**Current phase**: Phase 3 & 4 complete! GlusterFS removed, all services on NFS
**Note**: Phase 1 (fractal NixOS conversion) deferred until after GlusterFS migration is complete
## Migration Summary
**All services migrated to `/data/services` (30 total):**
mysql, mysql-backup, postgres, postgres-backup, redis, clickhouse, prometheus, grafana, loki, vector, unifi, wordpress, gitea, wiki, traefik, evcc, weewx, netbox, farmos, webodm, jupyter, vikunja, urbit, code-server, beancount, velutrack, maps, media, resol-gateway, igsync
**Stateless/no changes needed (4 services):**
authentik, adminer, plausible, whoami
**Configuration changes:**
- common/nomad.nix: consolidated `appdata` and `code` volumes into single `services` volume
- vector.hcl: removed glusterfs log collection

438
docs/NFS_FAILOVER.md Normal file
View File

@@ -0,0 +1,438 @@
# NFS Services Failover Procedures
This document describes how to fail over the `/data/services` NFS server between hosts and how to fail back.
## Architecture Overview
- **Primary NFS Server**: Typically `zippy`
  - Exports `/persist/services` via NFS
  - Has local bind mount: `/data/services` -> `/persist/services` (same path as clients)
  - Registers `data-services.service.consul` in Consul
  - Sets Nomad node meta: `storage_role = "primary"`
  - Replicates snapshots to standbys every 5 minutes via btrfs send (see the sketch below)
  - **Safety check**: Refuses to start if another NFS server is already active in Consul
- **Standby**: Typically `c1`
  - Receives snapshots at `/persist/services-standby/services@<timestamp>`
  - Can be promoted to NFS server during failover
  - No special Nomad node meta (not primary)
- **Clients**: All cluster nodes (c1, c2, c3, zippy)
  - Mount `/data/services` from `data-services.service.consul:/persist/services`
  - Automatically connect to whoever is registered in Consul
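The 5-minute replication mentioned above is, conceptually, something like the following. This is only a minimal sketch of what the `replicate-services-to-<standby>` units (presumably defined by `nfs-services-server.nix`) do each cycle; the standby name is an example:
```bash
# Hypothetical sketch of one replication cycle from the primary to standby c1.
# Snapshot names and destination paths follow the conventions described above.
STANDBY=c1
SNAP="/persist/services@$(date +%Y%m%d-%H%M%S)"
PREV=$(ls -d /persist/services@* 2>/dev/null | sort | tail -1)

# Read-only snapshot of the live subvolume
btrfs subvolume snapshot -r /persist/services "$SNAP"

if [ -n "$PREV" ]; then
  # Incremental send relative to the previous snapshot
  btrfs send -p "$PREV" "$SNAP" | ssh "root@$STANDBY" "btrfs receive /persist/services-standby"
else
  # First replication: full send
  btrfs send "$SNAP" | ssh "root@$STANDBY" "btrfs receive /persist/services-standby"
fi
```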
### Nomad Job Constraints
Jobs that need to run on the primary storage node should use:
```hcl
constraint {
  attribute = "${meta.storage_role}"
  value     = "primary"
}
```
This is useful for:
- Database jobs (mysql, postgres, redis) that benefit from local storage
- Jobs that need guaranteed fast disk I/O
During failover, the `storage_role = "primary"` meta attribute moves to the new NFS server, and Nomad automatically reschedules constrained jobs to the new primary.
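To see which node currently carries the meta attribute (for example after a failover), a check along these lines should work (a sketch; exact output formatting may differ between Nomad versions):
```bash
# List node IDs, then inspect the candidate node's metadata
nomad node status
nomad node status -verbose <node-id> | grep storage_role
```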
## Prerequisites
- Standby has been receiving snapshots (check: `ls /persist/services-standby/services@*`)
- Last successful replication was recent (< 5-10 minutes)
---
## Failover: Promoting Standby to Primary
**Scenario**: `zippy` is down and you need to promote `c1` to be the NFS server.
### Step 1: Choose Latest Snapshot
On the standby (c1):
```bash
ssh c1
sudo ls -lt /persist/services-standby/services@* | head -5
```
Find the most recent snapshot. Note the timestamp to estimate data loss (typically < 5 minutes).
### Step 2: Promote Snapshot to Read-Write Subvolume
On c1:
```bash
# Find the latest snapshot
LATEST=$(sudo ls -t /persist/services-standby/services@* | head -1)
# Create writable subvolume from snapshot
sudo btrfs subvolume snapshot "$LATEST" /persist/services
# Verify
ls -la /persist/services
```
### Step 3: Update NixOS Configuration
Edit your configuration to swap the NFS server role:
**In `hosts/c1/default.nix`**:
```nix
imports = [
# ... existing imports ...
# ../../common/nfs-services-standby.nix # REMOVE THIS
../../common/nfs-services-server.nix # ADD THIS
];
# Add standbys if desired (optional - can leave empty during emergency)
nfsServicesServer.standbys = []; # Or ["c2"] to add a new standby
```
**Optional: Prepare zippy config for when it comes back**:
In `hosts/zippy/default.nix` (can do this later too):
```nix
imports = [
# ... existing imports ...
# ../../common/nfs-services-server.nix # REMOVE THIS
../../common/nfs-services-standby.nix # ADD THIS
];
# Add the replication key from c1 (get it from c1:/persist/root/.ssh/btrfs-replication.pub)
nfsServicesStandby.replicationKeys = [
"ssh-ed25519 AAAA... root@c1-replication"
];
```
### Step 4: Deploy Configuration
```bash
# From your workstation
deploy -s '.#c1'
# If zippy is still down, updating its config will fail, but that's okay
# You can update it later when it comes back
```
### Step 5: Verify NFS Server is Running
On c1:
```bash
sudo systemctl status nfs-server
sudo showmount -e localhost
dig @localhost -p 8600 data-services.service.consul # Should show c1's IP
```
### Step 6: Verify Clients Can Access
From any node:
```bash
df -h | grep services
ls /data/services
```
The mount should automatically reconnect via Consul DNS.
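To see which server a client actually resolved, `findmnt` shows the active NFS source:
```bash
findmnt /data/services
# SOURCE should show data-services.service.consul:/persist/services (now resolving to c1)
```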
### Step 7: Check Nomad Jobs
```bash
nomad job status mysql
nomad job status postgres
# Verify critical services are healthy
# Jobs constrained to ${meta.storage_role} = "primary" will automatically
# reschedule to c1 once it's deployed with the NFS server module
```
**Recovery Time Objective (RTO)**: ~10-15 minutes
**Recovery Point Objective (RPO)**: Last replication interval (5 minutes max)
**Note**: Jobs with the `storage_role = "primary"` constraint will automatically move to c1 because it now has that node meta attribute. No job spec changes needed!
---
## What Happens When zippy Comes Back?
**IMPORTANT**: If zippy reboots while still configured as NFS server, it will **refuse to start** the NFS service because it detects c1 is already active in Consul.
You'll see this error in `journalctl -u nfs-server`:
```
ERROR: Another NFS server is already active at 192.168.1.X
This host (192.168.1.2) is configured as NFS server but should be standby.
To fix:
1. If this is intentional (failback), first demote the other server
2. Update this host's config to use nfs-services-standby.nix instead
3. Sync data from active server before promoting this host
```
This is a **safety feature** to prevent split-brain and data corruption.
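Conceptually, the startup guard does something like this (a simplified sketch, not the actual module code):
```bash
# Simplified sketch of the split-brain guard: compare the active Consul
# registration against this host's own address before starting the NFS server.
ACTIVE=$(dig +short @localhost -p 8600 data-services.service.consul | head -1)
MY_IP=$(ip -4 -o addr show scope global | awk '{print $4}' | cut -d/ -f1 | head -1)

if [ -n "$ACTIVE" ] && [ "$ACTIVE" != "$MY_IP" ]; then
  echo "ERROR: Another NFS server is already active at $ACTIVE" >&2
  exit 1
fi
```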
### Options when zippy comes back:
**Option A: Keep c1 as primary** (zippy becomes standby)
1. Update zippy's config to use `nfs-services-standby.nix`
2. Deploy to zippy
3. c1 will start replicating to zippy
**Option B: Fail back to zippy as primary**
Follow the "Failing Back to Original Primary" procedure below.
---
## Failing Back to Original Primary
**Scenario**: `zippy` is repaired and you want to move the NFS server role back from `c1` to `zippy`.
### Step 1: Sync Latest Data from c1 to zippy
On c1 (current primary):
```bash
# Create readonly snapshot of current state
sudo btrfs subvolume snapshot -r /persist/services /persist/services@failback-$(date +%Y%m%d-%H%M%S)
# Find the snapshot
FAILBACK=$(sudo ls -t /persist/services@failback-* | head -1)
# Send to zippy (use root SSH key if available, or generate temporary key)
sudo btrfs send "$FAILBACK" | ssh root@zippy "btrfs receive /persist/"
```
On zippy:
```bash
# Verify snapshot arrived
ls -la /persist/services@failback-*
# Create writable subvolume from the snapshot
FAILBACK=$(ls -t /persist/services@failback-* | head -1)
sudo btrfs subvolume snapshot "$FAILBACK" /persist/services
# Verify
ls -la /persist/services
```
### Step 2: Update NixOS Configuration
Swap the roles back:
**In `hosts/zippy/default.nix`**:
```nix
imports = [
# ... existing imports ...
# ../../common/nfs-services-standby.nix # REMOVE THIS
../../common/nfs-services-server.nix # ADD THIS
];
nfsServicesServer.standbys = ["c1"];
```
**In `hosts/c1/default.nix`**:
```nix
imports = [
# ... existing imports ...
# ../../common/nfs-services-server.nix # REMOVE THIS
../../common/nfs-services-standby.nix # ADD THIS
];
nfsServicesStandby.replicationKeys = [
"ssh-ed25519 AAAA... root@zippy-replication" # Get from zippy:/persist/root/.ssh/btrfs-replication.pub
];
```
### Step 3: Deploy Configurations
```bash
# IMPORTANT: Deploy c1 FIRST to demote it
deploy -s '.#c1'
# Wait for c1 to stop NFS server
ssh c1 sudo systemctl status nfs-server # Should be inactive
# Then deploy zippy to promote it
deploy -s '.#zippy'
```
The order matters! If you deploy zippy first, it will see c1 is still active and refuse to start.
### Step 4: Verify Failback
Check Consul DNS points to zippy:
```bash
dig @c1 -p 8600 data-services.service.consul # Should show zippy's IP
```
Check clients are mounting from zippy:
```bash
for host in c1 c2 c3; do
ssh $host "df -h | grep services"
done
```
### Step 5: Clean Up Temporary Snapshots
On c1:
```bash
# Remove the failback snapshot and the promoted subvolume
sudo btrfs subvolume delete /persist/services@failback-*
sudo btrfs subvolume delete /persist/services
```
---
## Adding a New Standby
**Scenario**: You want to add `c2` as an additional standby.
### Step 1: Create Standby Subvolume on c2
```bash
ssh c2
sudo btrfs subvolume create /persist/services-standby
```
### Step 2: Update c2 Configuration
**In `hosts/c2/default.nix`**:
```nix
imports = [
# ... existing imports ...
../../common/nfs-services-standby.nix
];
nfsServicesStandby.replicationKeys = [
"ssh-ed25519 AAAA... root@zippy-replication" # Get from current NFS server
];
```
### Step 3: Update NFS Server Configuration
On the current NFS server (e.g., zippy), update the standbys list:
**In `hosts/zippy/default.nix`**:
```nix
nfsServicesServer.standbys = ["c1" "c2"]; # Added c2
```
### Step 4: Deploy
```bash
deploy -s '.#c2'
deploy -s '.#zippy'
```
The next replication cycle (within 5 minutes) will do a full send to c2, then switch to incremental.
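To confirm the new standby is receiving data (the `replicate-services-to-c2` unit name follows the naming pattern used for c1 and is an assumption):
```bash
# On the NFS server: check the replication unit for the new standby
sudo journalctl -u replicate-services-to-c2 --since "15 minutes ago"
# On c2: the first full snapshot should appear after one cycle
ssh c2 'sudo ls -lt /persist/services-standby/services@* | head'
```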
---
## Troubleshooting
### Replication Failed
Check the replication service logs:
```bash
# On NFS server
sudo journalctl -u replicate-services-to-c1 -f
```
Common issues:
- SSH key not found → Run key generation step (see stateful-commands.txt)
- Permission denied → Check authorized_keys on standby
- Snapshot already exists → Old snapshot with same timestamp, wait for next cycle
### Clients Can't Mount
Check Consul:
```bash
dig @localhost -p 8600 data-services.service.consul
consul catalog services | grep data-services
```
If Consul isn't resolving:
- NFS server might not have registered → Check `sudo systemctl status nfs-server`
- Consul agent might be down → Check `sudo systemctl status consul`
### Mount is Stale
Force remount:
```bash
sudo systemctl restart data-services.mount
```
Or unmount and let automount handle it:
```bash
sudo umount /data/services
ls /data/services # Triggers automount
```
### Split-Brain Prevention: NFS Server Won't Start
If you see:
```
ERROR: Another NFS server is already active at 192.168.1.X
```
This is **intentional** - the safety check is working! You have two options:
1. **Keep the other server as primary**: Update this host's config to be a standby instead
2. **Fail back to this host**: First demote the other server, sync data, then deploy both hosts in correct order
---
## Monitoring
### Check Replication Status
On NFS server:
```bash
# List recent snapshots
ls -lt /persist/services@* | head
# Check last replication run
sudo systemctl status replicate-services-to-c1
# Check replication logs
sudo journalctl -u replicate-services-to-c1 --since "1 hour ago"
```
On standby:
```bash
# List received snapshots
ls -lt /persist/services-standby/services@* | head
# Check how old the latest snapshot is
stat /persist/services-standby/services@* | grep Modify | head -1
```
### Verify NFS Exports
```bash
sudo showmount -e localhost
```
Should show:
```
/persist/services 192.168.1.0/24
```
### Check Consul Registration
```bash
consul catalog services | grep data-services
dig @localhost -p 8600 data-services.service.consul
```

View File

@@ -0,0 +1,98 @@
# Raspberry Pi SD Image Building and Deployment
Guide for building and deploying NixOS SD card images for Raspberry Pi hosts (e.g., stinky).
## Overview
Raspberry Pi hosts use a different deployment strategy than regular NixOS hosts:
- **First deployment**: Build and flash an SD card image
- **Subsequent updates**: Use `deploy-rs` like other hosts
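For example, once stinky has booted the flashed image, updates are pushed the same way as for any other host:
```bash
deploy -s '.#stinky'
```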
## Architecture
### Storage Layout
**Partition structure** (automatically created by NixOS):
- `/boot/firmware` - FAT32 partition (label: `FIRMWARE`)
  - Contains Raspberry Pi firmware, U-Boot bootloader, device trees
- `/` - tmpfs (in-memory, ephemeral root)
  - 2GB RAM disk, wiped on every boot
- `/nix` - ext4 partition (label: `NIXOS_SD`)
  - Nix store and persistent data
  - Contains `/nix/persist` directory for impermanence
### Impermanence with tmpfs
Unlike btrfs-based hosts that use `/persist`, Pi hosts use `/nix/persist`:
- Root filesystem is tmpfs (no disk writes, auto-wiped)
- Single ext4 partition mounted at `/nix`
- Persistent data stored in `/nix/persist/` (directory, not separate mount)
- Better for SD card longevity (fewer writes)
**Persisted paths**:
- `/nix/persist/var/lib/nixos` - System state
- `/nix/persist/home/ppetru` - User home directory
- `/nix/persist/etc` - SSH host keys, machine-id
- Service-specific: `/nix/persist/var/lib/octoprint`, etc.
## Building the SD Image
### Prerequisites
- ARM64 emulation enabled on build machine:
```nix
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
```
(Already configured in `workstation-node.nix`)
### Build Command
```bash
# Build SD image for stinky
nix build .#packages.aarch64-linux.stinky-sdImage
# Result location
ls -lh result/sd-image/
# nixos-sd-image-stinky-25.05-*.img.zst (compressed with zstd)
```
**Build location**: Defined in `flake.nix`:
```nix
packages.aarch64-linux.stinky-sdImage =
self.nixosConfigurations.stinky.config.system.build.sdImage;
```
## Flashing the SD Card
### Find SD Card Device
```bash
# Before inserting SD card
lsblk
# Insert SD card, then check again
lsblk
# Look for new device, typically:
# - /dev/sdX (USB SD card readers)
# - /dev/mmcblk0 (built-in SD card slots)
```
**Warning**: Double-check the device! Wrong device = data loss.
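One extra sanity check before writing (adjust the device name to match your reader):
```bash
# Confirm the size, model, and transport match the SD card, not a system disk
lsblk -o NAME,SIZE,MODEL,TRAN /dev/sdX
```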
### Flash Image
```bash
# Decompress and flash in one command
zstd -d -c result/sd-image/*.img.zst | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
# Or decompress first, then flash
unzstd result/sd-image/*.img.zst
sudo dd if=result/sd-image/*.img of=/dev/sdX bs=4M status=progress conv=fsync
```
### Eject SD Card
```bash
sudo eject /dev/sdX
```

7
docs/TODO Normal file
View File

@@ -0,0 +1,7 @@
* remote docker images used, can't come up if internet is down
* local docker images pulled from gitea, can't come up if gitea isn't up (yet)
* traefik-oidc-auth plugin downloaded from GitHub at startup (cached in /data/services/traefik/plugins-storage)
* renovate system of some kind
* vector (or other log ingestion) everywhere, consider moving it off docker if possible
* monitor backup-persist success/fail
* gitea organization is public -> at least from the internal network, anyone can pull images and probably also clone repos. there should be absolutely zero secrets in the repos (the ones there now should be rotated before being stored somewhere else), and the nomad workers should authenticate to pull images

500
flake.lock generated
View File

@@ -1,5 +1,43 @@
{
"nodes": {
"base16-schemes": {
"flake": false,
"locked": {
"lastModified": 1696158499,
"narHash": "sha256-5yIHgDTPjoX/3oDEfLSQ0eJZdFL1SaCfb9d6M0RmOTM=",
"owner": "tinted-theming",
"repo": "base16-schemes",
"rev": "a9112eaae86d9dd8ee6bb9445b664fba2f94037a",
"type": "github"
},
"original": {
"owner": "tinted-theming",
"repo": "base16-schemes",
"type": "github"
}
},
"browser-previews": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": [
"nixpkgs-unstable"
],
"systems": "systems"
},
"locked": {
"lastModified": 1768591869,
"narHash": "sha256-Tph/rfG5Oebz1VQJiJXHMQEFpzXYV98d5fRkmepQK3Y=",
"owner": "nix-community",
"repo": "browser-previews",
"rev": "46e6d0179b14a83401120a4481d170582e8f35f4",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "browser-previews",
"type": "github"
}
},
"deploy-rs": {
"inputs": {
"flake-compat": "flake-compat",
@@ -9,11 +47,11 @@
"utils": "utils"
},
"locked": {
"lastModified": 1727447169,
"narHash": "sha256-3KyjMPUKHkiWhwR91J1YchF6zb6gvckCAY1jOE+ne0U=",
"lastModified": 1766051518,
"narHash": "sha256-znKOwPXQnt3o7lDb3hdf19oDo0BLP4MfBOYiWkEHoik=",
"owner": "serokell",
"repo": "deploy-rs",
"rev": "aa07eb05537d4cd025e2310397a6adcedfe72c76",
"rev": "d5eff7f948535b9c723d60cd8239f8f11ddc90fa",
"type": "github"
},
"original": {
@@ -25,16 +63,16 @@
"devshell": {
"inputs": {
"nixpkgs": [
"nixvim",
"ethereum-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1728330715,
"narHash": "sha256-xRJ2nPOXb//u1jaBnDP56M7v5ldavjbtR6lfGqSvcKg=",
"lastModified": 1764011051,
"narHash": "sha256-M7SZyPZiqZUR/EiiBJnmyUbOi5oE/03tCeFrTiUZchI=",
"owner": "numtide",
"repo": "devshell",
"rev": "dd6b80932022cea34a019e2bb32f6fa9e494dfef",
"rev": "17ed8d9744ebe70424659b0ef74ad6d41fc87071",
"type": "github"
},
"original": {
@@ -50,11 +88,11 @@
]
},
"locked": {
"lastModified": 1730190761,
"narHash": "sha256-o5m5WzvY6cGIDupuOvjgNSS8AN6yP2iI9MtUC6q/uos=",
"lastModified": 1768923567,
"narHash": "sha256-GVJ0jKsyXLuBzRMXCDY6D5J8wVdwP1DuQmmvYL/Vw/Q=",
"owner": "nix-community",
"repo": "disko",
"rev": "3979285062d6781525cded0f6c4ff92e71376b55",
"rev": "00395d188e3594a1507f214a2f15d4ce5c07cb28",
"type": "github"
},
"original": {
@@ -63,14 +101,41 @@
"type": "github"
}
},
"ethereum-nix": {
"inputs": {
"devshell": "devshell",
"flake-parts": "flake-parts",
"flake-utils": "flake-utils_2",
"foundry-nix": "foundry-nix",
"nixpkgs": [
"nixpkgs-unstable"
],
"nixpkgs-unstable": "nixpkgs-unstable",
"systems": "systems_3",
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1768914064,
"narHash": "sha256-MbRHoA4AWpDYebuGAAWtw/3UB11TZK195m1Vh7IIMkg=",
"owner": "nix-community",
"repo": "ethereum.nix",
"rev": "f254a491fa97bc45d7ccb36547314f6cf57f7b70",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "ethereum.nix",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"lastModified": 1733328505,
"narHash": "sha256-NeCCThCEP3eCl2l/+27kNNK7QrwZB1IJCrXfrbv5oqU=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"rev": "ff81ac966bb2cae68946d5ed5fc4994f96d0ffec",
"type": "github"
},
"original": {
@@ -79,21 +144,25 @@
"type": "github"
}
},
"flake-compat_2": {
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"revCount": 57,
"type": "tarball",
"url": "https://api.flakehub.com/f/pinned/edolstra/flake-compat/1.0.1/018afb31-abd1-7bff-a5e4-cff7e18efb7a/source.tar.gz"
},
"original": {
"type": "tarball",
"url": "https://flakehub.com/f/edolstra/flake-compat/1.tar.gz"
}
},
"flake-parts": {
"inputs": {
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1768135262,
"narHash": "sha256-PVvu7OqHBGWN16zSi6tEmPwwHQ4rLPU9Plvs8/1TUBY=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "80daad04eddbbf5a4d883996a73f3f542fa437ac",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-parts_2": {
"inputs": {
"nixpkgs-lib": [
"nixvim",
@@ -101,11 +170,11 @@
]
},
"locked": {
"lastModified": 1727826117,
"narHash": "sha256-K5ZLCyfO/Zj9mPFldf3iwS6oZStJcU4tSpiXTMYaaL0=",
"lastModified": 1765835352,
"narHash": "sha256-XswHlK/Qtjasvhd1nOa1e8MgZ8GS//jBoTqWtrS1Giw=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "3d04084d54bedc3d6b8b736c70ef449225c361b1",
"rev": "a34fae9c08a15ad73f295041fec82323541400a9",
"type": "github"
},
"original": {
@@ -116,14 +185,17 @@
},
"flake-utils": {
"inputs": {
"systems": "systems_2"
"systems": [
"browser-previews",
"systems"
]
},
"locked": {
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
@@ -132,55 +204,50 @@
"type": "github"
}
},
"git-hooks": {
"flake-utils_2": {
"inputs": {
"flake-compat": [
"nixvim",
"flake-compat"
],
"gitignore": "gitignore",
"nixpkgs": [
"nixvim",
"nixpkgs"
],
"nixpkgs-stable": [
"nixvim",
"nixpkgs"
"systems": [
"ethereum-nix",
"systems"
]
},
"locked": {
"lastModified": 1729104314,
"narHash": "sha256-pZRZsq5oCdJt3upZIU4aslS9XwFJ+/nVtALHIciX/BI=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "3c3e88f0f544d6bb54329832616af7eb971b6be6",
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"gitignore": {
"foundry-nix": {
"inputs": {
"flake-utils": [
"ethereum-nix",
"flake-utils"
],
"nixpkgs": [
"nixvim",
"git-hooks",
"ethereum-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"lastModified": 1767517855,
"narHash": "sha256-LnZosb07bahYAyFw07JFzSXslx9j1dCe+npWDZdPFZg=",
"owner": "shazow",
"repo": "foundry.nix",
"rev": "ee376e8a93f537c2865dda9811e748e4567a7aaf",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"owner": "shazow",
"ref": "monthly",
"repo": "foundry.nix",
"type": "github"
}
},
@@ -191,27 +258,52 @@
]
},
"locked": {
"lastModified": 1726989464,
"narHash": "sha256-Vl+WVTJwutXkimwGprnEtXc/s/s8sMuXzqXaspIGlwM=",
"lastModified": 1768603898,
"narHash": "sha256-vRV1dWJOCpCal3PRr86wE2WTOMfAhTu6G7bSvOsryUo=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "2f23fa308a7c067e52dfcc30a0758f47043ec176",
"rev": "2a63d0e9d2c72ac4d4150ebb242cf8d86f488c8c",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-25.11",
"repo": "home-manager",
"type": "github"
}
},
"home-manager_2": {
"inputs": {
"nixpkgs": [
"impermanence",
"nixpkgs"
]
},
"locked": {
"lastModified": 1768598210,
"narHash": "sha256-kkgA32s/f4jaa4UG+2f8C225Qvclxnqs76mf8zvTVPg=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "c47b2cc64a629f8e075de52e4742de688f930dc6",
"type": "github"
},
"original": {
"owner": "nix-community",
"ref": "release-24.05",
"repo": "home-manager",
"type": "github"
}
},
"impermanence": {
"inputs": {
"home-manager": "home-manager_2",
"nixpkgs": "nixpkgs"
},
"locked": {
"lastModified": 1729068498,
"narHash": "sha256-C2sGRJl1EmBq0nO98TNd4cbUy20ABSgnHWXLIJQWRFA=",
"lastModified": 1768835187,
"narHash": "sha256-6nY0ixjGjPQCL+/sUC1B1MRiO1LOI3AkRSIywm3i3bE=",
"owner": "nix-community",
"repo": "impermanence",
"rev": "e337457502571b23e449bf42153d7faa10c0a562",
"rev": "0d633a69480bb3a3e2f18c080d34a8fa81da6395",
"type": "github"
},
"original": {
@@ -220,52 +312,22 @@
"type": "github"
}
},
"ixx": {
"nix-colors": {
"inputs": {
"flake-utils": [
"nixvim",
"nuschtosSearch",
"flake-utils"
],
"nixpkgs": [
"nixvim",
"nuschtosSearch",
"nixpkgs"
]
"base16-schemes": "base16-schemes",
"nixpkgs-lib": "nixpkgs-lib_2"
},
"locked": {
"lastModified": 1729544999,
"narHash": "sha256-YcyJLvTmN6uLEBGCvYoMLwsinblXMkoYkNLEO4WnKus=",
"owner": "NuschtOS",
"repo": "ixx",
"rev": "65c207c92befec93e22086da9456d3906a4e999c",
"lastModified": 1707825078,
"narHash": "sha256-hTfge2J2W+42SZ7VHXkf4kjU+qzFqPeC9k66jAUBMHk=",
"owner": "misterio77",
"repo": "nix-colors",
"rev": "b01f024090d2c4fc3152cd0cf12027a7b8453ba1",
"type": "github"
},
"original": {
"owner": "NuschtOS",
"ref": "v0.0.5",
"repo": "ixx",
"type": "github"
}
},
"nix-darwin": {
"inputs": {
"nixpkgs": [
"nixvim",
"nixpkgs"
]
},
"locked": {
"lastModified": 1729982130,
"narHash": "sha256-HmLLQbX07rYD0RXPxbf3kJtUo66XvEIX9Y+N5QHQ9aY=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "2eb472230a5400c81d9008014888b4bff23bcf44",
"type": "github"
},
"original": {
"owner": "lnl7",
"repo": "nix-darwin",
"owner": "misterio77",
"repo": "nix-colors",
"type": "github"
}
},
@@ -276,11 +338,11 @@
]
},
"locked": {
"lastModified": 1729999765,
"narHash": "sha256-LYsavZXitFjjyETZoij8usXjTa7fa9AIF3Sk3MJSX+Y=",
"lastModified": 1765267181,
"narHash": "sha256-d3NBA9zEtBu2JFMnTBqWj7Tmi7R5OikoU2ycrdhQEws=",
"owner": "nix-community",
"repo": "nix-index-database",
"rev": "0e3a8778c2ee218eff8de6aacf3d2fa6c33b2d4f",
"rev": "82befcf7dc77c909b0f2a09f5da910ec95c5b78f",
"type": "github"
},
"original": {
@@ -289,29 +351,91 @@
"type": "github"
}
},
"nixos-hardware": {
"locked": {
"lastModified": 1768736227,
"narHash": "sha256-qgGq7CfrYKc3IBYQ7qp0Z/ZXndQVC5Bj0N8HW9mS2rM=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "d447553bcbc6a178618d37e61648b19e744370df",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"repo": "nixos-hardware",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1729973466,
"narHash": "sha256-knnVBGfTCZlQgxY1SgH0vn2OyehH9ykfF8geZgS95bk=",
"owner": "NixOS",
"lastModified": 1768564909,
"narHash": "sha256-Kell/SpJYVkHWMvnhqJz/8DqQg2b6PguxVWOuadbHCc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "cd3e8833d70618c4eea8df06f95b364b016d4950",
"rev": "e4bae1bd10c9c57b2cf517953ab70060a828ee6f",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.05",
"owner": "nixos",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1765674936,
"narHash": "sha256-k00uTP4JNfmejrCLJOwdObYC9jHRrr/5M/a/8L2EIdo=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "2075416fcb47225d9b68ac469a5c4801a9c4dd85",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixpkgs-lib_2": {
"locked": {
"lastModified": 1697935651,
"narHash": "sha256-qOfWjQ2JQSQL15KLh6D7xQhx0qgZlYZTYlcEiRuAMMw=",
"owner": "nix-community",
"repo": "nixpkgs.lib",
"rev": "e1e11fdbb01113d85c7f41cada9d2847660e3902",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixpkgs.lib",
"type": "github"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1729880355,
"narHash": "sha256-RP+OQ6koQQLX5nw0NmcDrzvGL8HDLnyXt/jHhL1jwjM=",
"lastModified": 1768456270,
"narHash": "sha256-NgaL2CCiUR6nsqUIY4yxkzz07iQUlUCany44CFv+OxY=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "f4606b01b39e09065df37905a2133905246db9ed",
"type": "github"
},
"original": {
"owner": "nixos",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-unstable_2": {
"locked": {
"lastModified": 1768564909,
"narHash": "sha256-Kell/SpJYVkHWMvnhqJz/8DqQg2b6PguxVWOuadbHCc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "18536bf04cd71abd345f9579158841376fdd0c5a",
"rev": "e4bae1bd10c9c57b2cf517953ab70060a828ee6f",
"type": "github"
},
"original": {
@@ -321,68 +445,57 @@
"type": "github"
}
},
"nixvim": {
"inputs": {
"devshell": "devshell",
"flake-compat": "flake-compat_2",
"flake-parts": "flake-parts",
"git-hooks": "git-hooks",
"home-manager": [
"home-manager"
],
"nix-darwin": "nix-darwin",
"nixpkgs": [
"nixpkgs-unstable"
],
"nuschtosSearch": "nuschtosSearch",
"treefmt-nix": "treefmt-nix"
},
"nixpkgs_2": {
"locked": {
"lastModified": 1730150629,
"narHash": "sha256-5afcjZhCy5EcCdNGKTPoUdywm2yppTSf7GwX/2Rq6Ig=",
"owner": "nix-community",
"repo": "nixvim",
"rev": "a4c3ad01cd0755dd1e93473d74efdd89a1cf5999",
"lastModified": 1768773494,
"narHash": "sha256-XsM7GP3jHlephymxhDE+/TKKO1Q16phz/vQiLBGhpF4=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "77ef7a29d276c6d8303aece3444d61118ef71ac2",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nixvim",
"owner": "NixOS",
"ref": "nixos-25.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nuschtosSearch": {
"nixvim": {
"inputs": {
"flake-utils": "flake-utils",
"ixx": "ixx",
"flake-parts": "flake-parts_2",
"nixpkgs": [
"nixvim",
"nixpkgs"
]
"nixpkgs-unstable"
],
"systems": "systems_4"
},
"locked": {
"lastModified": 1730044642,
"narHash": "sha256-DbyV9l3hkrSWcN34S6d9M4kAFss0gEHGtjqqMdG9eAs=",
"owner": "NuschtOS",
"repo": "search",
"rev": "e373332c1f8237fc1263901745b0fe747228c8ba",
"lastModified": 1768910181,
"narHash": "sha256-YRU0IHMzXluZxr0JDfq9jtblb4DV7MIB5wj2jYMFKQc=",
"owner": "nix-community",
"repo": "nixvim",
"rev": "5b138edcb2f1c3ed4b29eca3658f04f0639b98b3",
"type": "github"
},
"original": {
"owner": "NuschtOS",
"repo": "search",
"owner": "nix-community",
"repo": "nixvim",
"type": "github"
}
},
"root": {
"inputs": {
"browser-previews": "browser-previews",
"deploy-rs": "deploy-rs",
"disko": "disko",
"ethereum-nix": "ethereum-nix",
"home-manager": "home-manager",
"impermanence": "impermanence",
"nix-colors": "nix-colors",
"nix-index-database": "nix-index-database",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable",
"nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs_2",
"nixpkgs-unstable": "nixpkgs-unstable_2",
"nixvim": "nixvim",
"sops-nix": "sops-nix"
}
@@ -391,17 +504,14 @@
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-stable": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1729999681,
"narHash": "sha256-qm0uCtM9bg97LeJTKQ8dqV/FvqRN+ompyW4GIJruLuw=",
"lastModified": 1768863606,
"narHash": "sha256-1IHAeS8WtBiEo5XiyJBHOXMzECD6aaIOJmpQKzRRl64=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "1666d16426abe79af5c47b7c0efa82fd31bf4c56",
"rev": "c7067be8db2c09ab1884de67ef6c4f693973f4a2",
"type": "github"
},
"original": {
@@ -412,16 +522,16 @@
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"lastModified": 1680978846,
"narHash": "sha256-Gtqg8b/v49BFDpDetjclCYXm8mAnTrUzR0JnE2nv5aw=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"repo": "x86_64-linux",
"rev": "2ecfcac5e15790ba6ce360ceccddb15ad16d08a8",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"repo": "x86_64-linux",
"type": "github"
}
},
@@ -440,19 +550,49 @@
"type": "github"
}
},
"systems_3": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_4": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"treefmt-nix": {
"inputs": {
"nixpkgs": [
"nixvim",
"ethereum-nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1730025913,
"narHash": "sha256-Y9NtFmP8ciLyRsopcCx1tyoaaStKeq+EndwtGCgww7I=",
"lastModified": 1768158989,
"narHash": "sha256-67vyT1+xClLldnumAzCTBvU0jLZ1YBcf4vANRWP3+Ak=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "bae131e525cc8718da22fbeb8d8c7c43c4ea502a",
"rev": "e96d59dff5c0d7fddb9d113ba108f03c3ef99eca",
"type": "github"
},
"original": {
@@ -463,14 +603,14 @@
},
"utils": {
"inputs": {
"systems": "systems"
"systems": "systems_2"
},
"locked": {
"lastModified": 1701680307,
"narHash": "sha256-kAuep2h5ajznlPMD9rnQyffWG8EM/C73lejGofXvdM8=",
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "4022d587cbbfd70fe950c1e2083a02621806a725",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {

152
flake.nix
View File

@@ -5,12 +5,16 @@
deploy-rs.url = "github:serokell/deploy-rs";
deploy-rs.inputs.nixpkgs.follows = "nixpkgs";
impermanence.url = "github:nix-community/impermanence";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
disko.url = "github:nix-community/disko";
disko.inputs.nixpkgs.follows = "nixpkgs";
ethereum-nix = {
url = "github:nix-community/ethereum.nix";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
home-manager = {
url = "github:nix-community/home-manager/release-24.05";
url = "github:nix-community/home-manager/release-25.11";
inputs.nixpkgs.follows = "nixpkgs";
};
nix-index-database = {
@@ -20,13 +24,17 @@
nixvim = {
url = "github:nix-community/nixvim";
inputs.nixpkgs.follows = "nixpkgs-unstable";
inputs.home-manager.follows = "home-manager";
};
sops-nix = {
url = "github:Mic92/sops-nix";
inputs.nixpkgs.follows = "nixpkgs";
inputs.nixpkgs-stable.follows = "nixpkgs";
};
browser-previews = {
url = "github:nix-community/browser-previews";
inputs.nixpkgs.follows = "nixpkgs-unstable";
};
nix-colors.url = "github:misterio77/nix-colors";
nixos-hardware.url = "github:NixOS/nixos-hardware/master";
};
outputs =
@@ -36,55 +44,77 @@
nixpkgs-unstable,
deploy-rs,
disko,
ethereum-nix,
home-manager,
sops-nix,
impermanence,
sops-nix,
browser-previews,
nix-colors,
nixos-hardware,
...
}@inputs:
let
inherit (self);
overlay-unstable = final: prev: { unstable = nixpkgs-unstable.legacyPackages.${prev.system}; };
overlay-unstable = final: prev: {
unstable = import nixpkgs-unstable {
system = prev.stdenv.hostPlatform.system;
config.allowUnfree = true;
};
};
mkNixos =
system: modules:
overlay-browser-previews = final: prev: {
browser-previews = browser-previews.packages.${prev.stdenv.hostPlatform.system};
};
mkHost =
system: profile: modules:
let
# Profile parameter is only used by home-manager for user environment
# NixOS system configuration is handled via explicit imports in host configs
in
nixpkgs.lib.nixosSystem {
system = system;
modules = [
(
{ config, pkgs, ... }:
{
nixpkgs.overlays = [ overlay-unstable ];
nixpkgs.overlays = [ overlay-unstable overlay-browser-previews ];
nixpkgs.config.allowUnfree = true;
}
)
disko.nixosModules.disko
sops-nix.nixosModules.sops
impermanence.nixosModules.impermanence
] ++ modules;
specialArgs = {
inherit inputs self;
};
};
mkHMNixos =
system: modules:
mkNixos system ([
home-manager.nixosModules.home-manager
(
{ lib, ... }:
{
home-manager = {
useGlobalPkgs = true;
useUserPackages = true;
users.ppetru = {
imports = [
(inputs.impermanence + "/home-manager.nix")
inputs.nix-index-database.hmModules.nix-index
inputs.nixvim.homeManagerModules.nixvim
inputs.nix-index-database.homeModules.nix-index
inputs.nixvim.homeModules.nixvim
./home
] ++ lib.optionals (profile == "desktop") [
nix-colors.homeManagerModules.default
];
};
extraSpecialArgs = {
inherit profile nix-colors;
};
};
}
)
] ++ nixpkgs.lib.optionals (profile == "desktop") [
./common/desktop
] ++ modules;
specialArgs = {
inherit inputs self;
};
};
}] ++ modules);
pkgsFor =
system:
@@ -99,7 +129,7 @@
inherit system;
overlays = [
overlay-unstable
deploy-rs.overlay
deploy-rs.overlays.default
(self: super: {
deploy-rs = {
inherit (pkgsFor system) deploy-rs;
@@ -111,13 +141,18 @@
in
{
nixosConfigurations = {
c1 = mkHMNixos "x86_64-linux" [ ./hosts/c1 ];
c2 = mkHMNixos "x86_64-linux" [ ./hosts/c2 ];
c3 = mkHMNixos "x86_64-linux" [ ./hosts/c3 ];
alo-cloud-1 = mkHMNixos "aarch64-linux" [ ./hosts/alo-cloud-1 ];
zippy = mkHMNixos "x86_64-linux" [ ./hosts/zippy ];
chilly = mkHMNixos "x86_64-linux" [ ./hosts/chilly ];
kopia = mkNixos "x86_64-linux" [ ./hosts/kopia ];
c1 = mkHost "x86_64-linux" "minimal" [ ./hosts/c1 ];
c2 = mkHost "x86_64-linux" "minimal" [ ./hosts/c2 ];
c3 = mkHost "x86_64-linux" "minimal" [ ./hosts/c3 ];
alo-cloud-1 = mkHost "aarch64-linux" "cloud" [ ./hosts/alo-cloud-1 ];
zippy = mkHost "x86_64-linux" "minimal" [ ./hosts/zippy ];
chilly = mkHost "x86_64-linux" "workstation" [ ./hosts/chilly ];
sparky = mkHost "x86_64-linux" "minimal" [ ./hosts/sparky ];
beefy = mkHost "x86_64-linux" "desktop" [ ./hosts/beefy ];
stinky = mkHost "aarch64-linux" "minimal" [
nixos-hardware.nixosModules.raspberry-pi-4
./hosts/stinky
];
};
deploy = {
@@ -144,7 +179,8 @@
};
};
alo-cloud-1 = {
hostname = "49.13.163.72";
hostname = "alo-cloud-1";
#hostname = "49.13.163.72";
profiles = {
system = {
user = "root";
@@ -170,8 +206,62 @@
};
};
};
sparky = {
hostname = "sparky";
profiles = {
system = {
user = "root";
path = (deployPkgsFor "x86_64-linux").deploy-rs.lib.activate.nixos self.nixosConfigurations.sparky;
};
};
};
beefy = {
hostname = "beefy";
profiles = {
system = {
user = "root";
path = (deployPkgsFor "x86_64-linux").deploy-rs.lib.activate.nixos self.nixosConfigurations.beefy;
};
};
};
stinky = {
hostname = "stinky";
profiles = {
system = {
user = "root";
path = (deployPkgsFor "aarch64-linux").deploy-rs.lib.activate.nixos self.nixosConfigurations.stinky;
};
};
};
};
};
# SD card image for stinky (Raspberry Pi 4)
packages.aarch64-linux.stinky-sdImage = self.nixosConfigurations.stinky.config.system.build.sdImage;
# Apps - utility scripts
apps.x86_64-linux.diff-configs = {
type = "app";
program = "${(pkgsFor "x86_64-linux").writeShellScriptBin "diff-configs" (builtins.readFile ./scripts/diff-configs.sh)}/bin/diff-configs";
};
apps.aarch64-linux.diff-configs = {
type = "app";
program = "${(pkgsFor "aarch64-linux").writeShellScriptBin "diff-configs" (builtins.readFile ./scripts/diff-configs.sh)}/bin/diff-configs";
};
# Development shells
devShells.x86_64-linux.default = (pkgsFor "x86_64-linux").mkShell {
packages = with (pkgsFor "x86_64-linux"); [
nvd
];
};
devShells.aarch64-linux.default = (pkgsFor "aarch64-linux").mkShell {
packages = with (pkgsFor "aarch64-linux"); [
nvd
];
};
checks = builtins.mapAttrs (system: deployLib: deployLib.deployChecks self.deploy) deploy-rs.lib;

View File

@@ -1,7 +1,17 @@
{ pkgs, ... }:
{ pkgs, lib, profile ? "cli", ... }:
let
# Handle both file and directory imports for profiles
# desktop is a directory, others are files
profilePath =
if builtins.pathExists ./programs/${profile}/default.nix
then ./programs/${profile}
else ./programs/${profile}.nix;
in
{
imports = [ profilePath ];
home = {
packages = (import ./packages.nix { inherit pkgs; }).packages;
packages = (import ./packages.nix { inherit pkgs profile; }).packages;
stateVersion = "24.05"; # TODO: unify this with the references in flake.nix:inputs
sessionVariables = {
@@ -10,31 +20,27 @@
MOSH_SERVER_NETWORK_TMOUT = 604800;
NOMAD_ADDR = "http://nomad.service.consul:4646";
LESS = "-F -i -M -+S -R -w -X -z-4";
SYSTEMD_LESS = "FiM+SRwXz-4";
SYSTEMD_LESS = "FiM+SRwX";
NIX_LD = "${pkgs.glibc}/lib/ld-linux-x86-64.so.2";
NIX_LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [
pkgs.stdenv.cc.cc
];
GEMINI_API_KEY = "AIzaSyBZkifYOFNKCjROLa_GZyzQbB2EbEYIby4";
LLM_GEMINI_KEY = "AIzaSyBZkifYOFNKCjROLa_GZyzQbB2EbEYIby4";
PLAYWRIGHT_BROWSERS_PATH = "${pkgs.unstable.playwright-driver.browsers}";
NIXOS_OZONE_WL = "1";
};
shellAliases = {
reload-home-manager-config = "home-manager switch --flake ${builtins.toString ./.}";
};
persistence."/persist/home/ppetru" = {
directories = [
".cache/nix"
".cache/nix-index"
".config/sops/"
".docker/"
".local/share/fish"
".ssh"
"projects"
];
files = [ ];
allowOther = true;
file.".ssh/rc".text = ''
#!/bin/sh
if test "$SSH_AUTH_SOCK"; then
ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/ssh_auth_sock"
fi
'';
file.".ssh/rc".executable = true;
};
};
programs = import ./programs.nix { inherit pkgs; };
}

View File

@@ -1,21 +1,7 @@
{ pkgs }:
{ pkgs, profile ? "workstation" }:
let
profilePackages = import ./profiles/${profile}.nix { inherit pkgs; };
in
{
packages =
with pkgs;
[
direnv
fzf
git
home-manager
mosh
ripgrep
tmux
zsh
]
++ (with pkgs.fishPlugins; [
pure
# don't add failed commands to history
sponge
transient-fish
]);
packages = profilePackages.packages;
}

22
home/profiles/cloud.nix Normal file
View File

@@ -0,0 +1,22 @@
{ pkgs }:
let
corePkgs = with pkgs; [
direnv
fzf
git
mosh
ripgrep
tmux
zsh
];
fishPkgs = with pkgs.fishPlugins; [
pure
# don't add failed commands to history
sponge
transient-fish
];
in
{
packages = corePkgs ++ fishPkgs;
}

31
home/profiles/desktop.nix Normal file
View File

@@ -0,0 +1,31 @@
# ABOUTME: Desktop profile package list
# ABOUTME: Extends workstation with GUI and Wayland tools
{ pkgs }:
let
workstationProfile = import ./workstation.nix { inherit pkgs; };
# Hyprland ecosystem packages
hyprlandPkgs = with pkgs; [
hyprshot
hyprpicker
hyprsunset
brightnessctl
pamixer
playerctl
gnome-themes-extra
pavucontrol
wl-clip-persist
clipse
];
# Desktop GUI applications
desktopPkgs = with pkgs; [
browser-previews.google-chrome
nautilus
blueberry
libnotify
];
in
{
packages = workstationProfile.packages ++ hyprlandPkgs ++ desktopPkgs;
}

View File

@@ -0,0 +1,5 @@
{ pkgs }:
{
# Minimal profile: reuses server.nix for basic package list
packages = (import ./server.nix { inherit pkgs; }).packages;
}

22
home/profiles/server.nix Normal file
View File

@@ -0,0 +1,22 @@
{ pkgs }:
let
corePkgs = with pkgs; [
direnv
fzf
git
mosh
ripgrep
tmux
zsh
];
fishPkgs = with pkgs.fishPlugins; [
pure
# don't add failed commands to history
# sponge
transient-fish
];
in
{
packages = corePkgs ++ fishPkgs;
}

View File

@@ -0,0 +1,23 @@
{ pkgs }:
let
serverProfile = import ./server.nix { inherit pkgs; };
cliPkgs = with pkgs; [
ast-grep
yq
unstable.beads
unstable.claude-code
unstable.codex
unstable.gemini-cli
];
pythonEnv = pkgs.unstable.python3.withPackages (ps: [
ps.google-generativeai
ps.ipython
ps.llm
ps.llm-gemini
]);
in
{
packages = serverProfile.packages ++ cliPkgs ++ [ pythonEnv ];
}

8
home/programs/cloud.nix Normal file
View File

@@ -0,0 +1,8 @@
{ pkgs, ... }:
{
imports = [ ./server.nix ];
# Cloud-specific home-manager programs
# Currently uses server profile's minimal CLI setup
# Add cloud-specific customizations here if needed in the future
}

View File

@@ -0,0 +1,104 @@
# ABOUTME: Btop system monitor configuration with nix-colors theming
# ABOUTME: Creates a custom theme file and configures btop settings
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
palette = config.colorScheme.palette;
in
{
home.file.".config/btop/themes/${cfg.theme}.theme".text = ''
# Main text color
theme[main_fg]="${palette.base05}"
# Title color for boxes
theme[title]="${palette.base05}"
# Highlight color for keyboard shortcuts
theme[hi_fg]="${palette.base0D}"
# Background color of selected item in processes box
theme[selected_bg]="${palette.base01}"
# Foreground color of selected item in processes box
theme[selected_fg]="${palette.base05}"
# Color of inactive/disabled text
theme[inactive_fg]="${palette.base04}"
# Misc colors for processes box
theme[proc_misc]="${palette.base0D}"
# Box outline colors
theme[cpu_box]="${palette.base0B}"
theme[mem_box]="${palette.base09}"
theme[net_box]="${palette.base0E}"
theme[proc_box]="${palette.base0C}"
# Box divider line
theme[div_line]="${palette.base04}"
# Temperature graph colors
theme[temp_start]="${palette.base0B}"
theme[temp_mid]="${palette.base0A}"
theme[temp_end]="${palette.base08}"
# CPU graph colors
theme[cpu_start]="${palette.base0B}"
theme[cpu_mid]="${palette.base0A}"
theme[cpu_end]="${palette.base08}"
# Mem/Disk meters
theme[free_start]="${palette.base0B}"
theme[cached_start]="${palette.base0A}"
theme[available_start]="${palette.base09}"
theme[used_start]="${palette.base08}"
# Network graph colors
theme[download_start]="${palette.base0E}"
theme[download_mid]="${palette.base0D}"
theme[download_end]="${palette.base0C}"
theme[upload_start]="${palette.base0E}"
theme[upload_mid]="${palette.base0D}"
theme[upload_end]="${palette.base0C}"
'';
programs.btop = {
enable = true;
settings = {
color_theme = cfg.theme;
theme_background = false;
truecolor = true;
force_tty = false;
vim_keys = true;
rounded_corners = true;
graph_symbol = "braille";
shown_boxes = "cpu mem net proc";
update_ms = 2000;
proc_sorting = "cpu lazy";
proc_colors = true;
proc_gradient = false;
proc_per_core = false;
proc_mem_bytes = true;
proc_cpu_graphs = true;
show_uptime = true;
check_temp = true;
show_coretemp = true;
temp_scale = "celsius";
show_cpu_freq = true;
clock_format = "%X";
background_update = true;
mem_graphs = true;
show_swap = true;
swap_disk = true;
show_disks = true;
only_physical = true;
use_fstab = true;
show_io_stat = true;
net_auto = true;
net_sync = true;
show_battery = true;
log_level = "WARNING";
};
};
}

View File

@@ -0,0 +1,21 @@
# ABOUTME: Shared configuration values for desktop environment
# ABOUTME: Centralizes user info, theme, fonts, and display settings
{
user = {
fullName = "Petru Paler";
email = "petru@paler.net";
};
theme = "tokyo-night";
base16Theme = "tokyo-night-dark";
primaryFont = "Liberation Sans 11";
monoFont = "CaskaydiaMono Nerd Font";
scale = 1.5;
monitors = [ "DP-1,preferred,auto,1.5" ];
# Wallpaper for tokyo-night theme
wallpaper = "1-Pawel-Czerwinski-Abstract-Purple-Blue.jpg";
}

View File

@@ -0,0 +1,59 @@
# ABOUTME: Desktop environment home-manager configuration
# ABOUTME: Imports all desktop modules and sets up nix-colors theming
{ config, pkgs, lib, nix-colors, ... }:
let
cfg = import ./config.nix;
in
{
imports = [
../workstation.nix
./ghostty.nix
./hyprland
./waybar.nix
./wofi.nix
./mako.nix
./hyprpaper.nix
./hypridle.nix
./hyprlock.nix
./starship.nix
./vscode.nix
./btop.nix
./git.nix
];
# Set up nix-colors with our theme
colorScheme = nix-colors.colorSchemes.${cfg.base16Theme};
# Override ghostty to use unstable version (1.2.0+) for ssh-terminfo support
programs.ghostty.package = pkgs.unstable.ghostty;
# Extend ghostty configuration
programs.ghostty.settings = {
shell-integration-features = "ssh-terminfo";
};
# GTK theme (dark for tokyo-night)
gtk = {
enable = true;
theme = {
name = "Adwaita-dark";
package = pkgs.gnome-themes-extra;
};
};
# Enable neovim (placeholder for future config)
programs.neovim.enable = true;
# direnv
programs.direnv = {
enable = true;
nix-direnv.enable = true;
};
# zoxide (directory jumping)
programs.zoxide = {
enable = true;
enableBashIntegration = true;
};
}

View File

@@ -0,0 +1,60 @@
# ABOUTME: Ghostty terminal emulator configuration with nix-colors theming
# ABOUTME: Creates a custom color theme from the nix-colors palette
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
palette = config.colorScheme.palette;
in
{
programs.ghostty = {
enable = true;
settings = {
window-padding-x = 14;
window-padding-y = 14;
background-opacity = 0.95;
window-decoration = "none";
font-family = cfg.monoFont;
font-size = 12;
theme = "desktop-theme";
keybind = [
"ctrl+k=reset"
];
};
themes = {
desktop-theme = {
background = "#${palette.base00}";
foreground = "#${palette.base05}";
selection-background = "#${palette.base02}";
selection-foreground = "#${palette.base00}";
palette = [
"0=#${palette.base00}"
"1=#${palette.base08}"
"2=#${palette.base0B}"
"3=#${palette.base0A}"
"4=#${palette.base0D}"
"5=#${palette.base0E}"
"6=#${palette.base0C}"
"7=#${palette.base05}"
"8=#${palette.base03}"
"9=#${palette.base08}"
"10=#${palette.base0B}"
"11=#${palette.base0A}"
"12=#${palette.base0D}"
"13=#${palette.base0E}"
"14=#${palette.base0C}"
"15=#${palette.base07}"
"16=#${palette.base09}"
"17=#${palette.base0F}"
"18=#${palette.base01}"
"19=#${palette.base02}"
"20=#${palette.base04}"
"21=#${palette.base06}"
];
};
};
};
}

View File

@@ -0,0 +1,24 @@
# ABOUTME: Git and GitHub CLI configuration
# ABOUTME: Sets up git with user info and gh CLI integration
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
in
{
programs.git = {
enable = true;
settings = {
user.name = cfg.user.fullName;
user.email = cfg.user.email;
credential.helper = "store";
};
};
programs.gh = {
enable = true;
gitCredentialHelper = {
enable = true;
};
};
}

View File

@@ -0,0 +1,27 @@
# ABOUTME: Hypridle idle daemon configuration
# ABOUTME: Handles screen locking and DPMS after idle timeout
{ config, pkgs, ... }:
{
services.hypridle = {
enable = true;
settings = {
general = {
lock_cmd = "pidof hyprlock || hyprlock";
before_sleep_cmd = "loginctl lock-session";
after_sleep_cmd = "hyprctl dispatch dpms on";
};
listener = [
{
timeout = 300;
on-timeout = "loginctl lock-session";
}
{
timeout = 330;
on-timeout = "hyprctl dispatch dpms off";
on-resume = "hyprctl dispatch dpms on && brightnessctl -r";
}
];
};
};
}

View File

@@ -0,0 +1,17 @@
# ABOUTME: Hyprland autostart configuration
# ABOUTME: Defines programs to run at Hyprland startup
{ config, pkgs, ... }:
{
wayland.windowManager.hyprland.settings = {
exec-once = [
"hyprsunset"
"systemctl --user start hyprpolkitagent"
"wl-clip-persist --clipboard regular & clipse -listen"
];
exec = [
"pkill -SIGUSR2 waybar || waybar"
];
};
}

View File

@@ -0,0 +1,99 @@
# ABOUTME: Hyprland keybindings configuration
# ABOUTME: Defines keyboard and mouse shortcuts for window management
{ config, pkgs, ... }:
{
wayland.windowManager.hyprland.settings = {
bind = [
# Application launchers
"$mod, Space, exec, $menu"
"$mod, Return, exec, $terminal"
"$mod, E, exec, $fileManager"
"$mod, B, exec, $browser"
# Window management
"$mod, W, killactive,"
"$mod, BackSpace, killactive,"
"$mod, V, togglefloating,"
"$mod SHIFT, equal, fullscreen,"
"$mod, J, togglesplit,"
"$mod, P, pseudo,"
# Focus navigation
"$mod, left, movefocus, l"
"$mod, right, movefocus, r"
"$mod, up, movefocus, u"
"$mod, down, movefocus, d"
# Workspace switching
"$mod, 1, workspace, 1"
"$mod, 2, workspace, 2"
"$mod, 3, workspace, 3"
"$mod, 4, workspace, 4"
"$mod, 5, workspace, 5"
"$mod, 6, workspace, 6"
"$mod, 7, workspace, 7"
"$mod, 8, workspace, 8"
"$mod, 9, workspace, 9"
"$mod, 0, workspace, 10"
# Move window to workspace
"$mod SHIFT, 1, movetoworkspace, 1"
"$mod SHIFT, 2, movetoworkspace, 2"
"$mod SHIFT, 3, movetoworkspace, 3"
"$mod SHIFT, 4, movetoworkspace, 4"
"$mod SHIFT, 5, movetoworkspace, 5"
"$mod SHIFT, 6, movetoworkspace, 6"
"$mod SHIFT, 7, movetoworkspace, 7"
"$mod SHIFT, 8, movetoworkspace, 8"
"$mod SHIFT, 9, movetoworkspace, 9"
"$mod SHIFT, 0, movetoworkspace, 10"
# Workspace navigation
"$mod, comma, workspace, m-1"
"$mod, period, workspace, m+1"
# Window resize
"$mod, minus, splitratio, -0.1"
"$mod, equal, splitratio, +0.1"
# Lock screen
"$mod, Escape, exec, loginctl lock-session"
# Screenshots
", Print, exec, hyprshot -m region"
"SHIFT, Print, exec, hyprshot -m window"
"CTRL, Print, exec, hyprshot -m output"
# Color picker
"$mod SHIFT, C, exec, hyprpicker -a"
# Clipboard manager
"$mod SHIFT, V, exec, ghostty --class=clipse -e clipse"
];
bindm = [
# Mouse bindings for window management
"$mod, mouse:272, movewindow"
"$mod, mouse:273, resizewindow"
];
binde = [
# Repeatable bindings for media controls
", XF86AudioRaiseVolume, exec, wpctl set-volume -l 1.5 @DEFAULT_AUDIO_SINK@ 5%+"
", XF86AudioLowerVolume, exec, wpctl set-volume @DEFAULT_AUDIO_SINK@ 5%-"
", XF86AudioMute, exec, wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle"
# Brightness controls
", XF86MonBrightnessUp, exec, brightnessctl s +5%"
", XF86MonBrightnessDown, exec, brightnessctl s 5%-"
];
bindl = [
# Media player controls
", XF86AudioNext, exec, playerctl next"
", XF86AudioPrev, exec, playerctl previous"
", XF86AudioPlay, exec, playerctl play-pause"
];
};
}

View File

@@ -0,0 +1,39 @@
# ABOUTME: Hyprland window manager home-manager configuration
# ABOUTME: Imports all hyprland submodules for complete WM setup
{ config, pkgs, lib, ... }:
let
cfg = import ../config.nix;
in
{
imports = [
./bindings.nix
./autostart.nix
./input.nix
./looknfeel.nix
./windows.nix
./envs.nix
];
wayland.windowManager.hyprland = {
enable = true;
systemd.enable = true;
settings = {
# Monitor configuration
monitor = cfg.monitors;
# Default applications
"$terminal" = "ghostty";
"$fileManager" = "nautilus";
"$browser" = "google-chrome-stable --new-window --ozone-platform=wayland";
"$menu" = "wofi --show drun";
# Mod key
"$mod" = "SUPER";
};
};
# Hyprland polkit agent for privilege escalation
services.hyprpolkitagent.enable = true;
}

View File

@@ -0,0 +1,56 @@
# ABOUTME: Hyprland environment variables configuration
# ABOUTME: Sets up Wayland, cursor, and application environment variables
{ config, lib, pkgs, osConfig ? { }, ... }:
let
cfg = import ../config.nix;
hasNvidiaDrivers = builtins.elem "nvidia" (osConfig.services.xserver.videoDrivers or []);
nvidiaEnv = [
"NVD_BACKEND,direct"
"LIBVA_DRIVER_NAME,nvidia"
"__GLX_VENDOR_LIBRARY_NAME,nvidia"
];
in
{
wayland.windowManager.hyprland.settings = {
env = (lib.optionals hasNvidiaDrivers nvidiaEnv) ++ [
"GDK_SCALE,${toString cfg.scale}"
# Cursor size and theme
"XCURSOR_SIZE,24"
"HYPRCURSOR_SIZE,24"
"XCURSOR_THEME,Adwaita"
"HYPRCURSOR_THEME,Adwaita"
# Force Wayland for applications
"GDK_BACKEND,wayland"
"QT_QPA_PLATFORM,wayland"
"QT_STYLE_OVERRIDE,kvantum"
"SDL_VIDEODRIVER,wayland"
"MOZ_ENABLE_WAYLAND,1"
"ELECTRON_OZONE_PLATFORM_HINT,wayland"
"OZONE_PLATFORM,wayland"
# Chromium Wayland support
"CHROMIUM_FLAGS,\"--enable-features=UseOzonePlatform --ozone-platform=wayland --gtk-version=4\""
# Make .desktop files available for wofi
"XDG_DATA_DIRS,$XDG_DATA_DIRS:$HOME/.nix-profile/share:/nix/var/nix/profiles/default/share"
# XCompose support
"XCOMPOSEFILE,~/.XCompose"
"EDITOR,nvim"
# GTK dark theme
"GTK_THEME,Adwaita:dark"
];
xwayland = {
force_zero_scaling = true;
};
ecosystem = {
no_update_news = true;
};
};
}

View File

@@ -0,0 +1,19 @@
# ABOUTME: Hyprland input and gesture configuration
# ABOUTME: Keyboard layout, mouse settings, and touchpad behavior
{ config, lib, pkgs, ... }:
{
wayland.windowManager.hyprland.settings = {
input = lib.mkDefault {
kb_layout = "us";
kb_options = "caps:super,compose:ralt";
follow_mouse = 1;
sensitivity = 0;
touchpad = {
natural_scroll = false;
};
};
};
}

View File

@@ -0,0 +1,89 @@
# ABOUTME: Hyprland visual appearance configuration
# ABOUTME: Window gaps, borders, animations, and decorations with nix-colors theming
{ config, pkgs, ... }:
let
palette = config.colorScheme.palette;
hexToRgba = hex: alpha: "rgba(${hex}${alpha})";
inactiveBorder = hexToRgba palette.base09 "aa";
activeBorder = hexToRgba palette.base0D "aa";
in
{
wayland.windowManager.hyprland.settings = {
general = {
gaps_in = 5;
gaps_out = 10;
border_size = 2;
"col.active_border" = activeBorder;
"col.inactive_border" = inactiveBorder;
resize_on_border = false;
allow_tearing = false;
layout = "dwindle";
};
decoration = {
rounding = 4;
shadow = {
enabled = false;
range = 30;
render_power = 3;
ignore_window = true;
color = "rgba(00000045)";
};
blur = {
enabled = true;
size = 5;
passes = 2;
vibrancy = 0.1696;
};
};
animations = {
enabled = true;
bezier = [
"easeOutQuint,0.23,1,0.32,1"
"easeInOutCubic,0.65,0.05,0.36,1"
"linear,0,0,1,1"
"almostLinear,0.5,0.5,0.75,1.0"
"quick,0.15,0,0.1,1"
];
animation = [
"global, 1, 10, default"
"border, 1, 5.39, easeOutQuint"
"windows, 1, 4.79, easeOutQuint"
"windowsIn, 1, 4.1, easeOutQuint, popin 87%"
"windowsOut, 1, 1.49, linear, popin 87%"
"fadeIn, 1, 1.73, almostLinear"
"fadeOut, 1, 1.46, almostLinear"
"fade, 1, 3.03, quick"
"layers, 1, 3.81, easeOutQuint"
"layersIn, 1, 4, easeOutQuint, fade"
"layersOut, 1, 1.5, linear, fade"
"fadeLayersIn, 1, 1.79, almostLinear"
"fadeLayersOut, 1, 1.39, almostLinear"
"workspaces, 0, 0, ease"
];
};
dwindle = {
pseudotile = true;
preserve_split = true;
force_split = 2;
};
master = {
new_status = "master";
};
misc = {
disable_hyprland_logo = true;
disable_splash_rendering = true;
};
};
}

View File

@@ -0,0 +1,31 @@
# ABOUTME: Hyprland window rules configuration
# ABOUTME: Defines per-application window behavior and layer rules
{ config, pkgs, ... }:
{
wayland.windowManager.hyprland.settings = {
windowrule = [
"suppressevent maximize, class:.*"
"tile, class:^(chromium)$"
"float, class:^(org.pulseaudio.pavucontrol|blueberry.py)$"
"float, class:^(steam)$"
"fullscreen, class:^(com.libretro.RetroArch)$"
"opacity 0.97 0.9, class:.*"
"opacity 1 1, class:^(chromium|google-chrome|google-chrome-unstable)$, title:.*Youtube.*"
"opacity 1 0.97, class:^(chromium|google-chrome|google-chrome-unstable)$"
"opacity 0.97 0.9, initialClass:^(chrome-.*-Default)$"
"opacity 1 1, initialClass:^(chrome-youtube.*-Default)$"
"opacity 1 1, class:^(zoom|vlc|org.kde.kdenlive|com.obsproject.Studio)$"
"opacity 1 1, class:^(com.libretro.RetroArch|steam)$"
"nofocus,class:^$,title:^$,xwayland:1,floating:1,fullscreen:0,pinned:0"
"float, class:(clipse)"
"size 622 652, class:(clipse)"
"stayfocused, class:(clipse)"
];
layerrule = [
"blur,wofi"
"blur,waybar"
];
};
}

View File

@@ -0,0 +1,70 @@
# ABOUTME: Hyprlock screen locker configuration with nix-colors theming
# ABOUTME: Configures lock screen appearance with fingerprint support
{ config, pkgs, nix-colors, ... }:
let
cfg = import ./config.nix;
palette = config.colorScheme.palette;
convert = nix-colors.lib.conversions.hexToRGBString;
wallpaperPath = "~/Pictures/Wallpapers/${cfg.wallpaper}";
backgroundRgb = "rgba(${convert ", " palette.base00}, 0.8)";
surfaceRgb = "rgb(${convert ", " palette.base02})";
foregroundRgb = "rgb(${convert ", " palette.base05})";
foregroundMutedRgb = "rgb(${convert ", " palette.base04})";
in
{
programs.hyprlock = {
enable = true;
settings = {
general = {
disable_loading_bar = true;
no_fade_in = false;
};
auth = {
fingerprint.enabled = true;
};
background = {
monitor = "";
path = wallpaperPath;
};
input-field = {
monitor = "";
size = "600, 100";
position = "0, 0";
halign = "center";
valign = "center";
inner_color = surfaceRgb;
outer_color = foregroundRgb;
outline_thickness = 4;
font_family = cfg.monoFont;
font_size = 32;
font_color = foregroundRgb;
placeholder_color = foregroundMutedRgb;
placeholder_text = " Enter Password 󰈷 ";
check_color = "rgba(131, 192, 146, 1.0)";
fail_text = "Wrong";
rounding = 0;
shadow_passes = 0;
fade_on_empty = false;
};
label = {
monitor = "";
text = "$FPRINTPROMPT";
text_align = "center";
color = "rgb(211, 198, 170)";
font_size = 24;
font_family = cfg.monoFont;
position = "0, -100";
halign = "center";
valign = "center";
};
};
};
}


@@ -0,0 +1,23 @@
# ABOUTME: Hyprpaper wallpaper service configuration
# ABOUTME: Sets up wallpaper based on theme selection
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
wallpaperPath = "~/Pictures/Wallpapers/${cfg.wallpaper}";
in
{
# Copy wallpapers to Pictures directory
home.file."Pictures/Wallpapers" = {
source = ../../../common/desktop/assets/wallpapers;
recursive = true;
};
services.hyprpaper = {
enable = true;
settings = {
preload = [ wallpaperPath ];
wallpaper = [ ",${wallpaperPath}" ];
};
};
}


@@ -0,0 +1,41 @@
# ABOUTME: Mako notification daemon configuration with nix-colors theming
# ABOUTME: Configures notification appearance and behavior
{ config, pkgs, ... }:
let
palette = config.colorScheme.palette;
in
{
services.mako = {
enable = true;
settings = {
background-color = "#${palette.base00}";
text-color = "#${palette.base05}";
border-color = "#${palette.base04}";
progress-color = "#${palette.base0D}";
width = 420;
height = 110;
padding = "10";
margin = "10";
border-size = 2;
border-radius = 0;
anchor = "top-right";
layer = "overlay";
default-timeout = 5000;
ignore-timeout = false;
max-visible = 5;
sort = "-time";
group-by = "app-name";
actions = true;
format = "<b>%s</b>\\n%b";
markup = true;
};
};
}


@@ -0,0 +1,7 @@
# ABOUTME: Starship prompt configuration
# ABOUTME: Enables the cross-shell prompt with default settings
{ config, pkgs, ... }:
{
programs.starship.enable = true;
}
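
Starship is enabled with stock settings. If prompt tweaks are wanted later, home-manager exposes them under programs.starship.settings; a minimal sketch (the overrides below are hypothetical, not part of this commit):

programs.starship = {
  enable = true;
  settings = {
    # Hypothetical tweaks -- any key from the Starship docs is accepted and
    # ends up in ~/.config/starship.toml.
    add_newline = false;
    directory.truncation_length = 3;
  };
};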


@@ -0,0 +1,32 @@
# ABOUTME: Theme definitions mapping theme names to base16 and VSCode themes
# ABOUTME: Used by vscode and other apps that need theme name mapping
{
"tokyo-night" = {
base16Theme = "tokyo-night-dark";
vscodeTheme = "Tokyo Night";
};
"catppuccin-macchiato" = {
vscodeTheme = "Catppuccin Macchiato";
};
"kanagawa" = {
base16Theme = "kanagawa";
vscodeTheme = "Kanagawa";
};
"everforest" = {
base16Theme = "everforest";
vscodeTheme = "Everforest Dark";
};
"nord" = {
base16Theme = "nord";
vscodeTheme = "Nord";
};
"gruvbox" = {
base16Theme = "gruvbox-dark-hard";
vscodeTheme = "Gruvbox Dark Hard";
};
"gruvbox-light" = {
base16Theme = "gruvbox-light-medium";
vscodeTheme = "Gruvbox Light Medium";
};
}
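
These names are looked up per theme: vscode.nix (below) reads vscodeTheme, while the base16Theme half is meant for nix-colors consumers. Note the catppuccin-macchiato entry currently defines only a vscodeTheme. A minimal sketch of the nix-colors side, assuming cfg.theme comes from ./config.nix as in the other modules:

# Sketch only -- assumes nix-colors is passed in via extraSpecialArgs,
# the same way the hyprlock and waybar modules receive it.
{ nix-colors, ... }:
let
  cfg = import ./config.nix;
  themes = import ./themes.nix;
in
{
  colorScheme = nix-colors.colorSchemes.${themes.${cfg.theme}.base16Theme};
}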


@@ -0,0 +1,54 @@
# ABOUTME: VSCode configuration with theme extensions
# ABOUTME: Installs vim keybindings and color scheme extensions
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
themes = import ./themes.nix;
theme = themes.${cfg.theme};
in
{
programs.vscode = {
enable = true;
profiles.default = {
extensions =
with pkgs.vscode-extensions;
[
bbenoist.nix
vscodevim.vim
]
++ pkgs.vscode-utils.extensionsFromVscodeMarketplace [
{
name = "everforest";
publisher = "sainnhe";
version = "0.3.0";
sha256 = "sha256-nZirzVvM160ZTpBLTimL2X35sIGy5j2LQOok7a2Yc7U=";
}
{
name = "tokyo-night";
publisher = "enkia";
version = "1.1.2";
sha256 = "sha256-oW0bkLKimpcjzxTb/yjShagjyVTUFEg198oPbY5J2hM=";
}
{
name = "kanagawa";
publisher = "qufiwefefwoyn";
version = "1.5.1";
sha256 = "sha256-AGGioXcK/fjPaFaWk2jqLxovUNR59gwpotcSpGNbj1c=";
}
{
name = "nord-visual-studio-code";
publisher = "arcticicestudio";
version = "0.19.0";
sha256 = "sha256-awbqFv6YuYI0tzM/QbHRTUl4B2vNUdy52F4nPmv+dRU=";
}
{
name = "gruvbox";
publisher = "jdinhlife";
version = "1.28.0";
sha256 = "sha256-XwQzbbZU6MfYcT50/0YgQp8UaOeQskEvEQPZXG72lLk=";
}
];
};
};
}
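
The marketplace extensions are pinned by version and sha256. When bumping one, a common approach is to put a known-wrong hash in place and copy the real value from the resulting hash-mismatch error; a sketch (the version is hypothetical, and lib would need adding to the module arguments):

{
  name = "everforest";
  publisher = "sainnhe";
  version = "0.4.0";          # hypothetical newer release
  sha256 = lib.fakeSha256;    # build fails and prints the expected hash
}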


@@ -0,0 +1,182 @@
# ABOUTME: Waybar status bar configuration with nix-colors theming
# ABOUTME: Configures system tray, workspaces, and status indicators
{ config, pkgs, nix-colors, ... }:
let
palette = config.colorScheme.palette;
convert = nix-colors.lib.conversions.hexToRGBString;
backgroundRgb = "rgb(${convert ", " palette.base00})";
foregroundRgb = "rgb(${convert ", " palette.base05})";
in
{
home.file.".config/waybar/theme.css".text = ''
@define-color background ${backgroundRgb};
* {
color: ${foregroundRgb};
}
window#waybar {
background-color: ${backgroundRgb};
}
'';
home.file.".config/waybar/style.css".text = ''
@import "./theme.css";
* {
border: none;
border-radius: 0;
min-height: 0;
font-family: CaskaydiaMono Nerd Font;
font-size: 14px;
}
#workspaces {
margin-left: 7px;
}
#workspaces button {
all: initial;
padding: 2px 6px;
margin-right: 3px;
}
#custom-dropbox,
#cpu,
#power-profiles-daemon,
#battery,
#network,
#bluetooth,
#wireplumber,
#tray,
#clock {
background-color: transparent;
min-width: 12px;
margin-right: 13px;
}
tooltip {
padding: 2px;
}
tooltip label {
padding: 2px;
}
'';
programs.waybar = {
enable = true;
settings = [
{
layer = "top";
position = "top";
spacing = 0;
height = 26;
modules-left = [ "hyprland/workspaces" ];
modules-center = [ "clock" ];
modules-right = [
"tray"
"bluetooth"
"network"
"wireplumber"
"cpu"
"power-profiles-daemon"
"battery"
];
"hyprland/workspaces" = {
on-click = "activate";
format = "{icon}";
format-icons = {
default = "";
"1" = "1";
"2" = "2";
"3" = "3";
"4" = "4";
"5" = "5";
"6" = "6";
"7" = "7";
"8" = "8";
"9" = "9";
active = "󱓻";
};
persistent-workspaces = {
"1" = [ ];
"2" = [ ];
"3" = [ ];
"4" = [ ];
"5" = [ ];
};
};
cpu = {
interval = 5;
format = "󰍛";
on-click = "ghostty -e btop";
};
clock = {
format = "{:%A %I:%M %p}";
format-alt = "{:%d %B W%V %Y}";
tooltip = false;
};
network = {
format-icons = [ "󰤯" "󰤟" "󰤢" "󰤥" "󰤨" ];
format = "{icon}";
format-wifi = "{icon}";
format-ethernet = "󰀂";
format-disconnected = "󰖪";
tooltip-format-wifi = "{essid} ({frequency} GHz)\n{bandwidthDownBytes} {bandwidthUpBytes}";
tooltip-format-ethernet = "{bandwidthDownBytes} {bandwidthUpBytes}";
tooltip-format-disconnected = "Disconnected";
interval = 3;
nospacing = 1;
on-click = "ghostty -e nmcli";
};
battery = {
interval = 5;
format = "{capacity}% {icon}";
format-discharging = "{icon}";
format-charging = "{icon}";
format-plugged = "";
format-icons = {
charging = [ "󰢜" "󰂆" "󰂇" "󰂈" "󰢝" "󰂉" "󰢞" "󰂊" "󰂋" "󰂅" ];
default = [ "󰁺" "󰁻" "󰁼" "󰁽" "󰁾" "󰁿" "󰂀" "󰂁" "󰂂" "󰁹" ];
};
format-full = "Charged ";
tooltip-format-discharging = "{power:>1.0f}W {capacity}%";
tooltip-format-charging = "{power:>1.0f}W {capacity}%";
states = {
warning = 20;
critical = 10;
};
};
bluetooth = {
format = "󰂯";
format-disabled = "󰂲";
format-connected = "";
tooltip-format = "Devices connected: {num_connections}";
on-click = "blueberry";
};
wireplumber = {
format = "";
format-muted = "󰝟";
scroll-step = 5;
on-click = "pavucontrol";
tooltip-format = "Playing at {volume}%";
on-click-right = "wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle";
max-volume = 150;
};
tray = {
spacing = 13;
};
power-profiles-daemon = {
format = "{icon}";
tooltip-format = "Power profile: {profile}";
tooltip = true;
format-icons = {
power-saver = "󰡳";
balanced = "󰊚";
performance = "󰡴";
};
};
}
];
};
}
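
The stylesheet above already reserves a slot for #custom-dropbox, but no matching module is defined in the Waybar settings yet. If it gets wired up later, a custom module would look roughly like this (the exec command is a placeholder; it would also need adding to modules-right):

"custom/dropbox" = {
  exec = "dropbox status";   # placeholder -- whatever prints the sync state
  interval = 30;
  format = "{}";
};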


@@ -0,0 +1,102 @@
# ABOUTME: Wofi application launcher configuration with nix-colors theming
# ABOUTME: Configures the drun launcher appearance and behavior
{ config, pkgs, ... }:
let
cfg = import ./config.nix;
palette = config.colorScheme.palette;
in
{
home.file.".config/wofi/style.css".text = ''
* {
font-family: '${cfg.monoFont}', monospace;
font-size: 18px;
}
window {
margin: 0px;
padding: 20px;
background-color: #${palette.base00};
opacity: 0.95;
}
#inner-box {
margin: 0;
padding: 0;
border: none;
background-color: #${palette.base00};
}
#outer-box {
margin: 0;
padding: 20px;
border: none;
background-color: #${palette.base00};
}
#scroll {
margin: 0;
padding: 0;
border: none;
background-color: #${palette.base00};
}
#input {
margin: 0;
padding: 10px;
border: none;
background-color: #${palette.base00};
color: @text;
}
#input:focus {
outline: none;
box-shadow: none;
border: none;
}
#text {
margin: 5px;
border: none;
color: #${palette.base06};
}
#entry {
background-color: #${palette.base00};
}
#entry:selected {
outline: none;
border: none;
}
#entry:selected #text {
color: #${palette.base02};
}
#entry image {
-gtk-icon-transform: scale(0.7);
}
'';
programs.wofi = {
enable = true;
settings = {
width = 600;
height = 350;
location = "center";
show = "drun";
prompt = "Search...";
filter_rate = 100;
allow_markup = true;
no_actions = true;
halign = "fill";
orientation = "vertical";
content_halign = "fill";
insensitive = true;
allow_images = true;
image_size = 40;
gtk_dark = true;
};
};
}


@@ -0,0 +1,5 @@
{ pkgs, ... }:
{
# Minimal profile: reuses server.nix for basic CLI programs
imports = [ ./server.nix ];
}


@@ -1,5 +1,6 @@
{ pkgs, ... }:
{
programs = {
dircolors = {
enable = true;
extraConfig = ''
@@ -326,7 +327,6 @@
enable = true;
shellAbbrs = {
fix-ssh = "eval $(tmux show-env | grep ^SSH_AUTH_SOCK | sed 's/=/ /;s/^/set /')";
diff-persist = "sudo rsync -amvxx --dry-run --no-links --exclude '/tmp/*' --exclude '/root/*' / /persist/ | rg -v '^skipping|/$'";
};
@@ -347,8 +347,12 @@
git = {
enable = true;
userEmail = "petru@paler.net";
userName = "Petru Paler";
settings = {
user = {
email = "petru@paler.net";
name = "Petru Paler";
};
};
};
home-manager = {
@@ -397,6 +401,13 @@
setw -g automatic-rename on
set -g set-titles on
# first, unset update-environment[SSH_AUTH_SOCK] (idx 3), to prevent
# the client overriding the global value
set-option -g -u update-environment[3]
# And set the global value to our static symlink'd path:
set-environment -g SSH_AUTH_SOCK $HOME/.ssh/ssh_auth_sock
'';
};
};
}
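
The tmux snippet assumes a stable ~/.ssh/ssh_auth_sock symlink that always points at the live agent socket. Something has to maintain that link on each SSH login; a common companion is a ~/.ssh/rc hook, sketched here (hypothetical, not part of this diff):

home.file.".ssh/rc".text = ''
  # sshd runs this on every login; keep the static symlink pointing at the
  # forwarded agent socket so long-lived tmux sessions stay usable.
  if [ -n "$SSH_AUTH_SOCK" ]; then
    ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/ssh_auth_sock"
  fi
'';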


@@ -0,0 +1,6 @@
{ pkgs, ... }:
{
imports = [ ./server.nix ];
# Add workstation-specific programs here if needed in the future
}


@@ -1,8 +1,8 @@
{ pkgs, inputs, ... }:
{ pkgs, lib, inputs, ... }:
{
imports = [
../../common/global
../../common/cloud-node.nix
../../common/minimal-node.nix
./hardware.nix
./reverse-proxy.nix
];
@@ -12,4 +12,27 @@
networking.hostName = "alo-cloud-1";
services.tailscaleAutoconnect.authkey = "tskey-auth-kbdARC7CNTRL-pNQddmWV9q5C2sRV3WGep5ehjJ1qvcfD";
services.tailscale = {
enable = true;
useRoutingFeatures = lib.mkForce "server"; # enables IPv4/IPv6 forwarding + loose rp_filter
extraUpFlags = [ "--advertise-exit-node" ];
};
networking.nat = {
enable = true;
externalInterface = "enp1s0";
internalInterfaces = [ "tailscale0" ];
};
networking.firewall = {
enable = lib.mkForce true;
allowedTCPPorts = [ 80 443 ]; # Public web traffic only
allowedUDPPorts = [ 41641 ]; # Tailscale
trustedInterfaces = [ "tailscale0" ]; # Full access via VPN
};
services.openssh = {
settings.PasswordAuthentication = false; # Keys only
};
}
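
--advertise-exit-node plus the NAT rules above make this host usable as a tailnet exit node (it still has to be approved in the Tailscale admin console). Using it is opt-in per device; a sketch for another host in the tailnet (hypothetical, not part of this diff):

services.tailscale = {
  enable = true;
  extraUpFlags = [
    "--exit-node=alo-cloud-1"          # route all traffic via this host
    "--exit-node-allow-lan-access"     # keep talking to the local LAN
  ];
};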


@@ -1,7 +1,7 @@
{ pkgs, ... }:
{ pkgs, config, ... }:
{
environment.systemPackages = [ pkgs.traefik ];
environment.persistence."/persist".files = [ "/acme/acme.json" ];
environment.persistence.${config.custom.impermanence.persistPath}.files = [ "/acme/acme.json" ];
services.traefik = {
enable = true;
@@ -73,7 +73,7 @@
wordpress-paler-net = {
entryPoints = "websecure";
rule = "Host(`wordpress.paler.net`)";
service = "alo-cluster";
service = "varnish-cache";
};
ines-paler-net = {
@@ -117,6 +117,12 @@
rule = "Host(`musictogethersilvercoast.pt`)";
service = "varnish-cache";
};
alo-land = {
entryPoints = "websecure";
rule = "Host(`alo.land`)";
service = "varnish-cache";
};
};
};
};
@@ -135,6 +141,15 @@
.host = "100.64.229.126";
.port = "10080";
}
sub vcl_backend_response {
# default TTL if backend didn't specify one
if (beresp.ttl <= 0s) {
set beresp.ttl = 1h;
}
# serve stale content in case home link is down
set beresp.grace = 240h;
}
'';
};
}
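
The new alo.land router (and the wordpress.paler.net change) point at a Traefik service named varnish-cache, so those sites are now served through the Varnish layer, and the added vcl_backend_response gives cached objects a 1h default TTL with 240h of grace so stale pages keep being served if the home uplink drops. For reference, such a Traefik service definition would look roughly like this (URL and port are placeholders; the real definition lives elsewhere in this file):

services.traefik.dynamicConfigOptions.http.services.varnish-cache = {
  loadBalancer.servers = [ { url = "http://127.0.0.1:6081"; } ];
};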

hosts/beefy/default.nix (new file, 75 lines)

@@ -0,0 +1,75 @@
{ pkgs, inputs, config, ... }:
{
imports = [
../../common/encrypted-btrfs-layout.nix
../../common/global
# Desktop environment is imported via flake.nix for desktop profile
../../common/cluster-member.nix # Consul + storage clients
../../common/cluster-tools.nix # Nomad CLI (no service)
./hardware.nix
];
diskLayout = {
mainDiskDevice = "/dev/disk/by-id/nvme-CT1000P3PSSD8_25164F81F31D";
#keyDiskDevice = "/dev/disk/by-id/usb-Intenso_Micro_Line_22080777650797-0:0";
keyDiskDevice = "/dev/sda";
};
networking.hostName = "beefy";
networking.cluster.primaryInterface = "enp1s0";
services.tailscaleAutoconnect.authkey = "tskey-auth-k79UsDTw2v11CNTRL-oYqji35BE9c7CqM89Dzs9cBF14PmqYsi";
# Console blanking after 5 minutes (for greeter display sleep)
# NMI watchdog for hardlockup detection
boot.kernelParams = [ "consoleblank=300" "nmi_watchdog=1" ];
# Netconsole - stream kernel messages to zippy (192.168.1.2)
# Must configure via configfs after network is up (interface doesn't exist at module load)
boot.kernelModules = [ "netconsole" ];
boot.kernel.sysctl."kernel.printk" = "8 4 1 7"; # Raise console_loglevel to send all messages
systemd.services.netconsole-sender = {
description = "Configure netconsole to send kernel messages to zippy";
wantedBy = [ "multi-user.target" ];
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
TARGET=/sys/kernel/config/netconsole/target1
mkdir -p $TARGET
# Disable first if already enabled (can't modify params while enabled)
if [ -f $TARGET/enabled ] && [ "$(cat $TARGET/enabled)" = "1" ]; then
echo 0 > $TARGET/enabled
fi
echo enp1s0 > $TARGET/dev_name
echo 192.168.1.2 > $TARGET/remote_ip
echo 6666 > $TARGET/remote_port
echo c0:3f:d5:62:55:bb > $TARGET/remote_mac
echo 1 > $TARGET/enabled
'';
};
# Kdump for kernel crash analysis
boot.crashDump = {
enable = true;
reservedMemory = "256M";
};
# Lockup detectors - panic on detection so kdump captures state
boot.kernel.sysctl = {
# Enable all SysRq functions for debugging hangs
"kernel.sysrq" = 1;
# Panic on soft lockup (CPU not scheduling for >20s)
"kernel.softlockup_panic" = 1;
# Panic on hung tasks (blocked >120s)
"kernel.hung_task_panic" = 1;
"kernel.hung_task_timeout_secs" = 120;
};
# Persist crash dumps
environment.persistence.${config.custom.impermanence.persistPath}.directories = [
"/var/crash"
];
}
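
netconsole only fires UDP datagrams at 192.168.1.2:6666; something on zippy has to capture them or the kernel messages vanish. A minimal receiver sketch for zippy's config (hypothetical, not part of this diff):

networking.firewall.allowedUDPPorts = [ 6666 ];
systemd.services.netconsole-receiver = {
  description = "Capture kernel messages streamed from beefy";
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    # socat copies everything arriving on UDP 6666 to stdout -> journal
    ExecStart = "${pkgs.socat}/bin/socat -u UDP-RECV:6666 STDOUT";
  };
};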

hosts/beefy/hardware.nix (new file, 19 lines)

@@ -0,0 +1,19 @@
{
config,
lib,
pkgs,
modulesPath,
...
}:
{
imports = [ (modulesPath + "/installer/scan/not-detected.nix") ];
boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "usbhid" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
nixpkgs.hostPlatform = "x86_64-linux";
hardware.cpu.amd.updateMicrocode = true; # AMD CPU, matches kvm-amd above
}

hosts/beefy/key.bin (new binary file, not shown)


@@ -3,16 +3,28 @@
imports = [
../../common/encrypted-btrfs-layout.nix
../../common/global
../../common/compute-node.nix
../../common/cluster-member.nix # Consul + storage clients
../../common/nomad-worker.nix # Nomad client (runs jobs)
../../common/nomad-server.nix # Consul + Nomad server mode
../../common/nfs-services-standby.nix # NFS standby for /data/services
# To promote to NFS server (during failover):
# 1. Follow procedure in docs/NFS_FAILOVER.md
# 2. Replace above line with: ../../common/nfs-services-server.nix
# 3. Add nfsServicesServer.standbys = [ "c2" ]; (or leave empty)
./hardware.nix
];
diskLayout = {
mainDiskDevice = "/dev/disk/by-id/nvme-SAMSUNG_MZVLW256HEHP-000H1_S340NX0K910298";
mainDiskDevice = "/dev/disk/by-id/nvme-KINGSTON_SNV3S1000G_50026B7383365CD3";
#keyDiskDevice = "/dev/disk/by-id/usb-Intenso_Micro_Line_22080777640496-0:0";
keyDiskDevice = "/dev/sda";
};
networking.hostName = "c1";
services.tailscaleAutoconnect.authkey = "tskey-auth-kmFvBT3CNTRL-wUbELKSd5yhuuTwTcgJZxhPUTxKgcYKF";
services.tailscaleAutoconnect.authkey = "tskey-auth-k2nQ771YHM11CNTRL-YVpoumL2mgR6nLPG51vNhRpEKMDN7gLAi";
nfsServicesStandby.replicationKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHyTKsMCbwCIlMcC/aopgz5Yfx/Q9QdlWC9jzMLgYFAV root@zippy-replication"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5s73FSUiysHijWRGYCJY8lCtZkX1DGKAqp2671REDq root@sparky-replication"
];
}
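
The failover comment boils down to a two-line module swap. After promoting c1 per docs/NFS_FAILOVER.md, the relevant part of this file would read roughly (options come from this repo's common/ modules):

imports = [
  # ...unchanged entries...
  ../../common/nfs-services-server.nix   # was nfs-services-standby.nix
  ./hardware.nix
];
nfsServicesServer.standbys = [ "c2" ];   # or [ ] if no standby remains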


@@ -3,16 +3,18 @@
imports = [
../../common/encrypted-btrfs-layout.nix
../../common/global
../../common/compute-node.nix
../../common/cluster-member.nix # Consul + storage clients
../../common/nomad-worker.nix # Nomad client (runs jobs)
../../common/nomad-server.nix # Consul + Nomad server mode
./hardware.nix
];
diskLayout = {
mainDiskDevice = "/dev/disk/by-id/nvme-SAMSUNG_MZVLB256HAHQ-000H1_S425NA1M132963";
mainDiskDevice = "/dev/disk/by-id/nvme-KINGSTON_SNV3S1000G_50026B73841C1892";
#keyDiskDevice = "/dev/disk/by-id/usb-Intenso_Micro_Line_22080777650675-0:0";
keyDiskDevice = "/dev/sda";
};
networking.hostName = "c2";
services.tailscaleAutoconnect.authkey = "tskey-auth-kbYnZK2CNTRL-SpUVCuzS6P3ApJiDaB6RM3M4b8M9TXgS";
services.tailscaleAutoconnect.authkey = "tskey-auth-kQ11fTmrzd11CNTRL-N4c2L3SAzUbvcAVhqCFWUbAEasJNTknd";
}


@@ -3,7 +3,10 @@
imports = [
../../common/encrypted-btrfs-layout.nix
../../common/global
../../common/compute-node.nix
../../common/cluster-member.nix # Consul + storage clients
../../common/nomad-worker.nix # Nomad client (runs jobs)
../../common/nomad-server.nix # Consul + Nomad server mode
../../common/binary-cache-server.nix
./hardware.nix
];


@@ -8,7 +8,9 @@
imports = [
../../common/encrypted-btrfs-layout.nix
../../common/global
../../common/base-node.nix
../../common/workstation-node.nix # Dev tools (deploy-rs, docker, nix-ld)
../../common/cluster-member.nix # Consul + storage clients
../../common/cluster-tools.nix # Nomad CLI (no service)
./hardware.nix
];
@@ -19,40 +21,61 @@
};
networking.hostName = "chilly";
networking.cluster.primaryInterface = "br0";
services.tailscaleAutoconnect.authkey = "tskey-auth-kRXS9oPyPm11CNTRL-BE6YnbP9J6ZZuV9dHkX17ZMnm1JGdu93";
services.consul.interface.advertise = lib.mkForce "br0";
networking.useNetworkd = true;
systemd.network.enable = true;
# LLMNR is not useful here and is a potential security loophole
services.resolved.llmnr = "false";
systemd.network.netdevs."10-br0" = {
netdevConfig = {
Name = "br0";
Kind = "bridge";
# when switching to DHCP, fill this in with value from enp1s0 or something made up starting with 02:
# MACAddress = "";
};
};
systemd.network.networks."20-enp1s0" = {
matchConfig.Name = "enp1s0";
networkConfig.Bridge = "br0";
};
systemd.network.networks."30-br0" = {
matchConfig.Name = "br0";
networkConfig = {
# TODO: use DHCP. Would need a hardcoded MAC (see above)
Address = [ "192.168.1.5/24" ];
Gateway = [ "192.168.1.1" ];
DNS = [ "192.168.1.1" ];
# DHCP = "yes";
};
};
virtualisation.libvirtd = {
enable = true;
allowedBridges = [ "br0" ];
};
systemd.services.hassos = {
description = "Home Assistant OS VM";
wantedBy = [ "multi-user.target" ];
script = ''
${pkgs.qemu}/bin/qemu-system-x86_64 \
-bios ${pkgs.OVMF.fd}/FV/OVMF.fd \
-name 'hassos' \
-enable-kvm -cpu host -m 16384 -smp 4 \
-drive 'if=virtio,file=/persist/hassos/disk-drive-sata0.raw,format=raw' \
-nic 'bridge,br=br0,mac=1E:DD:78:D5:78:9A' \
-device qemu-xhci,id=xhci \
-device usb-host,bus=xhci.0,vendorid=0x0658,productid=0x0200 \
-device usb-host,bus=xhci.0,vendorid=0x10c4,productid=0xea60 \
-nographic \
-serial telnet:localhost:4321,server=on,wait=off \
-monitor telnet:localhost:4322,server=on,wait=off
'';
preStop = ''
echo 'system_powerdown' | ${pkgs.netcat-gnu}/bin/nc localhost 4322
sleep 10
'';
};
environment.systemPackages = with pkgs; [
unstable.qemu
qemu
inetutils # for telnet to qemu
usbutils
virt-manager
(pkgs.writeShellScriptBin "qemu-system-x86_64-uefi" ''
qemu-system-x86_64 \
-bios ${pkgs.OVMF.fd}/FV/OVMF.fd \
"$@"
'')
];
users.users.ppetru.extraGroups = [ "libvirtd" ];
networking = {
# TODO: try using DHCP for br0. will probably need a hardcoded MAC
defaultGateway = "192.168.1.1";
nameservers = [ "192.168.1.1" ];
bridges.br0.interfaces = [ "enp1s0" ];
interfaces.br0 = {
useDHCP = false;
ipv4.addresses = [
{
"address" = "192.168.1.5";
"prefixLength" = 24;
}
];
};
};
}
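
The VM's serial console and QEMU monitor are reachable on local telnet ports 4321 and 4322 (see the -serial/-monitor flags above); preStop already uses the monitor port for a clean shutdown. A small convenience wrapper could sit next to the existing qemu-system-x86_64-uefi one (hypothetical, not in this commit):

(pkgs.writeShellScriptBin "hassos-console" ''
  # Attach to the Home Assistant OS serial console; detach with Ctrl-] then "quit".
  exec ${pkgs.inetutils}/bin/telnet localhost 4321
'')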

Some files were not shown because too many files have changed in this diff.