
# tlater.net server configuration

This is the NixOS configuration for tlater.net.

## Testing

### Building

Build the VM with:

```sh
nixos-rebuild build-vm --flake '.#vm'
```

### Running

Note: `M-2` will bring up a console for `poweroff` and such.

Running should mostly be as simple as executing the command that the build script echoes.

One caveat: create a larger disk image first. This can be done by running the following in the repository root:

```sh
qemu-img create -f qcow2 ./tlaternet.qcow2 20G
```

Everything else should be handled by the devShell.
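Concretely, running the VM usually amounts to invoking the launcher script that `nixos-rebuild build-vm` drops into the `result` symlink. The exact script name is derived from the configuration's host name, so the glob below is a sketch rather than the literal name:

```shell
# Run the VM launcher produced by the build; the script name depends on
# the configuration's hostname, hence the glob rather than a fixed name
./result/bin/run-*-vm
```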

### New services

Whenever a new service is added, append an appropriate `,hostfwd=::3<port>-:<port>` to the `QEMU_NET_OPTS` specified in `flake.nix` to bind the service to a host port.
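For example, adding a hypothetical service on guest port 8080 would mean appending a forward to host port 38080, following the `3<port>` convention. In `flake.nix` the variable is set as a Nix string; sketched here as a shell export, with the pre-existing SSH forward being an assumption about the current configuration:

```shell
# Hypothetical QEMU_NET_OPTS: the existing SSH forward (guest 22 -> host
# 3022) is illustrative; the appended entry forwards the new service on
# guest port 8080 to host port 38080
export QEMU_NET_OPTS="hostfwd=::3022-:22,hostfwd=::38080-:8080"
```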

There is no way to test this without binding to the host port, sadly.

## Deploying

Currently the deployment process is fully manual because there is no CI system.

Nix makes this fairly painless, though; it's simply:

```sh
nixos-rebuild switch --use-remote-sudo --target-host tlater.net --build-host localhost --flake .#tlaternet
```

This has the added benefit of running the build on the dev machine, which is almost always much faster at building than the target (though uploading the artifacts may take some time on slow connections).

Note that this also requires the current local user to exist on the target host, and to be in the target host's `wheel` group. See `nix.trustedUsers`.
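A minimal sketch of the relevant setting on the target host, assuming the deploying user is called `tlater` (the username is an assumption; on newer NixOS releases the option is spelled `nix.settings.trusted-users`):

```nix
# Allow the deploying user to copy store paths to this machine;
# the username here is hypothetical
nix.trustedUsers = [ "root" "tlater" ];
```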