README.md: Update to new and improved flake mechanisms

pull/66/head
Tristan Daniël Maat 2022-10-17 14:29:56 +01:00
parent 59a44261b8
commit bec05bafb1
Signed by: tlater
GPG Key ID: 49670FD774E43268
1 changed file with 8 additions and 26 deletions


````diff
@@ -4,21 +4,16 @@ This is the NixOS configuration for [tlater.net](https://tlater.net/).
 ## Testing
 
-### Building
-
-Build the VM with:
+Run a test VM with:
 
 ```
-nixos-rebuild build-vm --flake '.#vm'
+nix run
 ```
 
 ### Running
 
 *Note: M-2 will bring up a console for poweroff and such*
 
-Running should *mostly* be as simple as running the command the build
-script echos.
-
 One caveat: create a larger disk image first. This can be done by
 running the following in the repository root:
````

````diff
@@ -26,31 +21,18 @@ running the following in the repository root:
 qemu-img create -f qcow2 ./tlaternet.qcow2 20G
 ```
 
+Everything else should be handled by the devShell.
+
 ### New services
 
-Whenever a new service is added, append an appropriate
-`,hostfwd=::3<port>:<port>` to the `QEMU_NET_OPTS` specified in
-`flake.nix` to bind the service to a host port.
+Whenever a new service is added, add an appropriate port binding to
+`qemuNetOpts` in the default app.
 
 There is no way to test this without binding to the host port, sadly.
 
 ## Deploying
 
-Currently the deployment process is fully manual because there is no
-CI system.
-
-Nix makes this fairly painless, though, it's simply:
+Deployment is handled using
+[deploy-rs](https://github.com/serokell/deploy-rs):
 
-```bash
-nixos-rebuild switch --use-remote-sudo --target-host tlater.net --build-host localhost --flake .#tlaternet
+```
+deploy .#tlaternet
 ```
-
-This has the added benefit of running the build on the dev machine,
-which is 99% of the time much faster at building than the target
-(though artifact upload may take some time on slow connections).
-
-Note that this also requires the current local user to also be present
-on the target host, as well as for this user to be in the target
-host's wheel group. See `nix.trustedUsers`.
````