README.md: Update to new and improved flake mechanisms #66

Manually merged
tlater merged 1 commit from tlater/readme into master 2022-10-17 14:31:19 +01:00
Showing only changes of commit bec05bafb1

@@ -4,21 +4,16 @@ This is the NixOS configuration for [tlater.net](https://tlater.net/).
 
 ## Testing
 
-### Building
-
-Build the VM with:
+Run a test VM with:
 
 ```
-nixos-rebuild build-vm --flake '.#vm'
+nix run
 ```
 
 ### Running
 
 *Note: M-2 will bring up a console for poweroff and such*
 
-Running should *mostly* be as simple as running the command the build
-script echos.
-
 One caveat: create a larger disk image first. This can be done by
 running the following in the repository root:
 
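
Not part of the diff itself: the `nix run` entry point above is presumably exposed as the flake's default app. Below is a minimal sketch of what such an app could look like, assuming a `run-vm` wrapper script and a `qemuNetOpts` list of host-port forwards; all names, ports, and paths are illustrative rather than taken from the repository's actual `flake.nix`.

```nix
# Illustrative only -- not the repository's actual flake.nix.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable"; # assumed branch

  outputs = { self, nixpkgs, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};

      # Hypothetical list of QEMU host-port forwards for the test VM;
      # "adding a port binding" for a new service means appending an entry here.
      qemuNetOpts = [
        "hostfwd=tcp::3080-:80"
        "hostfwd=tcp::3443-:443"
      ];

      # NixOS VM runner scripts honour $QEMU_NET_OPTS, so the wrapper only
      # needs to set it before starting the VM built from the `vm` config.
      run-vm = pkgs.writeShellScript "run-vm" ''
        export QEMU_NET_OPTS="${pkgs.lib.concatStringsSep "," qemuNetOpts}"
        exec ${self.nixosConfigurations.vm.config.system.build.vm}/bin/run-*-vm "$@"
      '';
    in {
      nixosConfigurations.vm = nixpkgs.lib.nixosSystem {
        inherit system;
        modules = [ ./configuration.nix ]; # hypothetical module path
      };

      # `nix run` resolves to this default app.
      apps.${system}.default = {
        type = "app";
        program = "${run-vm}";
      };
    };
}
```
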
@@ -26,31 +21,18 @@ running the following in the repository root:
 qemu-img create -f qcow2 ./tlaternet.qcow2 20G
 ```
 
+Everything else should be handled by the devShell.
+
 ### New services
 
-Whenever a new service is added, append an appropriate
-`,hostfwd=::3<port>:<port>` to the `QEMU_NET_OPTS` specified in
-`flake.nix` to bind the service to a host port.
+Whenever a new service is added, add an appropriate port binding to
+`qemuNetOpts` in the default app.
 
 There is no way to test this without binding to the host port, sadly.
 
 ## Deploying
 
-Currently the deployment process is fully manual because there is no
-CI system.
-
-Nix makes this fairly painless, though, it's simply:
+Deployment is handled using
+[deploy-rs](https://github.com/serokell/deploy-rs):
 
-```bash
-nixos-rebuild switch --use-remote-sudo --target-host tlater.net --build-host localhost --flake .#tlaternet
 ```
-
-This has the added benefit of running the build on the dev machine,
-which is 99% of the time much faster at building than the target
-(though artifact upload may take some time on slow connections).
-
-Note that this also requires the current local user to also be present
-on the target host, as well as for this user to be in the target
-host's wheel group. See `nix.trustedUsers`.
+deploy .#tlaternet
+```
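
Also as an aside rather than part of the change: the `deploy .#tlaternet` command implies a deploy-rs node declared in `flake.nix`. The sketch below follows the shape documented in the deploy-rs README; the hostname, SSH user, and module path here are assumptions for illustration.

```nix
# Illustrative deploy-rs wiring; the real flake.nix may differ in names and options.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable"; # assumed branch
    deploy-rs.url = "github:serokell/deploy-rs";
  };

  outputs = { self, nixpkgs, deploy-rs, ... }: {
    nixosConfigurations.tlaternet = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ]; # hypothetical module path
    };

    deploy.nodes.tlaternet = {
      hostname = "tlater.net"; # target host, as in the old nixos-rebuild command
      sshUser = "tlater";      # assumed deploy user
      profiles.system = {
        user = "root";
        path = deploy-rs.lib.x86_64-linux.activate.nixos
          self.nixosConfigurations.tlaternet;
      };
    };

    # Suggested by deploy-rs: let `nix flake check` validate the deploy spec.
    checks = builtins.mapAttrs
      (system: deployLib: deployLib.deployChecks self.deploy)
      deploy-rs.lib;
  };
}
```

Like the removed `nixos-rebuild ... --build-host localhost` invocation, deploy-rs builds the closure locally and pushes it to the target over SSH, so the local-build benefit described in the old README still applies.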