OP asked for 2TB and that’s easily done with an SSD these days
No, but Vaultwarden is the one thing I don’t even try to connect to authentik, so a breach of the authentik password won’t give away everything else
I’ve had similar problems with some other Fedi service once and it was indeed a permission problem. Good luck!
I used to host kanboard for a while, maybe I should set it up again for my homelab
Interesting, I think I should do the same for the services that are only used by people really close to me.
I’ll just start! Personally, I’m tinkering with my local network to create a subnet for my homelab.
I want to set up Lemmy and Audiobookshelf next, but I want to tweak the infrastructure a bit before hosting more stuff.
Before the firewall thing, I set up authentik and am integrating it into more services. Migration was mostly straightforward so far in Bookstack and Paperless. The proxy authentication is pretty cool too; finally being able to ditch basic auth in Prometheus was a nice win.
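In case anyone wants to try the proxy auth route: one way is to run authentik’s proxy outpost container in front of the service and point the proxy provider at it. A minimal compose sketch, assuming a plain Prometheus container (host, token and names are placeholders):

```yaml
services:
  prometheus:
    image: prom/prometheus
    # no published ports; only reachable through the outpost

  authentik-proxy:
    image: ghcr.io/goauthentik/proxy
    ports:
      - "9000:9000"   # your reverse proxy forwards the subdomain here
    environment:
      AUTHENTIK_HOST: https://authentik.your.site        # placeholder
      AUTHENTIK_INSECURE: "false"
      AUTHENTIK_TOKEN: "<outpost token from authentik>"  # placeholder
```

In authentik, the proxy provider’s internal/upstream host then points at http://prometheus:9090. The embedded outpost works too if you’d rather not run the extra container.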
I choose depending on whether I’ll ever have to touch the files in the volume (e.g. for configuration); for debugging I just spawn a shell in the container instead. If I don’t need to touch them, I don’t want to see them in the config folder where the compose file lives. I usually check my compose folders into git, and this way I don’t have to add the volumes to the gitignore.
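Concretely, a compose file then ends up looking something like this (image and paths are just examples):

```yaml
services:
  app:
    image: example/app            # placeholder image
    volumes:
      # config I actually edit: bind mount next to the compose file, tracked in git
      - ./config:/etc/app
      # data I never touch by hand: named volume, stays out of the repo
      - app-data:/var/lib/app

volumes:
  app-data:
```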
Recently set up cwa, mostly to have an easier way to get my books onto my e-reader since KOReader supports OPDS. It’s been super easy so far and has a great interface; I like it way better than the Calibre desktop app.
Same, always checking if I missed something on my own stuff :)
If you’re using Prometheus, Blackbox exporter checks cert expiration as well
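For example, an alert rule on the blackbox metrics can look roughly like this (the 14-day threshold is just my preference):

```yaml
groups:
  - name: tls
    rules:
      - alert: CertExpiresSoon
        # probe_ssl_earliest_cert_expiry is exported by the blackbox exporter's https probes
        expr: probe_ssl_earliest_cert_expiry - time() < 14 * 86400
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "TLS cert for {{ $labels.instance }} expires in less than 14 days"
```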
Have you tried to automate it?
Probably because of the Circle A in the thumbnail
> you will have a much easier time setting up database and networking, running backups, porting your infrastructure to other providers, and maintaining everything, than with legacy control panels or docker compose.
I really don’t see this. Database? Same, but it needs a service. Networking? Services and namespaces instead of docker networks. Backups? Basically the same as with Docker, but k8s has CronJobs so you can keep them in the same place as the rest of your stuff, which is a fair point (sketch below). Porting infrastructure? Copying the compose file, env files and volumes vs. copying all resources and PVs.
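To be fair, such a backup CronJob is pretty compact, roughly like this (all names are made up; it assumes a postgres service called db, a credentials secret and an existing backup PVC):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 3 * * *"                  # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              # overwrite a single dump file; date-stamping left out for brevity
              command: ["sh", "-c", "pg_dump -h db -U app app > /backup/app.sql"]
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: db-credentials    # hypothetical secret
                      key: password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: backup-pvc         # hypothetical PVC
```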
I am absolutely not against self-hosting on k8s, and if OP already had k8s running, I’d recommend it too. But I don’t see the benefits for the scenario OP described.
You might be right that the better/more accessible docker docs everywhere are the main reason it’s so popular, but it’s also usually just one file that describes everything AND it’s usually the officially supported install method for many projects, whereas Helm charts are often third-party and lack configurability.
CNPG is cool, but then OP also needs to learn about operators and custom resources :) More efficient? Yes. More complex? Also yes.
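Just to illustrate: the minimal CNPG setup is a custom resource along these lines (name and size made up), which is great once you know what an operator and a CRD are, but it’s another concept to pick up first:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db
spec:
  instances: 1        # a single instance is plenty for a homelab
  storage:
    size: 5Gi
```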
> The biggest challenge for kubernetes is probably that the smaller applications don’t come with example configs for Kubernetes. I only see mastodon having one officially. Still, I’ve provided my config for Lemmy, and there are docker containers available for Friendica and mbin (though docker isn’t officially supported for these two). I’m happy to help give yaml examples for the installation of the applications.
As said above, I agree it’s one challenge, but the added complexity is not to be underestimated.
Completely off topic: your post did make me think about running my own cluster again, though. I work with k8s at my devops day job too, but with a cloud provider it’s not the same as running your own, of course. I’ve also been thinking about tinkering with old smartphones in that potential cluster…
Don’t you think recommending k8s to someone who just wants to run some services on the same machine, which partly don’t even have k8s support or Helm charts, is a bit too much? Compared to docker compose or whatever OP is using, it’s way more complex if you’re not already familiar with Kubernetes resources.
Admittedly I don’t know much about k3s in particular, but I wouldn’t recommend k8s for this unless OP just wants to use it as a lab.
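Just to make the “way more complex” point concrete: a single app that is a five-line service in compose becomes at least a Deployment, a Service and an Ingress, roughly like this (all names, the image and the host are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: someapp
  template:
    metadata:
      labels:
        app: someapp
    spec:
      containers:
        - name: someapp
          image: example/someapp:latest   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: someapp
spec:
  selector:
    app: someapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someapp
spec:
  rules:
    - host: someapp.your.site             # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: someapp
                port:
                  number: 80
```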
You need different subdomains, as you suggested in your first paragraph, plus a reverse proxy like nginx or Caddy on the machine, which then proxies the different subdomains to the respective services (e.g. lemmy.your.site to localhost:2222, mbin.your.site to localhost:3333).
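To make the mapping concrete, here is the same idea sketched as a Traefik file-provider config instead of nginx/Caddy (hostnames and ports are just the placeholders from above; TLS and entrypoints are left out):

```yaml
http:
  routers:
    lemmy:
      rule: "Host(`lemmy.your.site`)"
      service: lemmy
    mbin:
      rule: "Host(`mbin.your.site`)"
      service: mbin
  services:
    lemmy:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:2222"
    mbin:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:3333"
```

The nginx or Caddy version is just as short: one server block/site per subdomain with a proxy_pass or reverse_proxy to the local port.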
Theoretically, you could put a landing page behind some SSO/IAM like authentik and then link to the subdomains from there, but eventually users will need to be on the subdomain to use a specific site.
Is your current setup up to date?
Yeah, I feel like exposing ports 80 and 443 to an up-to-date nginx (or whatever) gets treated as a super dangerous thing in this community and on the selfhosted subreddit. Recommending Cloudflare is almost the default, which I find a bit sad given that many people self-host to escape the reliance on big monopolist companies.
One can add different layers of security of course, but having nginx with monitoring in its own VM, without keys to jump to another VM, is enough risk mitigation for me.
I think the best thing about Reddit is that it has so many genuinely active niche subreddits. Many people who say Lemmy doesn’t need to grow don’t seem to care much about that, which surprises me a bit.
OP mentioned Pixelfed for several people though; is it possible to reverse proxy through Tailscale from a VPS or similar? It’s probably not practical to have a service for several people behind a VPN.
I don’t?