I assume you are referring to Filesystem Snapshotting? For what reason do you want to do that on the client and not on the FS host?
sshfs is somewhat unmaintained; only “high-impact issues” are being addressed: https://github.com/libfuse/sshfs
I would go for NFS.
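A minimal sketch of the NFS route, assuming a hypothetical export path /srv/share, LAN subnet, and host name fshost (adjust to your environment):

```
# On the FS host: export the share (line in /etc/exports), then reload the export table
# /srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra

# On the client: mount the export
sudo mount -t nfs fshost:/srv/share /mnt/share
```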
Home Assistant has had two security audits, so I would not worry too much. It always depends on what you can control with it. https://www.home-assistant.io/blog/2023/10/19/security-audits-of-home-assistant/
Lennart Poettering intends to replace “sudo” with #systemd’s run0. Here’s a quick PoC to demonstrate root permission hijacking by exploiting the fact “systemd-run” (the basis of uid0/run0, the sudo replacer) creates a user owned pty for communication with the new “root” process.
To my understanding, that actually solves issues. A lot of people already prefer other tools like doas, since sudo is basically “too big” for what it does.
More code means more potential bugs. run0 has, to my knowledge, significantly less code, plus the benefit of not relying on SUID.
In the end, you do you. The big distros will adopt what is good for them and good to maintain. You do not have to use it.
Just subscribe to the release channel. How that works varies from OS to OS and from software to software, but it is worth it.
Use tools that are universal. For example, I have not used TrueNAS Scale because they did not support native Docker at the time. OS-specific solutions are more likely to break than universal ones (TrueCharts vs. Docker).
To get up and running again after a complete failure, I can just download the latest config and data from my backup, set up any distro that supports Docker, and my system is running again.
I do OS upgrades when they are available, usually within 1 or 2 days, and containers are updated with Watchtower daily.
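As a rough sketch of the Watchtower part (the container name and the daily 86400-second interval are just examples, adjust to taste):

```
# Watchtower needs the Docker socket to pull new images and restart containers;
# --interval 86400 checks for updates once a day
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```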
The main difference, I would say, is the development and licensing model. PhotoPrism forces people who want to contribute to sign a CLA and give away their rights. Also, the community is not really active; it is mainly one dev, who can change the code license at any given time.
Immich does not have such an agreement and has a huge, active contributor community around it. Also, Immich is backed by FUTO, which has its pros and cons.
IMHO the biggest pain in self-hosting is when a FOSS product turns evil towards its community and starts business practices hostile to consumers and free self-hosters.
Immich is far less likely to turn evil.
Edit: I think it is the biggest pain because you have to migrate every device and person to the new service.
How are those devices affected by not getting notifications anymore? The manual labor exists anyway.
Most network switches and devices have a web GUI to switch them out. Those can be automated.
Then swap your nameservers to a DNS provider that allows that?
So I’d like to split my passwords file into multiple “files”, where the unimportant logins are permanently unlocked for convenience, while the more sensitive login credentials remain encrypted until I actually need them.
And how would that protect you against an attack that has compromised your system? If the system is compromised, then an additional lock does not stop the attacker from waiting until you open it.
The tweet he commented on was indeed a nice idea, but a CEO should have had the foresight to see that the things Trump stated in it would not come true. Looking at it now, it looks like it was more or less a threat that led to a closer relationship between “tech bros” and the current administration instead of a “take down” of them.
Immich requires a server to function, but a lot of (or even all of) its functions are things that could reasonably be done entirely on-device. Aves combined with some automatic backup solution such as Nextcloud gets (from what I can tell) most of the functionality Immich offers.
How would you back up Immich on-device?
And if you back up to Nextcloud, then you already have a server?
So you are arguing that having a file server is enough, and processing is done on the client side?
That would be very inefficient in this case.
I could come up with other points but this should give you an idea. Yes, for some use cases a server-client approach does not make sense but for a dedicated photo backup and indexer it absolutely does.
I have been fully backing up my mail server with restic for over a year now, with some exclusions like /tmp etc., including updated binaries and Docker images, and have about 16 GB of data with hourly backups.
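Roughly, the backup job looks something like this (repository path and exclude list are illustrative, not my exact setup):

```
# Hourly job (cron or systemd timer); repository path and excludes are examples
restic -r /mnt/backup/mailserver backup / \
  --exclude /tmp --exclude /proc --exclude /sys --exclude /dev --exclude /run

# Prune old snapshots so the hourly history stays manageable
restic -r /mnt/backup/mailserver forget --keep-hourly 24 --keep-daily 30 --prune
```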
If you use any kind of deduplication and/or compression, the system files do not amount to any meaningful size (assuming there is no additional encryption on the VM disks). Especially when you consider the size of OP's data, 1.5 TB, the couple of GB of system binaries etc. do not really matter.
Depends on the root setting, and on your goal. What is the purpose of the proxy? I doubt that it is easy to bypass, but you could still run a proxy or VPN as a user; this would not bypass the proxy, but any filtering/blocking would not be possible, etc.
You can simply just download a binary and run it.
I’m also of the opinion that if a bad actor capable of navigating the linux file system and getting my information from it has physical access to my disk, it’s game over anyway.
I am sorry, but that is BS. Encryption is not easy to break like in some movies.
If you are referring to a bad actor breaking in and modifying your hardware with, for example, a keylogger/sniffer, then that is something disk encryption does not really defend against.
With initramfs and dropbear you can make the password prompt accessible over SSH, so you can enter the password from anywhere.
Edit: For Debian it is something like the following (from memory, so double-check the exact paths for your release):
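```
# Install dropbear support for the initramfs
sudo apt install dropbear-initramfs

# Add your SSH public key (on older releases the path is /etc/dropbear-initramfs/authorized_keys)
sudoedit /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so dropbear and the key are included
sudo update-initramfs -u

# At boot, SSH into the machine and unlock the disk with:
# cryptroot-unlock
```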
Full disk encryption on everything: my servers, PCs, etc. It gives me peace of mind that my data is safe even when a device is no longer in my control.
That’s also my take on the topic, and that is also what Backblaze's data suggests. If they did not believe it themselves, they would stop buying unreliable drives. Every brand can have outlier models with a bad failure rate.
If I understand you correctly, your server is accessing the VM disk images via an NFS share?
That does not sound efficient at all.