I agree the question here is not so much which distro but which browser.
Today's low-end laptops often come with 8 GB of RAM. Even common phones have more than 2 GB of RAM.
Because of the curve for cyclists? That’s fine.
It’s an upgrade from the old condition of no protection at all.
Until it gets a security audit, I’ll stick with Signal.
My town’s subreddit just started a policy to disallow links to X for similar reasons.
There is a movement to avoid the platform.
Spent hours tracking down a test failure and introduced a new bug in the process.
I use KDE Connect for laptop to desktop transfers.
Tried it a couple times. Went back to the CLI.
If you know the CLI or are willing to learn, the GUI is just another layer for bugs to exist in.
I think there is a catch-22.
pg_dump needs to connect to a running PostgreSQL instance.
But if you upgrade the binaries and try to start up, you can’t because the old data format doesn’t work. Because you can’t start up, pg_dump can’t connect.
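The usual way out of the catch-22 is that packaged Postgres keeps the old binaries installed side by side with the new ones, so you dump with the old version before switching. A rough sketch, assuming Debian-style versioned paths and a 15-to-16 upgrade (adjust versions, ports, and paths for your system):

```
# 1) Dump while the OLD binaries can still start the old data dir:
/usr/lib/postgresql/15/bin/pg_dumpall -p 5432 > backup.sql

# 2) Initialize a fresh cluster with the NEW binaries:
/usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main

# 3) Start the new server and restore into it:
/usr/lib/postgresql/16/bin/pg_ctl -D /var/lib/postgresql/16/main start
psql -p 5432 -f backup.sql postgres
```

pg_upgrade automates roughly the same dance (it also needs both sets of binaries on disk), and with --link it can be much faster than a full dump/restore.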
I’ve spent more than a decade supporting both Postgres and MongoDB in production.
While they each have quirks, I prefer the quirks of Postgres.
I just spent a massive amount of time retooling code to deal with a MongoDB upgrade. The code upgrade is so complex because that’s where the schema is defined. No wonder MongoDB upgrades are easier: the database has externalized a lot of complexity that now becomes some coder’s problem to deal with.
For minor version upgrades, the database remains binary compatible. Nothing to do.
The dump/restore required during major upgrades allows format changes which enable new features and performance improvements without dragging around cruft forever to stay backwards compatible.
For professionals running PostgreSQL clusters in production there is a way to cycle in the new server version with zero user-visible downtime.
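The zero-downtime path is usually logical replication: stand up the new-version server next to the old one, replicate into it, then switch clients over. A minimal sketch, assuming a hypothetical database mydb with the old server on port 5432 and the new one on 5433, and wal_level = logical on the publisher:

```
# On the old (publisher) server:
psql -p 5432 -d mydb -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

# On the new (subscriber) server, after restoring the schema (pg_dump -s):
psql -p 5433 -d mydb -c "CREATE SUBSCRIPTION upgrade_sub \
  CONNECTION 'host=localhost port=5432 dbname=mydb' \
  PUBLICATION upgrade_pub;"

# Once the subscriber catches up, point clients at 5433 and retire the old server.
```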
ANBERNIC RG40XX H with Knulli is nice too.
Hosting on your own hardware is more likely to be discussed under “homelab”.
Did you see that recently LA kept sending out evacuation orders and didn’t know the cause or how to stop them?
It doesn’t improve security much to host your reverse proxy outside your network, but it does hide your home IP if you care.
If your app can be exploited over the web and through a proxy, it doesn’t matter whether that proxy is on the same machine or across the network.
I would recommend automating only daily security updates, not all updates.
Ubuntu and Debian have “unattended-upgrades” for this. RPM-based distros have an equivalent.
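On Debian/Ubuntu, restricting unattended-upgrades to security updates comes down to the Allowed-Origins list in the apt config. A sketch of the relevant excerpt (the stock file ships with most of this; verify the origin strings against your release):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Keep only the security pocket; comment out the regular -updates origin.
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
```

Enabling the daily run is typically `dpkg-reconfigure -plow unattended-upgrades` or setting APT::Periodic options in /etc/apt/apt.conf.d/20auto-upgrades.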
120 MB? That’s more than a Zip disk!
I knew I attended a well-funded modern college because all the computers had been upgraded with Zip drives.
You mean the cup holder?
Right, it was an example of a pattern. In that case, -p could be used.
Yeah, not sure how I missed this one!
Exactly. It’s not just downtime to worry about, either. It’s disks filling up. It’s hardware failure. It’s DNS outages. It’s random DDoS attacks. It’s automated scans of the internet targeting WordPress. It’s OS, PHP, and database upgrades. It’s setting up graphing, monitoring, alerting, and being on call 24/7 to deal with the issues that come up.
If these businesses are at all serious, they should pay for professional hosting and spend their time running the business.