It’s not sufficiently cruel to satisfy the ethos of a large block of our voters.
This is a secondary account, but it's the one that sees the most usage. My first account is listed below; the main one has a list of all the accounts I use.
On Android, secure boot means the boot loader is validated, the kernel is validated, and the kernel in turn validates all application code loaded onto the system. If that code hasn't been signed by an authorized party, an attacker needs an additional bug to obtain persistent access.
This is why iPhone jailbreaks are bifurcated into tethered and untethered: many modern OSs require a second bug to survive a reboot and achieve persistence, because the introduced code won't pass the signature check.
Excellent point. Perhaps some of these attributes come with the monolithic design choice.
This cannot be done on most consumer OSs, like macOS, Windows, or Android, because secure boot would refuse to load a modified kernel from disk. It is possible on typical desktop Linux installations that don't implement secure boot.
Absolutely! It's your computer and it should always obey you. Trouble is, the kernel doesn't know the difference between you, the human being, and a program running as root on your behalf, like wpa_supplicant, which may be open to compromise.
Perhaps, like the safety on a gun, there should be another step to inserting code into your kernel, to ensure it's being done very deliberately. We kind of see this with MokManager for enrolling secure boot keys: physical key presses are required to add a key, and by design the process cannot (easily) be automated by software. You have to reboot and confirm it in the UEFI.
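The MOK enrollment flow looks roughly like this (a sketch of the shim/mokutil path on common distros; the actual enrollment only happens at the next boot, at the physical console):

```
# Stage a key for enrollment (prompts for a one-time password):
#   mokutil --import MOK.der
# Reboot: shim launches MokManager, which asks you, at the console,
# to confirm the enrollment and re-enter the password. No running
# software can click through this step for you.
# Afterwards, verify the key landed:
#   mokutil --list-enrolled
```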
This is where runtime or hypervisor kernel protections make sense: making sure the kernel behaves within expected parameters except when we really, truly want to load new kernel code. It's the same reason we have syscall filtering on so many services, like the OpenSSH server's pre-authentication process. We don't want the system to get confused when it really matters.
Why? Do you think there’s no value in using virtualization to enforce constraints on the runtime behavior of the kernel?
This is interesting, especially in embedded, and it's a way to make loading code harder, because now the kernel may not even have a facility for loading code. This goes in the right direction: not even root can load a module if there simply isn't any code for loading one.
But, many if not most popular OSs go a step further and inspect the kernel to make sure it isn’t adding code to itself to the maximum extent possible, even if it were exploited by a bug.
Precisely! It’s about making compromise expensive, multi-layered, driving up the cost so it becomes fiscally unattractive for the attacker.
The threat model is that root shouldn't have to be a lose condition. It is certainly very bad, but there should be some things root cannot do, like modify the kernel, while root remains the highest privilege level designed into the system. On Android, for example, SELinux rules severely constrain the root user to frustrate a total system compromise even if an attacker gains root.
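As a sketch of how that is expressed (an illustrative SELinux-style fragment with a made-up domain name, not copied from AOSP): loading a module requires the sys_module capability, which SELinux checks separately from the uid, and a neverallow makes the policy build fail if any rule would ever grant it:

```
# Illustrative policy fragment. A uid-0 process confined to this
# hypothetical domain still cannot load kernel modules: the
# capability check is independent of the uid check.
neverallow untrusted_root_app self:capability sys_module;
```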
The attacker must then find a way to patch the kernel to get the unconstrained root that we have today on Linux desktops.
It does appear to take an interesting approach with using VMs to separate out the system components and applications, but I don’t think it introspects into those VMs to ensure the parts are behaving correctly as seen from the outside of the VM looking in.
It's a really cool OS I hadn't heard of before, though!
One consideration is that on a Linux server, the data of interest to attackers is more likely to be accessible by some low-privileged daemon like a SQL server. Compromising the kernel in such a fundamental way doesn’t provide anything of value on its own, so defenses perhaps are not as mature along this plane. It’s enough to get to the database. You might go for the kernel to move laterally, but the kernel itself isn’t the gold and jewels.
Server environments are much more tightly controlled, as you mentioned. For that reason, and because of the differences in use case, I feel there are more degrees of trust (or distrust) on a user system than on a server configured top to bottom by an expert, and desktop Linux doesn't really express this idea as well as maybe it should. It places a lot of trust in the user, to say the least, and that's not ideal for security.
I think secure boot is a great idea. There must be a way to have layered security without abusing it to lock users out of machines they own.
My illustration is meant to highlight the lack of care taken w.r.t. kernel code compared to systems that require code signing. If some privileged process is compromised, it can simply ask the kernel to insert a module with arbitrary code. Should processes be able to do this? For many systems, the answer is no: only otherwise authenticated code can run in the kernel, and no userspace process has the right to insert arbitrary code. A system with a complete secure boot implementation and signed kernel modules prevents even root from inserting an unauthorized module. Indeed, on a Samsung Android device with RKP, even unconfined root cannot insert a kernel module that isn't signed by Samsung. The idea of restricting even root from doing dangerous things isn't new; SELinux uses rules to enforce similar concepts.
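On Linux, the "only signed code runs in the kernel" half of this maps onto real kernel build options (a config fragment, not a complete setup: the trust anchor still has to come from secure/verified boot, or root can just boot a kernel without these options):

```
CONFIG_MODULE_SIG=y          # check signatures on modules at load time
CONFIG_MODULE_SIG_FORCE=y    # refuse unsigned or badly signed modules
CONFIG_MODULE_SIG_ALL=y      # sign all in-tree modules during the build
CONFIG_MODULE_SIG_SHA256=y   # digest used when signing
```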
Yes, not being root is a useful step, but protecting the kernel from root might still be desirable, and many systems try to do this. Exploits can sometimes get untrusted code running as root on otherwise reasonably secure systems. It's nice if we can have layered security that goes beyond that, so I ask: why don't we have this today, when other systems do?
Oh my gosh I’m so dumb. I’ve read the name of that tool dozens of times and never made the connection until now.
In IT it’s called the scream test. You unplug it and see who screams.
As an American:
Whyyyyy what could we possibly stand to gain from doing this, ever? This is so stupid.
Unless the goal is simply to crash the economy so billionaires can buy businesses for cheap, I really have difficulty imagining why we would ever do this.
I'm really surprised to read this. I make extensive use of this allocator on my machines, but I thought that was because of limitations in zsmalloc. Those limitations no longer appear to exist.
You don’t wanna know where it’s been.
I’m glad it still worked. Those are pricey.
I love Caddy. So easy to configure, and the automatic SSL is almost always what I need.
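For reference, a minimal Caddyfile (example.com and the backend port are placeholders); the automatic HTTPS comes free just from naming the site:

```
example.com {
    # Caddy obtains and renews the certificate automatically.
    reverse_proxy localhost:8080
}
```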
If a company does business with the government, Windows/Office is the only way to go, so absolutely legacy reasons. There are a lot of viable choices today. It's amazing to think that Office is something like 20% of revenue, last time I checked.