Not discrediting Open Source Software, but nothing is 100% safe.
Closed-source software is inherently predatory.
It doesn’t matter if you can read the code or not, the only options that respect your freedom are open source.
Luckily there are people who do know, and we verify things for our own security and for the community as part of keeping Open Source projects healthy.
Open source software is safe because somebody knows how to audit it.
It’s safe because there’s always a loud nerd who will make sure everyone knows if it sucks. They will make it their life mission.
My very obvious rebuttal: Shellshock was introduced into bash in 1989 and wasn’t found until 2014. It was incredibly trivial to exploit: anywhere attacker-controlled data ended up in an environment variable (CGI scripts, DHCP clients, etc.), you got arbitrary command execution with the privileges of whatever process invoked bash, which is insane.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
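For anyone who wants to try the probe themselves, here it is with straight quotes so it pastes cleanly, plus what the output means. (The CVE number and behavior described are the well-known public details of the bug, not anything specific to this thread.)

```shell
# Classic Shellshock probe (CVE-2014-6271). The env var's value looks like
# an exported function definition; vulnerable bash kept parsing past the
# function body and executed the trailing command.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
# Vulnerable bash prints:  vulnerable
#                          this is a test
# Patched bash prints only: this is a test
```

If you see "vulnerable" on the first line, the injected `echo` ran before the actual command, meaning your bash parses trailing code after function definitions in the environment.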
Though one of the major issues is that people get comfortable with that idea and assume that for every open source project there is some good Samaritan auditing it.
I would argue that even in that scenario it’s still better to have the source available than have it closed.
If nobody has bothered to audit it then the number of people affected by any flaws will likely be minimal anyway. And you can be proactive and audit it yourself or hire someone to before using it in anything critical.
If nobody can audit it that’s a whole different situation though. You pretty much have to assume it is compromised in that case because you have no way of knowing.
Oh definitely, I fully agree. It’s just a lot of people need to stop approaching open source with an immediate inherent level of trust that they wouldn’t normally give to closed source. It’s only really safer once you know it’s been audited.
safe**R**, not safe. Seriously, how is this a hard concept.
I really like the idea of open source software and use it as much as possible.
But another “problem” is that you don’t know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about. Sure, you can compile by yourself. But who does that 😉?
> But another “problem” is that you don’t know if the compiled program you use is actually based on the open source code or if the developer merged it with some shady code no one knows about.
Actually, there is a Debian project working on exactly that problem, called Reproducible Builds.
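The core idea is easy to sketch: build the same source twice (or once locally and once on release infrastructure) and compare the artifacts bit-for-bit. A toy, runnable illustration below, where the "build" is a deliberately deterministic stand-in; a real project would pin the toolchain and normalize timestamps, and Debian's diffoscope exists for diagnosing mismatches:

```shell
# Toy sketch of the reproducible-builds check: run the same deterministic
# "build" twice and verify the outputs are identical bit-for-bit.
printf 'int main(void){return 0;}\n' > src.c

fake_build() {
  # Stand-in for a deterministic build step: hash the source into a "binary".
  sha256sum src.c | cut -d' ' -f1 > "$1"
}

fake_build build1.out   # e.g. built locally from the published source
fake_build build2.out   # e.g. built by the upstream release infrastructure

if cmp -s build1.out build2.out; then
  echo "reproducible: artifacts match"
else
  echo "mismatch: release may not come from this source"
fi
```

When a build is reproducible, anyone can confirm the published binary really came from the published source just by rebuilding and comparing hashes, without trusting the developer's release machine.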
Many people do that…
We trust open source apps because nobody would add malicious code to their app and then release the source code to the public. It doesn’t matter whether someone actually looks into it or not; having the guts to publish the source code at all buys the developer a lot of trust. If the developer were shady, they would rather hide the source code, or try to, and make it harder for people to find out.
What about the various NPM packages written by one guy, who then moved on to other things and handed control of the package to someone else who seemed legit, only for them to slowly add malicious code to that once-trusted package that a large number of other packages depend on?
Or someone raising a pull request for a new feature that looks legit on its own, but when combined with other PRs or existing code ends up introducing a vulnerability that can be exploited.
Did you fabricate that CPU? Did you write that compiler? You gotta trust someone at some point. You can either trust someone because you give them money and it’s theoretically not in their interest to screw you (lol) or because they make an effort to be transparent and others (maybe you, maybe not) can verify their claims about what the software is.
It usually boils down to this, something can be strictly better but not perfect.
The ability to audit the code is usually strictly better than closed source. Though I’m sure an argument could be made about exposing the code base to bad actors, I generally think it’s a worthy trade-off.
This all or nothing attitude is boring.
No. Just because they don’t fabricate the CPU doesn’t mean they should hand their data to some corporation …
Trust has no place in computing.
“Trust has no place in computing” is a concept that we are still quite distant from, in practical terms.
But yeah, definitely don’t hand your personal information over to a corporation, even if they’re offering to take a lot of your money, too!
Completely missing the point. Collective action is what makes open source software accessible to everybody.
You don’t NEED to be able to audit it yourself. Still safer than proprietary software every way you look at it.
While I generally agree, the project needs to be big enough that somebody actually looks through the code. I would argue Microsoft Word is safer than some small abandoned open source software from some random developer.
no, proprietary software is always potential malware and you have no weapon against it. being able to audit is always better.
That’s true, but I’m not a programmer, and on a GitHub project with 3 stars I can’t count on someone else doing it. (Of course this argument doesn’t apply to big projects like LibreOffice.) With Microsoft I can at least trust that they will be in trouble, or at least get bad press, when doing something malicious.
> With Microsoft I can at least trust that they will be in trouble
lol yeah if anybody finds out… something something NSA
I mean if a github project has only 3 stars, it means no one is using it. Why does safety matter here? Early adopting anything has risks.
This is kind of a false comparison. If it has 3 stars then it doesn’t even qualify for this conversation as literally no one is using it.
- Yes, I do it occasionally
- You don’t need to. If it’s open source, it’s open to billions of people. It only takes one finding a problem and reporting it to the world
- There are many more benefits to open source:
  a. It future-proofs the program (much old software can’t run on current setups without modifications). Open source makes sure you can compile a program with more recent tooling and dependencies rather than rely on existing binaries built with ancient tooling or dependencies.
  b. It removes reliance on the developer for packaging. A developer may only produce binaries for Linux, but I can take the source and compile it for macOS or Windows, or for a completely different architecture like ARM.
  c. It means I can contribute features to the program if they weren’t the developer’s priority. I can even fork it if the developer doesn’t want to merge them into their branch.
The point is not that you can audit it yourself, it’s that SOMEBODY can audit it and then tell everybody about it. Only a single person needs to find an exploit and tell the community about it for that exploit to get closed.
Exactly! I wait on someone who isn’t an idiot like me to say, “ok, so here’s what’s up guys.”
The difference is, though, that if you care enough, you have the capability of finding out what’s in the code; the only limiter at that point is yourself.
In closed source, you just have to trust the publisher and developers
no, but I know a bunch of passionate geeks are doing it.
Also, recompile the source code yourself if you think the author is pulling a fast one on you.
is there not a way to check if the source and release aren’t the same? would be cool if GitHub / GitLab / etc… produced a version automatically, or there was some instant way to check
“Transparent and accountable government is a waste of time because I personally don’t have the time to audit every last decision.”
OP, you are paranoid beyond belief.
It’s also better than obfuscated code that nobody knows is doing shit regardless of if it is looked into or not.
Open source is the future.
Here is my quick guide to audit code.
Step one. Google “is the code safe.”
Step two. Find out that the repo is actually by me.
Step three. Consider it unsafe.