A lot of bad takes in here.
Here are a few things that apparently need to be stated:

- Any code that is distributed can be audited, closed or open source.
- It is easier to audit open source code because, well, you have the source code.
- Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly), dynamic analysis (using a debugger to walk through the assembly at runtime), or both (see the sketch below).
- Examples of vulnerabilities published by independent researchers demonstrate two things: people are auditing open source software for security issues, and people are in fact auditing closed source software for security issues.
- Vulnerabilities published by independent researchers don’t demonstrate any of the wild claims many of you think they do.
- No software of a reasonable size is 100% secure. Closed or open doesn’t matter.
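As a concrete illustration of the static-analysis route in the third bullet, here’s a minimal sketch using the Capstone disassembly engine’s Python bindings (pip install capstone). The bytes and load address below are made up for illustration, not taken from any particular binary.

```python
# Minimal static-analysis sketch: disassemble raw x86-64 machine code
# without any source, using the Capstone engine.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# A tiny hand-written x86-64 snippet (push rbp; mov rbp, rsp; ...; ret),
# standing in for bytes you would carve out of a real executable.
code = b"\x55\x48\x89\xe5\x89\x7d\xfc\x8b\x45\xfc\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):  # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```

Dynamic analysis is the same idea at runtime: stepping through those instructions in a debugger such as gdb rather than reading them cold.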
“Closed source software can still be audited using reverse engineering techniques such as static analysis (reading the disassembly) or dynamic analysis (using a debugger to walk through the assembly at runtime) or both.”

How are you going to do that if it’s software-as-a-service?
See the first bullet point. I was referring to any code that is distributed.
Yeah, there’s no way to really audit code running on a remote server with the exception of fuzzing. Hell, even FOSS can’t be properly audited on a remote server because you kind of have to trust that they’re running the version of the source code they say they are.
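Since fuzzing is about the only option the comment above leaves for remote services, here’s a minimal black-box sketch of what that can look like. The endpoint and parameter name are hypothetical, and this should only ever be pointed at a service you’re authorized to test.

```python
# Minimal black-box fuzzing sketch: throw semi-random inputs at a remote
# endpoint and flag responses that suggest the service mishandled them.
import random
import string

import requests

TARGET = "https://example.com/api/search"  # hypothetical endpoint
PARAM = "q"                                # hypothetical query parameter

def random_payload(max_len=256):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(1, max_len)))

for i in range(100):
    payload = random_payload()
    try:
        resp = requests.get(TARGET, params={PARAM: payload}, timeout=5)
        # 5xx responses (or wildly different sizes/latencies) are worth a closer look.
        if resp.status_code >= 500:
            print(f"[{i}] HTTP {resp.status_code} for input {payload!r}")
    except requests.RequestException as exc:
        print(f"[{i}] request failed ({exc}) for input {payload!r}")
```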
You can always brute force the SSH login and take a look around yourself. If you leave an apology.txt file in /home, I’m sure the admin won’t mind.
Lol, unlikely SSH is exposed to the net. You’ll probably need an RCE in the service to pop a shell.
That’s not universally true, at least for servers you don’t share a LAN with. Most small-scale apps hosted on a VPS are configured with a public-facing SSH login, since that’s usually the only practical way to administer the box.
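If you want to check that for a particular host, here’s a quick sketch (placeholder hostname) that just looks for an SSH banner on port 22 from the outside:

```python
# Check whether SSH answers on port 22 from the public internet by reading
# the banner the server sends on connect (e.g. "SSH-2.0-OpenSSH_9.6").
import socket

HOST, PORT = "vps.example.com", 22  # placeholder host

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(256).decode(errors="replace").strip()
        print(f"Public-facing SSH detected: {banner}")
except OSError as exc:
    print(f"Port 22 not reachable from here ({exc}); SSH may be firewalled or LAN-only.")
```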
Ohhh, code that is distributed. The implication of that word flew over my head lmao, thanks for the clarification.
Very good points here, especially your last point
As you increase the complexity of a system, it makes sense that your chance of a vulnerability increases. At the end of the day, open source or not, you will never beat basic algorithm principles and good coding practice.
I would however argue that just because closed source code can possibly be reversed doesn’t mean that’s as easy or as reliable as having the source code. As long as corporations have an interest in possession, there will always be someone striving and spending ungodly amounts of money to keep their castle grounds heavily gated, which makes securing them en masse much harder and slower.
I agree, it takes longer to audit closed source software. Just wanted to point out it’s not impossible, as long as you have a binary.
deleted by creator
Second bullet point: it’s much easier to audit when you have the source code. Just wanted to point out it’s not impossible to audit closed source software. It’s just more time-consuming, and fewer people have the skills to do so.
Also, just because you can see the source code does not mean it has been audited, and just because you cannot see the source code does not mean it has not been audited. A company has a lot more money to spend on hiring people and external teams to audit their code (without needing to reverse engineer it) than some single developer has for their OSS project, even if most of the internet relies on it (see OpenSSL).
And just because a company has the money to spend on audits doesn’t mean they did, and even when they did, doesn’t mean they acted on the results. Moreover, just because code was audited doesn’t mean all of the security issues were identified.
Yup, all reasons why whether the software is open or closed doesn’t determine how secure it might be. Both open and closed source code can be developed in a more or less secure fashion. Just because something could be done does not mean it has been done.
Nah, I wouldn’t say that. Especially if you consider privacy a component of security. The fact that a piece of software can more easily be independently reviewed, either by you or the open source community at large, is something I value.
Good security is a component of privacy. But you can have good security with no privacy - that is the whole idea of a surveillance state (which IMO is a horrifying concept). Both are worth having, but my previous responses were only about the security aspect of OSS. There are many other good arguments to have about the benefits of OSS, but increased security is not a valid one.