
We know of Linus's law:

Given enough eyeballs, all bugs are shallow

In general, people seem to say that open-source software is more secure because of that very thing, but...

There are many small OSS projects with just one or two developers (the cathedral model, as described by ESR). For such projects, does releasing the source code actually lower their security? For projects like the Linux kernel there are thousands of developers, and security vulnerabilities are quite likely to be found. But when only a few people look through the source code, while crackers (black-hat hackers) can see it as well, is security lowered rather than increased?

I know that the security advantage closed-source software has over OSS is security through obscurity, which isn't good (at all), but it could help to some degree, at least by buying those few devs some more time (security through obscurity doesn't help with the if, but with the when).

EDIT: The question isn't whether OSS is more secure than closed-source software, but whether the advantages for crackers are greater than the advantages for the developers who want to prevent security vulnerabilities from being exploited.

J.K.
Anto

4 Answers

2

Open source just means the code or application is commonly available, which can help you get started. Don't assume code is good or bad by default just because it's open source; you must always review any code yourself, checking its quality, commenting, documentation, and how easy it is to upgrade or change.

Use a security checklist to identify all possible problems, one that matches whatever language, server, and OS you use, whether the code is open or closed source.

1

Security of code is ensured only by implementing a proven security algorithm correctly. This is in fact easier to verify with open source code, i.e., bugs get sorted out faster. Widely used security algorithms like SHA are published, yet their availability doesn't make it any easier for crackers to break them. It is the proven mathematical theory behind them that makes them secure.
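To make this concrete, here's a minimal sketch using Python's standard `hashlib` (the input value is just an illustration): the SHA-256 algorithm is completely public, and anyone can recompute a digest from it, yet that publicity gives an attacker no practical way to invert the digest back to its input.

```python
import hashlib

# SHA-256 is a published, openly specified algorithm.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# Knowing the algorithm lets anyone verify this digest, but recovering
# b"hello" from the digest alone would still require brute force --
# the security rests on the math, not on keeping the algorithm secret.
```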

As for small open source projects, I guess it doesn't make sense to publish code anyway while it is only half-baked and not functional. From the security perspective, it should then be thoroughly reviewed, preferably by as many security experts as possible, and/or checked against a security checklist, as @crosenblum suggests.

1

I think it probably does lower security a bit for an open source project so small or obscure that nobody but its authors is really looking at it. In that case, you aren't getting any help with security from a community, but if someone did for some reason target your small, obscure project, they'd have an easier time since they'd have the source. Not that they'd find it impossible without the source, but having it speeds up understanding, whether your intent is friendly or malicious.

A lot of the most common web vulnerabilities (HTML-escaping issues, SQL injection, etc.) are pretty easy to check for without having the source, too, so keeping things closed doesn't mean you're safe.
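As a sketch of why source access isn't needed for this class of bug (the table and inputs below are made up for illustration, using Python's built-in `sqlite3`): an attacker only has to try crafted inputs against the running application, while the fix is a purely server-side change to parameterized queries.

```python
import sqlite3

# Illustrative in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# A classic probe string an attacker can try blindly, no source needed.
malicious = "' OR '1'='1"

# Vulnerable: user input concatenated directly into the SQL string.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print(len(rows))  # 1 -- the injected OR condition matches every row

# Safe: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
```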

I'd tend to think you're better off choosing the license (open or closed) based on what makes sense independent of security concerns, and then treating security as something to worry a lot about either way.

Havoc P
-1

You can think of security as a state. A given piece of software has some level of security, but unless that level is mathematically proven, it is unknown.

Publishing source code doesn't make your code any more or less secure; all it does is change how much information is known about your software.

What publishing your code can do is make it easier for someone to exploit a vulnerability. That vulnerability is there whether or not you publish the code; the software is just as insecure either way. The question comes down to how much effort it takes to find and exploit it.

As an example, 10+ years ago it was very difficult to exploit buffer overflows. In fact, not much attention was given to them as a security issue, because they were thought to be too difficult to exploit.

Since that time, people have figured out ways to exploit those vulnerabilities easily, and have even written software to do so automatically. Did the applications that were vulnerable to these exploits become less secure? No, they were always that insecure. All that changed was the knowledge of how to exploit them.

So, to summarize: not publishing your code won't make you more secure, but it can make it a little more difficult for someone to exploit you. And over time, that advantage will likely go away.