9

Today I decided to perform a clean install of the Creative Sound Blaster drivers, as they always start glitching by themselves after a while. That meant going through the whole cleanup procedure, which took me almost two hours.

And honestly, I can't see a reason why. Although Creative is, IMHO, the undisputed first-place winner for producing poor-quality software that never works, the bloat problem is not exclusive to them.

A PC with a Canon digital camera driver will have around ten Canon entries, interconnected in all sorts of ways. Visual Studio is another prime example: a full install creates around 50 entries, and repairing it is only possible by nuking everything. Once it even managed to ruin the whole OS install when I was upgrading from VS2008 to VS2008 SP1 or thereabouts. As it turned out, 5 GB of free space was not enough for a 300 MB patch...

So this really seems to be a widespread problem. Almost every application nowadays ships with unpackers and multiple spyware-ish "friends" that get installed alongside it, and driver packages routinely weigh in at around 600 MB for printers and the like.

But why? Is it the developers' fault? Applications like that are a nightmare to support and never work 100% reliably, and almost every user I know is very negative about all the bloat they get as a mandatory driver install for a USB thumb drive, printer, camera, sound card, or browser.

From what I know, NSIS from Nullsoft seems to be the only setup system that is not bloated. Take the Firefox installer, for example: a clean, pretty much xcopy-based install without any problems.
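The appeal of an xcopy-style install is that it reduces to two operations: copy a file tree and keep a list of what was copied, so uninstalling is just deleting what the list names. A minimal sketch of that idea in Python (the `install.manifest` file name and the function are my own illustration, not anything NSIS or Firefox actually does):

```python
import shutil
from pathlib import Path

def install(source_dir: str, target_dir: str) -> None:
    """Copy the application tree and record a manifest for a clean uninstall."""
    src, dst = Path(source_dir), Path(target_dir)
    dst.mkdir(parents=True, exist_ok=True)
    manifest = []
    for item in src.rglob("*"):
        rel = item.relative_to(src)
        out = dst / rel
        if item.is_dir():
            out.mkdir(parents=True, exist_ok=True)
        else:
            out.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, out)
            manifest.append(str(rel))
    # The manifest is the entire "registry": uninstall = delete the listed files.
    (dst / "install.manifest").write_text("\n".join(manifest))
```

No interconnected registry entries, no shared components: everything the uninstaller needs to know lives next to the application itself.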

So why aren't people using simple setups and applications that aren't rooted through 30 layers of interconnection? Is it because developers are lazy? The use of code-generation tools? Do corporations push heavyweight apps as something users are supposed to love? What's the cause, and is there hope that software will someday return to basics? What steps can you take to avoid writing bloat when you start a new application from scratch?

Coder
  • 6,978

8 Answers

10

To quote Joel in Strategy Letter IV: Bloatware and the 80/20 Myth:

[...] there are lots of great reasons for bloatware. For one, if programmers don't have to worry about how large their code is, they can ship it sooner. [...] If your software vendor stops, before shipping, and spends two months squeezing the code down to make it 50% smaller, the net benefit to you is going to be imperceptible. [...] But the loss to you of waiting an extra two months for the new version is perceptible, and the loss to the software company that has to give up two months of sales is even worse.

A lot of software developers are seduced by the old "80/20" rule. It seems to make a lot of sense: 80% of the people use 20% of the features. So you convince yourself that you only need to implement 20% of the features, and you can still sell 80% as many copies.

Unfortunately, it's never the same 20%. Everybody uses a different set of features. [...]

When you start marketing your "lite" product, and you tell people, "hey, it's lite, only 1MB," they tend to be very happy, then they ask you if it has their crucial feature, and it doesn't, so they don't buy your product.

4

Quite a large part of it has to do with the dependencies of a product. Your operating system ships with a lot of standard libraries for all kinds of things. However, these libraries have had different versions throughout the evolution of the OS, and a generic installer cannot assume that the specific version it was built against will actually be present.

Therefore the full installer needs to include the correct version of every dependency to make sure that everything will definitely work after installation, regardless of the initial state of each dependency on the target computer. This can be quite significant bloat for certain types of applications, for example .NET-based applications that need to be deployed to Windows XP systems.

Until recently, one installer system that I worked with needed every single previous .NET version installed before it could deploy the latest one, so any .NET 3.5 application required installation binaries for .NET 1.0, 1.1, 2.0 and 3.0 ON TOP of 3.5. In this case, only the installer is bloated.

One workaround is a web installer, which downloads only those components that are actually missing from the target system; this can be a gigantic size benefit. Of course, it limits your installations to systems that have internet connectivity.
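The core logic of a web installer is a simple set difference: compare what is already on the machine against the list of prerequisites, and fetch only the gap. A sketch, assuming a made-up component table (the names and URLs below are purely illustrative):

```python
# Hypothetical prerequisite table a bootstrapper might ship with.
# Only this tiny table travels with the installer; payloads stay online.
COMPONENTS = {
    "runtime-3.5": "https://example.com/runtime35.msi",
    "vcredist":    "https://example.com/vcredist.msi",
}

def missing_components(installed: set) -> list:
    """Return only the prerequisites not already on the system."""
    return [name for name in COMPONENTS if name not in installed]

def plan_downloads(installed: set) -> list:
    """The URLs the bootstrapper would actually need to fetch."""
    return [COMPONENTS[name] for name in missing_components(installed)]
```

On a machine that already has everything, the download plan is empty, which is exactly the "only the installer is bloated" problem inverted: the bloat exists only when it is actually needed.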

3

I think a lot of it has to do with layer upon layer of library code. Obviously, when you use a library you don't use everything in it, so the excess adds up as you include more and more libraries.

Combine that with the fact that an hour of a programmer's time keeps getting more expensive while the processing power and storage of a typical computer get cheaper by the year, and you see that it is actually more cost-efficient this way.

JohnFx
  • 19,040
2

My guess is that there are a lot of features that someone, at some point, thought were a good idea. When a lot of people each have separate ideas that all get folded into one application, this is how it becomes so complicated. I wouldn't blame the developers in the case of large corporate products, where there should be product managers responsible for what goes into the product and how to optimize it from various perspectives.

Another side to this is technical debt, which likely doesn't get managed well in most cases because it isn't seen as a good investment of time. I'd suspect new features and bug fixes look like better business value than refactoring or other debt work with little immediate payoff. How often does a team of developers get a couple of weeks to clean up legacy code when the code base is rather old? My guess would be: not often.

JB King
  • 16,775
2
  • "We need something to do X. Let's use library $BIGFATLIBDESIGNEDFORSOMETHINGELSE, because I used it in a different project"
  • "I think we don't need this code anymore, but to make sure nothing breaks, let's just keep it"
  • No (or not enough) unit tests, which leads to
  • No refactoring
  • No tracking of where settings have been stored over the years, so now the settings are everywhere
  • ...
Simon
  • 1,774
1

It is a vicious cycle in which everyone involved can be blamed. One cycle of despair consists of the following steps:

  1. Business people ask for bloated features.
  2. Developers implement the bloated features even though they know they shouldn't.
  3. Customers pay for the bloated features even though they only want the product, not the stupid features.
  4. Business people conclude that customers want the bloated features.
  5. Go to step one and repeat.

How do you stop it? There is no easy answer, but it is clear that to stop the cycle, one of the steps has to be broken. Thus it can only be broken by business people, developers, or customers taking revolutionary action.

Spoike
  • 14,771
0
  1. An engineer tried to optimize a module but introduced several customer issues. His manager said no. The engineer then decided not to "make trouble" until trouble troubled him.
  2. For a complex system, the vendor has already added many patches and fixed thousands of bugs to make it stable in customers' environments. It does not have good quality from a software point of view, but it works, and no one wants to rewrite it only to fix the same number of bugs all over again.
  3. For backward-compatibility reasons, a feature needs to stay in even if it is no longer popular in the market.
appleleaf
  • 101
0

It's invariably laziness that causes the bloat (or the mud, as in the seminal article on this subject, the Big Ball of Mud).

For example, where I work we have a "legacy" C++ application that is nevertheless quite well designed: the clients talk to an API that talks to a server that does the DB work. All sensibly done. Recently we needed an additional module, but rather than implement it correctly, the dev decided to write it in .NET; worse, he decided that accessing data via the API was too difficult (it isn't, but...) and made DB connections directly. So you see how this kind of mess happens (and all with the agreement of the TA, who put "quick" over "good").

I've seen this kind of thing before too. At a previous job, a small part of the GUI was HTML, because some dev thought it a good idea to write the data as HTML and have the GUI display that. So one small part does something different from the rest.

In short, laziness is bad and consistency is good (regardless of the technology used). I'd rather have an all-MFC application than one that is part MFC, part WinForms, and part WebGL, with many different back-end architectures tying it all together.

gbjbaanb
  • 48,749