19

Coming from the world of C and C++, I notice that most build systems have an install target, notably Makefiles (where GNU recommends it, for example) and CMake. This target copies the runtime files (executables, libraries, ...) into the operating system (for example, into C:\Program Files\ on Windows).
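
On the Unix side, a minimal sketch of such a target, assuming GNU make, GNU install and the usual PREFIX/DESTDIR conventions (the hello program is made up for illustration):

    # Illustrative install target only -- not from any real project.
    # Recipe lines must start with a tab.
    PREFIX ?= /usr/local
    BINDIR  = $(PREFIX)/bin

    hello: hello.c
    	$(CC) -o $@ $<

    .PHONY: install
    install: hello
    	install -D -m 0755 hello $(DESTDIR)$(BINDIR)/hello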

This feels really hacky, since in my view it is not the responsibility of the build system to install programs (that is really the job of the operating system / package manager). It also means the build system or build script must know how installed programs are organized: environment variables, registry keys, symlinks, permissions, etc.

At best, build systems should have a release target that outputs an installable package (for example a .deb or .msi), and then kindly ask the operating system to install that package. It would also allow the user to uninstall without having to type make uninstall.

So, my question: why do build systems usually recommend having an install target?

Synxis
  • 307

6 Answers

25

Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don't have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.

Jörg W Mittag
  • 104,619
5

A Makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can be run from anywhere). The install target is just a convention for typical Makefiles.

However, many programs require external resources at run time (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about where those resources live. For example, your bash shell generally reads an initialization file such as /etc/bash.bashrc. These resources generally live in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into your executable.

Try running strings(1) on the executables on your system; you'll find out which file paths are built into them.
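
For instance (the binary and the grep pattern here are only an illustration; what you actually find depends on how the executable was built):

    # Show absolute paths under /etc embedded in the bash binary
    strings /bin/bash | grep '^/etc/'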

BTW, for many GNU programs using autoconf, you can run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that will later be packaged.
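
A typical sequence, sticking with the /tmp/destdir/ example above, might look like:

    ./configure --prefix=/usr            # paths compiled into the binaries
    make
    make install DESTDIR=/tmp/destdir    # files actually land under /tmp/destdir/usr/...
    find /tmp/destdir -type f            # inspect what would go into a package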

FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be "installed"; I am not sure I can prove that, and I cannot imagine how I could make that program installable.

3

Several reasons come to mind.

  • Many packaging tools (the Debian build system, for example, and IIRC rpm as well) already expect the build script to "install" the program into some special subdirectory. So it is driven by backward compatibility, in both directions.
  • A user may want to install the software into a local location, such as under the $HOME directory; not all package managers support that (see the sketch after this list).
  • There may still be environments that have no packaging system at all.
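
For the $HOME case, a plain autoconf-style build already covers it; the --prefix value below is just an example:

    ./configure --prefix="$HOME/.local"
    make
    make install     # no root, no package manager involved
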
max630
  • 2,605
1

One reason not mentioned: quite often you are not using the current version of the software, or you are using a modified version of it. Trying to create a custom package is not only more work, it can also conflict with packages that are already created and distributed. This happens a lot with open source code, especially when breaking changes are introduced in versions newer than the one you are using.

Let's say you're using the open source project FOO, which is currently on version 2.0.1, while you are using version 1.3.0. You don't want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you install the modified 1.3.0 software without having to worry about creating a package and installing it on your system.
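
A sketch of that workflow, keeping the hypothetical FOO project and version numbers from the example (the patch file name is made up):

    # Build and install a pinned, locally patched version without packaging it
    tar xf foo-1.3.0.tar.gz
    cd foo-1.3.0
    patch -p1 < ../bugfix-from-2.0.1.patch   # hypothetical backported fix
    ./configure --prefix=/usr/local
    make
    sudo make install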

Dom
  • 169
1

Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.

This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.

To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.

This is one of the "top of the food chain" problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.

In the case of software installation management, the package manager is this component, and it will run the package's build system, then take the output through a convenient interface ("files in a directory after an installation step"), generate a package and prepare it for upload to a repository.

The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.
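
Debian's tooling is a good illustration of this: a minimal debhelper-style debian/rules file just delegates to the upstream build system, and the packaging machinery stages the install output and builds the package from it. A sketch (the recipe line starts with a tab):

    #!/usr/bin/make -f
    # Minimal dh-style debian/rules: debhelper drives the upstream build system
    # and stages its install step (e.g. make install DESTDIR=debian/<package>).
    %:
    	dh $@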

You may have noticed that only a few of the JavaScript packages available through npm are also available through apt; this is mainly because the JavaScript people decided that npm and the associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.

With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.

1

Well, application developers are the ones who know where each file should go. They could leave that in documentation, and have package maintainers read it and build a script for each package. Maybe the package maintainers will misinterpret the documentation and will have to debug the script until it works. This is inefficient. It's better for the application developer to write a script that properly installs the application he's written.

He could write an install script with an arbitrary name, or make it part of some other script's procedure. However, because there is a standard install command, make install (a convention that predates package managers), making packages has become really easy. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply does a make DESTDIR="$pkgdir/" install. This probably works for the majority of packages, and for even more with a little modification. Thanks to make (and the autotools) being standard, packaging is really, really easy.
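
A heavily trimmed PKGBUILD sketch along those lines (pkgname, version and source are placeholders; metadata and checksums are omitted):

    # Trimmed PKGBUILD sketch -- placeholders only, not a complete package
    pkgname=foo
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    source=("$pkgname-$pkgver.tar.gz")

    build() {
      cd "$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir/" install
    }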

JoL
  • 119