
I am using tar to archive a group of very large (multi-GB) bz2 files.

If I use tar -tf file.tar to list the files within the archive, this takes a very long time to complete (~10-15 minutes).

Likewise, cpio -t < file.cpio takes just as long to complete, plus or minus a few seconds.

Accordingly, retrieving a single file from an archive (via tar -xf file.tar myFileOfInterest.bz2, for example) is just as slow.

Is there an archival method out there that keeps a readily available "catalog" with the archive, so that an individual file within the archive can be retrieved quickly?

For example, some kind of catalog that stores a pointer to a particular byte in the archive, plus the size of the file to be retrieved (and any other filesystem-specific particulars).

Is there a tool (or argument to tar or cpio) that allows efficient retrieval of a file within the archive?
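For what it's worth, the closest I've found in tar itself is GNU tar's -R (--block-number) flag, which prefixes each listing line with the entry's offset in 512-byte blocks, so a slow one-time pass could at least build a crude catalog (a sketch, assuming GNU tar):

$ tar -R -tvf file.tar > file.tar.index      # slow one-time pass over the archive
$ grep myFileOfInterest.bz2 file.tar.index   # fast lookups afterwards

But that still leaves the seeking to me, hence the question.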

10 Answers


tar (and cpio, afio, pax, and similar programs) use stream-oriented formats: they are intended to be streamed directly to a tape or piped into another process. While, in theory, it would be possible to add an index at the end of the file/stream, I don't know of any version that does (it would be a useful enhancement, though).

It won't help with your existing tar or cpio archives, but there is another tool, dar ("disk archive"), that does create archive files containing such an index and can give you fast direct access to individual files within the archive.

If dar isn't included with your Unix/Linux distribution, you can find it at:

http://dar.linux.free.fr/
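For example (a sketch, assuming a recent dar; names are illustrative and the exact options may vary between versions):

$ dar -c mybackup -g bigdir                         # creates mybackup.1.dar with a built-in catalogue
$ dar -l mybackup                                   # list contents quickly, reading only the catalogue
$ dar -x mybackup -g bigdir/myFileOfInterest.bz2    # extract a single file via the index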

cas

You could use SquashFS for such archives. It is

  • designed to be accessed using a FUSE driver (although a traditional interface exists)
  • compressed (the larger the block size, the more efficient)
  • included in the Linux kernel
  • able to store UIDs/GIDs and creation time
  • endianness-aware, and therefore quite portable

The only drawback I know of is that it is read-only.

http://squashfs.sourceforge.net/
http://www.tldp.org/HOWTO/SquashFS-HOWTO/whatis.html
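For example, a minimal round trip (a sketch, assuming the squashfs-tools package; file names are illustrative):

$ mksquashfs bigdir archive.sqsh -b 1M                        # create; larger blocks compress better
$ unsquashfs -l archive.sqsh                                  # list contents without unpacking
$ unsquashfs -d out archive.sqsh bigdir/myFileOfInterest.bz2  # extract a single file into ./out
$ sudo mount -t squashfs -o loop archive.sqsh /mnt            # or mount the image read-only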

MauganRa

While it doesn't store an index, star is purported to be faster than tar. Plus it supports longer filenames and has better support for file attributes.

As I'm sure you're aware, decompressing the file takes time and would likely be a factor in the speed of extraction even if there were an index.

Edit: You might also want to take a look at xar. It has an XML header that contains information about the files in the archive.

From the referenced page:

Xar's XML header allows it to contain arbitrary metadata about files contained within the archive. In addition to the standard unix file metadata such as the size of the file and its modification and creation times, xar can store information such as ext2fs and hfs file bits, unix flags, references to extended attributes, Mac OS X Finder information, Mac OS X resource forks, and hashes of the file data.
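For example (a sketch; xar's command-line flags mirror tar's, assuming the xar tool is installed):

$ xar -cf archive.xar bigdir                        # create an archive with an XML table of contents
$ xar -tf archive.xar                               # list by reading only the header
$ xar -xf archive.xar bigdir/myFileOfInterest.bz2   # extract a single file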

ricmarques

The only archive format I know of that stores an index is ZIP, because I've had to reconstruct corrupted indexes more than once.
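For example, the central directory at the end of a ZIP file is what makes listing and single-file extraction cheap (a sketch, assuming Info-ZIP's zip/unzip):

$ zip -r archive.zip bigdir                        # the central directory is written at the end
$ unzip -l archive.zip                             # read only that index
$ unzip archive.zip bigdir/myFileOfInterest.bz2    # seek straight to the one member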

womble

Thorbjørn Ravn Andersen is right. GNU tar creates "seekable" archives by default, but it does not use that information when reading them unless the -n option is given. With the -n option, I just extracted a 7 GB file from a 300 GB archive in the time required to read/write 7 GB; without -n, it took more than an hour and produced no result.

I'm not sure how compression affects this. My archive was not compressed. Compressed archives are not "seekable" because the current (1.26) GNU tar offloads compression to an external program.
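For illustration, the difference is just the one flag (a sketch, assuming GNU tar; file names are made up):

$ time tar -xf huge.tar path/inside/myFileOfInterest.bz2       # reads through the whole archive
$ time tar -n -xf huge.tar huge/myFileOfInterest.bz2           # seeks past entries it doesn't need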


There's also pixz. It compresses an uncompressed tar archive with xz and adds an index at the end. To any program that doesn't know about pixz, the resulting file is just a .tar.xz file, so many programs can read it; with the pixz tool, however, it is possible to list or extract files or directories without decompressing the entire archive.
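For example (a sketch based on the pixz README; file names are illustrative):

$ pixz foo.tar                              # produces an indexed foo.tpxz (still a valid .tar.xz)
$ pixz -l foo.tpxz                          # list contents using the index at the end
$ pixz -x path/to/file < foo.tpxz | tar x   # decompress only the blocks holding that file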

JanKanis

You could try ratarmount to mount the tar archive. Link: https://github.com/mxmlnkn/ratarmount

From the README:

Ratarmount collects all file positions inside a TAR so that it can easily jump to and read from any file without extracting it. It, then, mounts the TAR using fusepy for read access just like archivemount. In contrast to libarchive, on which archivemount is based, random access and true seeking is supported. And in contrast to tarindexer, which also collects file positions for random access, ratarmount offers easy access via FUSE and support for compressed TARs.

Capabilities:

  • Highly Parallelized: By default, all cores are used for parallelized algorithms like for the gzip, bzip2, and xz decoders. This can yield huge speedups on most modern processors but requires more main memory. It can be controlled or completely turned off using the -P option.
  • Recursive Mounting: Ratarmount will also mount TARs inside TARs inside TARs, ... recursively into folders of the same name, which is useful for the 1.31TB ImageNet data set.
  • Mount Compressed Files: You may also mount files with one of the supported compression schemes. Even if these files do not contain a TAR, you can leverage ratarmount's true seeking capabilities when opening the mounted uncompressed view of such a file.
  • Read-Only Bind Mounting: Folders may be mounted read-only to other folders for usecases like merging a backup TAR with newer versions of those files residing in a normal folder.
  • Union Mounting: Multiple TARs, compressed files, and bind mounted folders can be mounted under the same mountpoint.
  • Write Overlay: A folder can be specified as write overlay. All changes below the mountpoint will be redirected to this folder and deletions are tracked so that all changes can be applied back to the archive.
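A minimal session might look like this (a sketch, assuming installation via pip; the first mount builds and caches the index):

$ pip install ratarmount
$ ratarmount big.tar mounted/           # the first run scans the TAR and saves an index
$ ls mounted/                           # browse the archive like a directory
$ cp mounted/myFileOfInterest.bz2 .     # reading one file no longer scans the whole TAR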

It doesn't store an index that I know of, but I use dump & restore with large files, and navigating the restore tree in interactive mode to select individual files is VERY fast.
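For example (a sketch, assuming an ext2/3/4 filesystem and the dump package; device and path names are illustrative):

$ dump -0f /backup/home.dump /home      # level-0 dump of the filesystem
$ restore -if /backup/home.dump         # interactive mode: ls/cd/add to pick files
restore > cd user
restore > add myFileOfInterest.bz2
restore > extract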


You can use the 7z (7zip) archive/compression format if you have access to the p7zip-full package.

On Ubuntu you can use this command to install it:

$ sudo apt-get install p7zip-full

To create an archive, you can use 7z a <archive_name> <file_or_directory>; if you do not want to compress the files and just want to "store" them as-is, you can use the -mx0 option, like:

$ 7z a -mx0 myarchive.7z myfile.txt

Creating archive myarchive.7z

You can then extract the files using 7z e:

$ 7z e myarchive.7z

Processing archive: myarchive.7z
Extracting  myfile.txt

Or you can list the index of the archive with 7z l, which is handy for searching with grep:

$ 7z l myarchive.7z | grep myfile

2014-07-08 12:13:39 ....A            0            0  myfile.txt

There is also a t option to test integrity, u to add/update a file in the archive, and d to delete a file.

IMPORTANT NOTE
Do not use the 7zip format for Linux filesystem backups, as it does not store the owner and group of the files it contains.


I believe GNU tar is capable of doing what you want, but I cannot locate a definitive resource saying so.

In any case, you need an archive format with an index, since that is what allows you to do what you want. I do not believe ZIP files can grow that big, unfortunately.