
We have a set of directories containing Lucene indexes. Each index is a mix of different file types (differentiated by extension), e.g.:

0/index/_2z6.frq
0/index/_2z6.fnm
..
1/index/_1sq.frq
1/index/_1sq.fnm
..

(there are about 10 different extensions)

We'd like to get a total by file extension, e.g.:

.frq     21234
.fnm     34757
..

I've tried various combinations of du/awk/xargs but I'm finding it tricky to do exactly this.


9 Answers


For any given extension you can use

find /path -name '*.frq' -exec ls -l {} \; | awk '{ Total += $5} END { print Total }'

to get the total file size for that type.
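If you have GNU find, a variant that avoids parsing ls output altogether lets find print the sizes itself (a minimal sketch of the same idea):

find /path -name '*.frq' -printf '%s\n' | awk '{ total += $1 } END { print total }'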

And after some thought:

#!/bin/bash

# Collect the unique extensions present under the current directory
ftypes=$(find . -type f | grep -E ".*\.[a-zA-Z0-9]*$" | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq)

# For each extension, sum the sizes (ls column 5) of all matching files
for ft in $ftypes
do
    echo -n "$ft "
    find . -name "*${ft}" -exec ls -l {} \; | awk '{total += $5} END {print total}'
done

This will output the total size in bytes of each file type found.
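If you would rather have human-readable sizes and GNU coreutils numfmt is available, you can pipe the loop's output through it (a sketch; field 2 is the byte total):

for ft in $ftypes
do
    echo -n "$ft "
    find . -name "*${ft}" -exec ls -l {} \; | awk '{total += $5} END {print total}'
done | numfmt --field=2 --to=iec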


With bash version 4 you only need to call find; ls and awk are not necessary:

declare -A ary

# Read "name<TAB>size" pairs from find and accumulate sizes per extension
while IFS=$'\t' read -r name size; do
  ext=${name##*.}
  ((ary[$ext] += size))
done < <(find . -type f -printf "%f\t%s\n")

for key in "${!ary[@]}"; do
  printf "%s\t%s\n" "$key" "${ary[$key]}"
done
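To get the report ordered by size, a small variation on the final loop (assuming the same ary array) pipes it through sort:

for key in "${!ary[@]}"; do
  printf "%s\t%s\n" "$key" "${ary[$key]}"
done | sort -k2 -n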

The second column is split on "." and the last part (the extension) is used as the key of the size array.

#!/bin/bash

find . -type f -printf "%s\t%f\n" | awk '
{
 # split the name on "." and take the last field as the extension
 split($2, ext, ".")
 e = ext[length(ext)]
 size[e] += $1
}

END{
 for(i in size)
   print size[i], i
}' | sort -n

This gives you the total size in bytes for every extension:

60055 gemspec
321991 txt
2075312 html
2745143 rb
13387264 gem
47196526 jar
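One caveat with this approach: a file name without any dot ends up keyed by its whole name. A guarded variant (my sketch, with a hypothetical "noext" bucket) checks how many pieces split produced:

find . -type f -printf "%s\t%f\n" | awk '
{
 # n is the number of pieces; 1 means there was no dot at all
 n = split($2, ext, ".")
 e = (n > 1) ? ext[n] : "noext"
 size[e] += $1
}

END{
 for(i in size)
   print size[i], i
}' | sort -n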

Extending Iain's script, here is a faster version for working with a large number of files.

#!/bin/bash

ftypes=$(find . -type f | grep -E ".*\.[a-zA-Z0-9]*$" | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq)

for ft in $ftypes
do
    echo -ne "$ft\t"
    find . -name "*${ft}" -exec du -bcsh '{}' + | tail -1 | sed 's/\stotal//'
done
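One caveat: with -exec ... +, find may invoke du several times when the argument list is long, in which case tail -1 only captures the last invocation's subtotal. A variant that stays correct in that case (assuming GNU du for -b) sums du's per-file byte counts instead:

for ft in $ftypes
do
    echo -ne "$ft\t"
    find . -name "*${ft}" -exec du -b '{}' + | awk '{total += $1} END {print total}'
done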

For macOS:

#!/bin/bash

for ft in $(find "$1/" -type f | { export GREP_OPTIONS="--color=never" && grep -E ".*\.[a-zA-Z0-9]*$"; } | sed -E 's/.*(\.[^\.]*)$/\1/' | sort | uniq)
do
    find "$1/" -name "*$ft" -exec stat -f%z {} \; | awk '{total += $1} END {printf "%s\t",total}'
    echo " $ft"
done | sort -hr
$ bash temp.sh assets
1622995  .monstertype
1279175  .frames
756855   .npctype
706087   .projectile
573611   .head
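On a large tree, -exec ... \; forks stat once per file. Since BSD stat accepts multiple arguments and prints one size per line, batching with + should be noticeably faster (a minimal sketch of the same loop body):

find "$1/" -name "*$ft" -exec stat -f%z {} + | awk '{total += $1} END {printf "%s\t",total}'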

This gives the number of files for each extension (a count, not a size total):

find . -type f | grep -E ".*\.[a-zA-Z0-9]*$" | sed -e 's/.*\(\.[a-zA-Z0-9]*\)$/\1/' | sort | uniq -c | sort -n

Solution originally posted here: Get all extensions and their respective file count in a directory


I solved it using these two commands:

# Collect matching files into an array (note: this breaks on names with whitespace)
FILES=($(find . -name '*.c'))
stat -c %s "${FILES[@]}" | awk '{ sum += $1 } END { print ".c" " " sum }'

My version of an answer to the question:

#!/bin/bash

date > get_size.log
# List all files
find . -type f -printf "%s\t%f\n" | grep -E ".*\.[a-zA-Z0-9]*$" | sort -h | awk '
{
        split($2, ext, ".")
        e = ext[length(ext)]
        # Check that an extension could be found
        if(length(e) < length($2)) {
                # Check that the file size is bigger than 0
                if($1 > 0) {
                        # Check that the extension is not just an integer
                        if(!(e ~ /^[0-9]+$/)) {
                                size[e] += $1
                        }
                }
        }
        # Otherwise file the size under "blandat" (Swedish for "mixed")
        if(length(e) == length($2)) {
                size["blandat"] += $1
        }
}

END{
 for(i in size)
   print size[i], i
}' | sort -n >> get_size.log
echo
echo
echo "The results are in the file get_size.log"

Try Crab (http://etia.co.uk/) - it's a command-line utility that allows you to query the filesystem using SQL.