I have a 4 TB disk with one xfs partition (sda1). I wanted to copy most of the data on it (2.8 TB of the 3.6 TB used) to a new disk (sdc1). First I prepared sdc the same way as sda, roughly as follows (the exact flags are illustrative):
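
  # approximate reconstruction of the preparation steps; exact flags may have differed
  parted /dev/sdc mklabel gpt
  parted -a optimal /dev/sdc mkpart primary xfs 0% 100%
  mkfs.xfs /dev/sdc1

After that, parted -l showed both disks laid out identically: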

parted -l

  Model: ATA WDC WD40EZRX-00S (scsi)
  Disk /dev/sda: 4001GB
  Sector size (logical/physical): 512B/4096B
  Partition Table: gpt
  Disk Flags: 

  Number  Start   End     Size    File system  Name     Flags
   1      1049kB  4001GB  4001GB  xfs          primary

  ...

  Model: ATA ST4000DM000-1F21 (scsi)
  Disk /dev/sdc: 4001GB
  Sector size (logical/physical): 512B/4096B
  Partition Table: gpt
  Disk Flags: 

  Number  Start   End     Size    File system  Name     Flags
   1      1049kB  4001GB  4001GB  xfs          primary

Then I used rsync to copy the 2.8 TB from sda1 to sdc1, with an invocation along these lines (placeholder paths; the exact directories don't matter for the problem):
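
  # illustrative only: "some/dirs" stands in for the directories I actually copied
  rsync -aH /home/alexis/OTHER/some/dirs/ /home/alexis/STORE/

Partway through the copy, sdc1 ran out of space: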

df -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdc1       3.7T  3.7T   20K 100% /home/alexis/STORE
  /dev/sda1       3.7T  3.6T   52G  99% /home/alexis/OTHER                   

What is happening? Below is some output I collected. Bear in mind that I'm including this data because I'm only guessing at what's relevant; I don't actually know what it means (and I would like to!). For instance, I noticed a difference in sectsz, yet nothing differs in the parted -l output. What does that mean? I also noticed the difference in the number of inodes. Why?

Thanks a lot!

df -i
  Filesystem        Inodes   IUsed     IFree IUse% Mounted on
  /dev/sdc1         270480  270328       152  100% /home/alexis/STORE
  /dev/sda1      215387968  400253 214987715    1% /home/alexis/OTHER


xfs_info STORE
  meta-data=/dev/sdc1              isize=256    agcount=4, agsize=244188544 blks
           =                       sectsz=4096  attr=2, projid32bit=1
           =                       crc=0        finobt=0
  data     =                       bsize=4096   blocks=976754176, imaxpct=5
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
  log      =internal               bsize=4096   blocks=476930, version=2
           =                       sectsz=4096  sunit=1 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0

xfs_info OTHER/
  meta-data=/dev/sda1              isize=256    agcount=4, agsize=244188544 blks
           =                       sectsz=512   attr=2, projid32bit=0
           =                       crc=0        finobt=0
  data     =                       bsize=4096   blocks=976754176, imaxpct=5
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
  log      =internal               bsize=4096   blocks=476930, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0

hdparm -I /dev/sdc | grep Physical
        Physical Sector size:                  4096 bytes
hdparm -I /dev/sda | grep Physical
        Physical Sector size:                  4096 bytes

EDIT

This is not a duplicate of Unable to create files on large XFS filesystem. I have two similar disks, I am out of both space and inodes, and I never increased the size of any partition.

To my other questions, I add this one: why do my two partitions have different numbers of inodes if I used the same procedure (parted, mkfs.xfs) to create both?

EDIT2

Here is the free-space breakdown for allocation group 0 on each disk:

xfs_db -r -c "freesp -s -a 0" /dev/sdc1
   from      to extents  blocks    pct
      1       1      20      20   2.28
      2       3      26      61   6.96
      4       7      31     167  19.06
      8      15      35     397  45.32
     16      31      12     231  26.37
total free extents 124
total free blocks 876
average free extent size 7.06452

xfs_db -r -c "freesp -s -a 0" /dev/sda1
   from      to extents  blocks    pct
      1       1      85      85   0.00
      2       3      68     176   0.01
      4       7     438    2487   0.10
      8      15     148    1418   0.06
     16      31      33     786   0.03
     32      63      91    4606   0.18
     64     127      94    9011   0.35
    128     255      16    3010   0.12
    256     511       9    3345   0.13
    512    1023      18   12344   0.49
   1024    2047      10   15526   0.61
   2048    4095      72  172969   6.81
   4096    8191      31  184089   7.25
   8192   16383      27  322182  12.68
  16384   32767      15  287112  11.30
 262144  524287       2  889586  35.02
 524288 1048575       1  631150  24.85
total free extents 1158
total free blocks 2539882
average free extent size 2193.34
1 Answer

You're out of inodes.

df -i
  Filesystem        Inodes   IUsed     IFree IUse% Mounted on
  /dev/sdc1         270480  270328       152  100% /home/alexis/STORE
  /dev/sda1      215387968  400253 214987715    1% /home/alexis/OTHER

The sdc1/STORE filesystem has 270,480 inodes, and you've used essentially all of them. That's why you're getting out-of-space errors.

Why does STORE have far fewer inodes than OTHER?

The only structural difference between the two is the sector size, which shouldn't matter since both volumes use a 4096-byte block size. The real issue is how XFS allocates inodes: it does so dynamically.

The answer is hidden in the question: Unable to create files on large XFS filesystem

The issue turns out to be in how XFS allocates inodes. Unlike most file systems, allocation happens dynamically as new files are created. However, unless you specify otherwise, inodes are limited to 32-bit values, which means that they must fit within the first terabyte of storage on the file system. So if you completely filled that first terabyte, and then enlarged the disk, you would still be unable to create new files, since the inodes can't be created on the new space.
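
Your EDIT2 output fits this picture: allocation group 0 on sdc1 (roughly the first terabyte, since agsize=244188544 blocks at bsize=4096) has only 876 free blocks left, so there is almost no room there to allocate new inode chunks. If your kernel supports it, mounting with the inode64 option lets XFS place inodes anywhere on the volume. A minimal sketch using your mount point (older kernels don't accept inode64 on a remount, hence the full umount/mount cycle):

  # check whether either filesystem is currently mounted with inode64
  grep -E 'sd[ac]1' /proc/mounts

  # allow inode allocation beyond the first terabyte on STORE
  umount /home/alexis/STORE
  mount -o inode64 /dev/sdc1 /home/alexis/STORE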

You may be better served using xfs_copy or xfsdump/xfsrestore to copy the data over, and then pruning out the data you didn't want copied.
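
A sketch of both approaches, using the device names and mount points from your question; note that xfs_copy clones the entire filesystem, destroying whatever is currently on the target:

  # option 1: block-level clone of the whole filesystem
  # (source should be unmounted or mounted read-only; target contents are lost)
  umount /home/alexis/STORE
  xfs_copy /dev/sda1 /dev/sdc1

  # option 2: stream a level-0 dump of OTHER straight into STORE
  xfsdump -l 0 - /home/alexis/OTHER | xfsrestore - /home/alexis/STORE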
