6

The GPT (GUID Partition Table) is the most widely accepted modern standard for partitioning a data storage device. Its unit for the offset and size of a partition is the so-called sector*. Is that a well-defined unit? No, it isn’t. First, one can switch a storage device between the “512-byte logical sectors” mode and 4096-byte sectors and back using ATA or NVMe commands [2]. Second, the same device can be connected to its host via various interfaces, possibly performing some kind of native-to-logical sector translation. Not to mention yet another scenario: a byte-for-byte backup of a 4096- or 2048-byte-sector medium to, say, a 512-byte-sector HDD.

But what do we see in the GPT header [3]? Signature, header size, current and backup LBAs, first and last usable LBAs for partitions, disk GUID, starting LBA of the partition entries, number of partition entries, size of a partition entry, CRC32… everything except the sector size, which is critical for interpreting the partition data that follows.
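To make the omission concrete, here is a minimal sketch of decoding that fixed header (field offsets follow the published GPT layout; the function name and returned dictionary are my own choices, and real code would need more validation):

```python
import struct
import zlib

def parse_gpt_header(raw: bytes) -> dict:
    """Decode the fixed 92-byte GPT header (little-endian fields)."""
    sig, revision, header_size, header_crc = struct.unpack_from("<8sIII", raw, 0)
    my_lba, backup_lba, first_usable, last_usable = struct.unpack_from("<4Q", raw, 24)
    disk_guid = raw[56:72]
    entries_lba, num_entries, entry_size, entries_crc = struct.unpack_from("<QIII", raw, 72)
    if sig != b"EFI PART":
        raise ValueError("not a GPT header")
    # The CRC32 covers the header with its own CRC field (offset 16) zeroed.
    if zlib.crc32(raw[:16] + b"\x00" * 4 + raw[20:header_size]) != header_crc:
        raise ValueError("header CRC32 mismatch")
    return {
        "my_lba": my_lba, "backup_lba": backup_lba,
        "first_usable": first_usable, "last_usable": last_usable,
        "disk_guid": disk_guid, "entries_lba": entries_lba,
        "num_entries": num_entries, "entry_size": entry_size,
    }
```

Every size and position in the result is counted in sectors, yet nothing in the structure says how many bytes a sector is.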

The GPT record [4] at LBA 0 (historically the master boot sector) does not contain it either, although there is enough room for any information the developers could have deemed helpful.

Yes, a skilled programmer can observe that the “sector size” can be inferred as the actual byte offset of a header divided by its “Current LBA” value. Is that enough for sane error checking? Unlikely, because several “GPT headers” can be present on the same medium, such as one at 0x200 assuming 512-byte sectors and another at 0x1000 assuming 4096-byte sectors, both claiming to reside at LBA 1. I can’t invent a plausible story of how such a disposition would lead to trouble just now, but I see no reason why the possibility should be discounted.

Do you see any rationale behind the omission of the sector size from all of the GUID Partition Table structures? Especially from a structure at a certain (fixed) offset on the medium, because specifying it only in the GPT header (which has a variable offset, counted in octets) has no merit, as explained above. A possible rationale would be a reliable and relatively simple algorithm (to prevent confusion) usable by disk partitioning tools and OS kernels. Or should we see it as a stupid gaffe by the developers from c. 2000? Economy obviously couldn’t have been a concern.
To those who deem that 512-byte sectors were universal during the (U)EFI development period: no, they weren’t. Random-access media with 2048-byte sectors had already been developed [5] and were commercially available no later than 2001 [6], and not only CD-ROMs. One didn’t have to be a genius to foresee the various interoperability problems.


Special clarification:
* The physical layout of the stored data is irrelevant to the essence of this question. We won’t look at anything beyond the interfaces used (by UEFI or OSes) to access the data.

Doc Brown
  • 218,378
Incnis Mrsi
  • 169
  • 5

2 Answers

4

A software layer which can read and write the GPT has to use low-level ATA/ATAPI commands anyway. Hence, it can simply ask the device for its logical sector size using the ATA "IDENTIFY DEVICE" command. There is simply no need to store the sector size redundantly on a higher logical layer.

Changing the logical sector size of a storage device afterwards will usually invalidate the whole data layout as well as the GPT itself. This is an operation which has to be done once before using a device, and then never again - until the whole device gets erased completely and prepared for usage in a different environment. Hence a user of the GPT can safely assume the sector size reported by the device to be constant during the lifetime of the GPT.

Of course, Intel (who developed the GPT standard) could have decided to store the size redundantly at the GPT level, and today we can only speculate about why they designed it the way they did. My best guess is that they wanted a strict separation of responsibilities between layers: the sector size belongs to the device layer, the partition layout (measured in numbers of sectors) to the GPT layer. Storing the sector size in the GPT would break the idea of having a single source of truth for this crucial piece of information.

Doc Brown
  • 218,378
-2

Your drive is an array of sectors, numbered from 0 to (end of drive - 1). The sector size may be 512 or 4096 bytes; we don’t know. How your hardware accesses the data is irrelevant.

Sector 0 is cleverly designed so that old master-boot-record software keeps its dirty fingers away from the drive and doesn’t touch or damage it. GPT software ignores it.

Sector 1 contains the partition table header. Except you don’t know the sector size, so you don’t know where it starts. You may assume it starts at byte 512 or 4096 (or maybe 1024, 2048 or 8192?). Assuming that the fake master boot record in sector 0 does not contain the signature “EFI PART”, you read from offset 512 and 4096 (and possibly other small powers of two) until you find “EFI PART” with current LBA = 1, and the read offset is your sector size.
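That probing loop can be sketched over an in-memory disk image (the candidate sizes and function name are my assumptions; production code should also verify the header CRC32 before trusting a match):

```python
def infer_sector_size(image: bytes, candidates=(512, 1024, 2048, 4096, 8192)):
    """Probe plausible sector sizes: a valid GPT header must begin at LBA 1,
    i.e. at a byte offset equal to the sector size, and its "Current LBA"
    field (a little-endian u64 at offset 24 in the header) must equal 1."""
    for size in candidates:
        header = image[size:size + 92]
        if header[:8] == b"EFI PART" and \
                int.from_bytes(header[24:32], "little") == 1:
            return size
    return None  # no self-consistent GPT header found
```

Note that this heuristic returns the first match, so it silently picks the smallest sector size when a medium carries several self-consistent “GPT headers”, exactly the ambiguity the question describes.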

Storing the sector size outside sector 0 would not make a difference except for giving you some extra redundancy. I would have stored it; I love redundancy :-)

gnasher729
  • 49,096