[parted-devel] [PATCH][RFC] Print maximum size of GPT

John Gilmore gnu at toad.com
Thu Mar 12 23:08:21 GMT 2020


Is there some good reason why Parted has limits on the size of a GPT
that it will manipulate?  Or is this just an artifact of an
implementation that didn't dynamically allocate enough of its data
structures?  Why didn't the patch seek to eliminate those limits,
rather than merely report them?

The GNU Coding Standards say: "Avoid arbitrary limits on the length or
number of any data structure, including file names, lines, files, and
symbols, by allocating all data structures dynamically. In most Unix
utilities, “long lines are silently truncated”. This is not acceptable
in a GNU utility."  See:

  https://www.gnu.org/prep/standards/standards.html#Semantics

The GPT is defined by the UEFI specification, here:

  https://uefi.org/specifications
  https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8_A_Feb14.pdf

That spec says that a *minimum* of 16,384 bytes must be reserved for
the Partition Entry Array, and that each entry is at least 128 bytes.
No maximums are specified.
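
For reference, these are the header fields that govern the array's
size and placement.  The field names follow the spec; the excerpted
struct itself is just my sketch, not Parted's:

  #include <stdint.h>

  /* Excerpt of the GPT Partition Table Header per the UEFI spec.
     Field names are the spec's; the struct is a sketch, not Parted's. */
  struct gpt_header_excerpt {
    uint64_t FirstUsableLBA;            /* first LBA a partition may use  */
    uint64_t LastUsableLBA;             /* last LBA a partition may use   */
    uint64_t PartitionEntryLBA;         /* start of Partition Entry Array */
    uint32_t NumberOfPartitionEntries;  /* entries the array holds        */
    uint32_t SizeOfPartitionEntry;      /* bytes per entry: 128 * 2^n     */
    uint32_t PartitionEntryArrayCRC32;  /* CRC32 over the whole array     */
  };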

I looked at the GPT code in parted-3.3.  The code for reading GPTs is
quite general, though I noticed a bug in
gpt_get_max_supported_partition_count: it assumes a sector size of 512
by falling back to a FirstUsableLBA of "34" when the header isn't
valid.  But that's a static function that is never called??!  (It also
bases its operations on an ancient Apple document rather than on the
UEFI spec, calls the zeroth PTE "1", and in general seems carelessly
written.)
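
For comparison, the sector-size-independent value is a one-line
computation.  Something like this (my sketch, not the existing
function):

  /* First usable LBA for a minimum-size GPT: LBA 0 holds the
     protective MBR, LBA 1 the header, and the 16,384-byte Partition
     Entry Array follows.  A hard-coded 34 is only correct for
     512-byte sectors. */
  static uint64_t
  min_first_usable_lba (uint64_t sector_size)
  {
    uint64_t array_sectors = (16384 + sector_size - 1) / sector_size;
    return 2 + array_sectors;   /* 34 with 512B sectors, 6 with 4096B */
  }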

But when writing or creating GPTs, Parted appears to assume that the
"default" size of a GPT (what the spec calls a *minimum*) is the ONLY
size of GPT it can create.  It also assumes that each GPT entry can
only be the default 128 bytes long.  For example, _generate_header
uses sizeof (GuidPartitionEntry_t) in several places; so does
gpt_write, which additionally assumes that the Partition Entry Array
starts at literal sector "2" rather than using the PartitionEntryLBA
value from the Partition Table Header.  When processing a GPT made on
some other system with larger entry sizes, it looks like this code
would truncate every entry to 128 bytes.  In fact, the internal
_GptDiskData structure doesn't even have a field to store the
partition table entry size!
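
Honoring larger entries on read looks mechanical: stride through the
array using the header's SizeOfPartitionEntry instead of the
compiled-in struct size.  Roughly (a sketch reusing the
gpt_header_excerpt above; the function itself is hypothetical):

  #include <stdint.h>
  #include <string.h>

  /* Walk a raw Partition Entry Array using the stride from the header
     rather than sizeof (GuidPartitionEntry_t).  "buf" holds the array
     as read from disk starting at PartitionEntryLBA. */
  static void
  walk_pte_array (const struct gpt_header_excerpt *hdr, const uint8_t *buf)
  {
    for (uint32_t i = 0; i < hdr->NumberOfPartitionEntries; i++)
      {
        const uint8_t *raw = buf + (size_t) i * hdr->SizeOfPartitionEntry;
        unsigned char entry[128];

        /* The spec defines only the first 128 bytes of an entry; the
           remainder must be preserved verbatim for write-back. */
        memcpy (entry, raw, sizeof entry);
        /* ... hand "entry" to the existing per-entry logic ... */
      }
  }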

There seems to be no defined way to pass an argument to "mklabel gpt"
that would let the user set the number of partition entries or the
size of each entry.

Besides the option to create a new GPT with space for more partitions,
it should be possible for Parted to dynamically expand an existing
one.  If a GPT is full and the user asks to create another partition,
there is no problem in principle with growing the on-disk Partition
Entry Array, provided no existing partition overlaps the enlarged
array or its backup.  (Common practice aligns partitions on megabyte
boundaries, which often leaves plenty of room.)  Parted would have to
adjust the First Usable LBA and the Last Usable LBA, change the number
of PTEs in the header, and zero out the sectors (and backup sectors)
it has just added to the Partition Entry Array.
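
The bookkeeping amounts to something like the following (again a
sketch reusing gpt_header_excerpt; every name that isn't a spec field
is mine):

  /* Grow the primary Partition Entry Array in place.  Returns -1 if
     the first partition would overlap the enlarged array.  The backup
     array at the end of the disk needs the symmetric check against
     LastUsableLBA, and the caller must zero the newly claimed sectors
     (primary and backup) and recompute both CRCs. */
  static int
  grow_pte_array (struct gpt_header_excerpt *hdr, uint32_t new_count,
                  uint64_t sector_size, uint64_t first_partition_lba)
  {
    uint64_t bytes = (uint64_t) new_count * hdr->SizeOfPartitionEntry;
    uint64_t sectors = (bytes + sector_size - 1) / sector_size;
    uint64_t new_first_usable = hdr->PartitionEntryLBA + sectors;

    if (new_first_usable > first_partition_lba)
      return -1;                /* an existing partition is in the way */

    hdr->NumberOfPartitionEntries = new_count;
    hdr->FirstUsableLBA = new_first_usable;
    return 0;
  }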

Parted should be able to make or process a table of any size.  It
doesn't look like too hard a change.

	John
	


