[parted-devel] [PATCH][RFC] Print maximum size of GPT

Brian C. Lane bcl at redhat.com
Fri Mar 13 17:48:13 GMT 2020


On Thu, Mar 12, 2020 at 04:08:21PM -0700, John Gilmore wrote:
> Is there some good reason why Parted has limits on the size of a GPT
> that it will manipulate?  Or is this just an artifact of an
> implementation that didn't dynamically allocate enough of its data
> structures?  Why didn't the patch seek to eliminate those limits, rather
> than merely reporting the limits?

The problem is that the space for the entries has to be reserved up
front, so I'm assuming the original authors thought that 128 partitions
was just fine (and it is; the issue that was reported wasn't actually a
GPT problem).

> The GNU Coding Standards say: "Avoid arbitrary limits on the length or
> number of any data structure, including file names, lines, files, and
> symbols, by allocating all data structures dynamically. In most Unix
> utilities, “long lines are silently truncated”. This is not acceptable
> in a GNU utility."  See:
> 
>   https://www.gnu.org/prep/standards/standards.html#Semantics
> 
> The GPT is defined by the UEFI specification, here:
> 
>   https://uefi.org/specifications
>   https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8_A_Feb14.pdf
> 
> That spec says that the *minimum* byte count of the Partition Entry Array
> is 16,384 bytes, and the size of each entry is a minimum of 128 bytes.
> There are no maximums specified.
> 
> I looked at the GPT code in parted-3.3.  The code for reading GPTs is
> quite general, though I noticed a bug in
> gpt_get_max_supported_partition_count in which it assumes a sector size
> of 512, by setting FirstUsableLBA to "34" when the header isn't valid.
> But that's a static function that is never called??!  (And it bases
> its operations on an ancient Apple document rather than on the UEFI
> spec, calls the zeroth PTE "1", and in general seems carelessly written.)

Actually it is called: some of the disklabel 'ops' are set up by a
macro, PT_op_function_initializers, which can make functions look like
they are unused.
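
For anyone else grepping around in there, here's a toy, self-contained
sketch of the pattern (the names follow this thread, but it is not the
real libparted definition, which differs in detail). The token pasting
is why a plain grep for callers of
gpt_get_max_supported_partition_count comes up empty:

    #include <stdio.h>

    /* Hypothetical, cut-down stand-in for the real ops struct. */
    typedef struct {
            int (*get_max_supported_partition_count) (void);
    } DiskOps;

    static int
    gpt_get_max_supported_partition_count (void)
    {
            return 128;
    }

    /* The macro expands to a designated initializer that pastes the
     * label name onto the function name, so the static function is
     * only ever referenced through the expansion. */
    #define PT_op_function_initializers(label) \
            .get_max_supported_partition_count = \
                    label ## _get_max_supported_partition_count

    static DiskOps gpt_ops = {
            PT_op_function_initializers (gpt)
    };

    int
    main (void)
    {
            printf ("%d\n", gpt_ops.get_max_supported_partition_count ());
            return 0;
    }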

Looks like that function needs to be updated for larger block sizes, but
in practice I don't think that code will ever get hit, since it's inside
a check for an invalid GPT header.
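
If someone does want to fix it up, the fallback only needs to derive the
value from the sector size instead of hard-coding 34. A rough,
standalone sketch (not parted's code; it assumes the default layout of
protective MBR at LBA 0, header at LBA 1, and the minimum 16,384-byte
entry array starting at LBA 2):

    #include <stdint.h>
    #include <stdio.h>

    #define DEFAULT_PARTITION_ENTRIES    128
    #define DEFAULT_PARTITION_ENTRY_SIZE 128   /* bytes, the UEFI minimum */

    static uint64_t
    default_first_usable_lba (uint64_t sector_size)
    {
            uint64_t array_bytes = (uint64_t) DEFAULT_PARTITION_ENTRIES
                                   * DEFAULT_PARTITION_ENTRY_SIZE;
            /* Round the entry array up to whole sectors. */
            uint64_t array_sectors =
                    (array_bytes + sector_size - 1) / sector_size;

            /* LBA 0: protective MBR, LBA 1: GPT header, then the array. */
            return 2 + array_sectors;
    }

    int
    main (void)
    {
            /* Prints 34 for 512-byte sectors and 6 for 4096-byte sectors. */
            printf ("%u\n", (unsigned) default_first_usable_lba (512));
            printf ("%u\n", (unsigned) default_first_usable_lba (4096));
            return 0;
    }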

> But when writing or creating GPTs, Parted appears to assume that the
> "default" size of a GPT (what the spec calls a "minimum") is the ONLY
> possible size of a GPT that it can create.  It also assumes that each
> GPT entry can only be the default of 128 bytes long.  E.g. in
> _generate_header it uses sizeof (GuidPartitionEntry_t) in several
> places, ditto in gpt_write, where it also assumes that the Partition
> Entry Array starts in literal sector "2" rather than using the value
> from the Partition Table Header.  When processing a GPT made on some
> other system with larger entry sizes, it looks like this code would
> truncate all the entries to 128 bytes each.  In fact, the internal
> _GptDiskData structure doesn't even have a field to store the partition
> table entry size!

The 128 is the length of each entry, not of the table; the table is 128
entries * 128 bytes = 16,384 bytes, which is where the spec's minimum
comes from.

Thanks for pointing out these issues. Maybe someone would like to tackle
improving things and including tests for unusual cases?

Brian

-- 
Brian C. Lane (PST8PDT) - weldr.io - lorax - parted - pykickstart
