[parted-devel] Trying to read ZFS-labelled disks with Parted
Matthew S. Harris
mharris312 at gmail.com
Tue Apr 17 04:48:04 UTC 2007
Hello,
I used the Parted bug reporting system to report this last week, but
because "View Tickets" consistently gives me a "Trac detected an
internal error" message, I strongly suspect my report was never
received.
I've been trying to use Parted 1.8.6 to read an EFI-labelled disk
created by ZFS on OpenSolaris, and I've run into a few difficulties.
I'd like to report what I have figured out:
GPT header parsing differences: It seems that Sun has interpreted the
HeaderSize field [3] differently than others have. Parted treats the
HeaderSize field as the length of all the defined fields (Microsoft's
documentation [1] even goes so far as to say that HeaderSize is always
92), but Sun takes HeaderSize to be the total size of the header
struct, including the reserved padding at the end [2], making the size
512. In fact, the sole purpose of the HeaderSize field appears to be
to determine how many bytes the header CRC covers. Therefore I suggest
the following changes in libparted/labels/gpt.c:
pth_crc32: The range of bytes covered by the CRC should be taken from
the HeaderSize field. Change

    crc32 = efi_crc32 (pth_raw, pth_get_size_static (dev));

to

    crc32 = efi_crc32 (pth_raw, PED_LE32_TO_CPU (pth->HeaderSize));
_header_is_valid: Treat HeaderSize as invalid not when it is greater
than 92, but when it is less than 92 or greater than the block size
(see the sketch after this list).
_parse_header: Don't print an error complaining about the Revision
field just because the HeaderSize value is unexpected. Verify the
HeaderSize value as described above, and complain about the Revision
only if the Revision itself is wrong.
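To make the _header_is_valid change concrete, here's a rough sketch of
the check I have in mind. It reuses the names already in gpt.c
(GuidPartitionTableHeader_t, PED_LE32_TO_CPU, dev->sector_size), so
treat it as illustrative rather than a tested patch:

    /* Sketch only: a HeaderSize is acceptable if it covers at least
     * the 92 defined bytes and fits within one block.  Microsoft and
     * Parted expect exactly 92; Sun writes the full block size (512). */
    static int
    _header_size_is_valid (const PedDevice *dev,
                           const GuidPartitionTableHeader_t *gpt)
    {
            uint32_t size = PED_LE32_TO_CPU (gpt->HeaderSize);

            return size >= 92 && size <= dev->sector_size;
    }

With that in place, pth_crc32 can CRC exactly HeaderSize bytes as in
the first change above, and both the 92-byte and 512-byte variants
should pass verification.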
Now, here's what I've sort of figured out: when I run Parted with
these changes and issue the "print" command, I get this warning:
Warning: Not all of the space available to /dev/sdb appears to be used,
you can fix the GPT to use all of the space (an extra 5040 blocks) or
continue with the current setting?
Fix/Ignore?
Linux definitely sees the disk as exactly 5040 blocks longer than
OpenSolaris sees it. I have confirmed this several ways ("format -e"
and "dd skip=..." on OpenSolaris; Parted, hdparm, and dd on Linux),
and the discrepancy is identical on the two disks I've tried (400GB
SATA Seagate drives). The best explanation I've found is that
the disk has a 5040-block HPA [4] that Linux ignores [5].
Unfortunately, I haven't been able to prove it; each of the suggested
tools for detecting an HPA either finds no HPA or fails with an error
while checking. Still, I think this is the most likely explanation,
and if it's right, it's a tad dangerous for Parted's default behavior
to be moving the secondary GPT header.
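If it helps, here's a small standalone program (my own sketch, not
Parted code) that shows the size comparison directly: it reads the
primary GPT header from LBA 1, pulls out the AlternateLBA field (the
little-endian 8-byte value at offset 32, i.e. where the creating OS
put the backup header), and compares it with the device size the
kernel reports. It assumes 512-byte sectors and a little-endian host;
on these disks the difference should come out to exactly 5040
(5040 * 512 = 2,580,480 bytes, roughly 2.5 MB hidden at the end):

    #define _XOPEN_SOURCE 500       /* for pread */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>           /* BLKGETSIZE64 */

    int
    main (int argc, char **argv)
    {
            unsigned char hdr[512];
            uint64_t      bytes, alternate_lba, kernel_last_lba;
            int           fd;

            if (argc != 2) {
                    fprintf (stderr, "usage: %s /dev/sdX\n", argv[0]);
                    return 1;
            }
            fd = open (argv[1], O_RDONLY);
            if (fd < 0 || pread (fd, hdr, 512, 512) != 512) {  /* LBA 1 */
                    perror (argv[1]);
                    return 1;
            }
            if (ioctl (fd, BLKGETSIZE64, &bytes) != 0) {
                    perror ("BLKGETSIZE64");
                    return 1;
            }

            /* AlternateLBA: the last LBA as the creating OS saw it. */
            memcpy (&alternate_lba, hdr + 32, sizeof alternate_lba);
            kernel_last_lba = bytes / 512 - 1;

            printf ("AlternateLBA    = %llu\n"
                    "kernel last LBA = %llu\n"
                    "difference      = %lld blocks\n",
                    (unsigned long long) alternate_lba,
                    (unsigned long long) kernel_last_lba,
                    (long long) (kernel_last_lba - alternate_lba));
            close (fd);
            return 0;
    }

Compile with "gcc -o gptdiff gptdiff.c" and point it at the disk; a
difference of exactly 5040 would fit the HPA theory nicely.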
And finally, here's what I don't understand at all right now: after I
get past these hurdles, I get
Error: Can't have a partition outside the disk!
when _parse_part_entry (gpt.c:761) calls
ped_partition_new(fs_type=0x0, start=-1, end=-1) (disk.c:1080).
(Since PedSector is signed, start=-1 presumably means the StartingLBA
bytes came back as all 0xff.)
Comments? I can continue to run experiments if you'd like, or, if you
have a spare disk, you can do it yourself with the steps below. Sorry
for the long email. I hope this is helpful.
Matthew
----------
How to reproduce: If you have a spare disk, download the Nexenta alpha
6 install CD [6]. When it boots, press F2 to get a shell. Use
"format -e" to determine the OpenSolaris name for your disk, and then
create a ZFS pool on it (e.g., "zpool create foo c2d0"). Then type
"halt" to shut the system down, and boot back into Linux to run Parted.
References:
[1] http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/TechRef/bdeda920-1f08-4683-9ffb-7b4b50df0b5a.mspx#w2k3tr_basic_how_fgkm
[2] I determined this by adding debugging output to Parted, but I can
provide links to the corresponding OpenSolaris code if you'd like.
[3] Chapter 11 of Version 1.10 from
http://www.intel.com/technology/efi/agree.htm
[4] http://en.wikipedia.org/wiki/Host_Protected_Area
[5] http://www.uwsg.iu.edu/hypermail/linux/kernel/0505.2/1691.html
[6] http://distrowatch.com/?distribution=nexenta