[Openstack-devel] Debstack instead of Devstack?

Daniel Pocock daniel at pocock.com.au
Mon Feb 11 09:14:38 UTC 2013


On 08/02/13 18:25, Thomas Goirand wrote:
>> From my initial impression of XCP: it appears to be more than just
>> tools, even the disk images are different. Can VMs still run from raw
>> LVs on the dom0?
>>     
> Yes, you would setup an LVM storage repository to do that.
>
>   

Just to clarify my question though: I meant, can a user keep the same
type of LV that they have now with the old Xen?

E.g.

vg00
  lv_hostA
          part1  (/boot)
          part2  (/)
          part3  (/var)
  lv_hostB
          part1  (/boot)
          part2  (/)
          part3  (/var)


where lv_hostA and lv_hostB each contain a partition table?

Or is it mandatory that a user convert each of those existing LVs
into an XVA file and import that into the SR?
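
(To be concrete, the kind of config I have in mind is the classic
whole-disk style, e.g.:

  disk = [ 'phy:vg00/lv_hostA,xvda,w' ]

where the guest then sees xvda1/xvda2/xvda3 as /boot, / and /var; the
names here are only examples.)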

>> I found a few mailing list posts about converting from xend/xm to XCP,
>> they mention things like converting the filesystems and also changing
>> device names to /dev/xvd[abcd] - can you make any Debian-specific
>> comments on this, particularly in the context of upgrades from lenny or
>> squeeze to wheezy?
>>     
> Well, you've been hit by the CentOS 5.x sickness of still running 2.6.18
> kernels so many years after it was deprecated.
>
> In fact, you should *not* run /dev/sd[abcd]. This has been deprecated
>   

I don't have any of those myself, but I have seen them in many examples
online, so I'm guessing there will be some people out there who did that.
> long ago. It seems to me that you are still running with Lenny, because
> with Squeeze, it doesn't work anymore. It should be renamed
> /dev/xvd[abcd]. This was a requirement from guys from kernel.org so that
> Xen could be accepted upstream. That's just a rename though, so it's
> quite easy to fix if you have some old setups. Just rename in the xen
> startup file, and in the /etc/fstab in your guests.
>
>   
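
(For reference, if I understand the rename correctly, it amounts to
something like this in the domU cfg:

  disk = [ 'phy:vg00/lv_hostA,sda,w' ]    # old style, guest sees /dev/sda
  disk = [ 'phy:vg00/lv_hostA,xvda,w' ]   # renamed, guest sees /dev/xvda

plus the matching sda -> xvda change in the guest's /etc/fstab. Again,
the names are just examples.)
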
One thing that is not clear from that explanation though: what about
people who map a domU partition directly to a dom0 LV (no partition
table in the underlying LV, and a partition number given in the cfg)?
Here are some examples of what I'm thinking about:

All filesystems are mapped to separate dom0 LVs:

disk = ['phy:vg00/hostA_boot,xvda1,w', 'phy:vg00/hostA_root,xvda2,w',
'phy:vg00/hostA_var,xvda3,w', 'phy:vg00/hostA_swap,xvda4,w']

Only one filesystem is mapped to a separate dom0 LV (the rest live
inside a whole-disk LV):

disk = ['phy:vg00/hostB_disk,xvda,w', 'phy:vg00/hostB_opt,xvdb1,w']

Can XCP-based domUs access those filesystems directly, or do they have
to be imported into XVA files somehow?

If direct access to those filesystems is not permitted by the XCP
paradigm, do you believe iSCSI is a valid workaround?
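
(If iSCSI is the way to go, I assume the dom0 side would be the
lvmoiscsi SR driver, i.e. something along the lines of:

  xe sr-create host-uuid=<dom0 uuid> content-type=user \
     name-label="legacy LVs" shared=true type=lvmoiscsi \
     device-config:target=<iscsi target IP> \
     device-config:targetIQN=<target IQN> \
     device-config:SCSIid=<scsi id>

with all the values being placeholders; I haven't tested that.)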

Personally, I don't mind changing and migrating these things if
necessary; I'd just like to add some comments to the README or wiki so
people understand the world of XCP.

> I have written about this in the Squeeze release notes. I'll let you
> search for it.
>
>   
>>> If you plan on using Cinder, make sure you do smart partitioning though.
>>> Cinder uses LVM, while compute and Glance will write images in /var,
>>> which needs to be big enough. So, with 4 disks, I'd go for LVM over
>>> RAID10, and have your /var on the LVM, so you can resize everything as
>>> you wish / as needed.
>>>       
>> I saw one comment suggesting that Cinder needs its own VG, is that
>> true?
>>     
> Can you give a link to such a comment?
>
>   

Here it refers to a dedicated VG named `cinder-volumes':
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html
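
(As far as I can tell the guide essentially just does, with the device
name being only an example:

  pvcreate /dev/sdb
  vgcreate cinder-volumes /dev/sdb

and Cinder then carves its volumes out of that VG.)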

One other storage-related issue that comes to mind: how should RAID be
managed for people who want checksums?  Newer filesystems (e.g. btrfs)
need to see the individual devices in order to combine checksums with
redundancy, so that a corrupted block can be repaired from the other
copy.  In that case, should each physical disk be set up as a separate
SR, with each VM then given some space on each of the SRs?

My own use cases involve servers that can either do RAID1 for me, or
give JBOD access to the disks and let btrfs do the RAID.
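
(Inside a guest that would presumably boil down to something like:

  mkfs.btrfs -m raid1 -d raid1 /dev/xvdb /dev/xvdc

with each of those xvd devices backed by a different physical-disk SR.)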




