[Openstack-devel] Debstack instead of Devstack?

Thomas Goirand thomas at goirand.fr
Mon Feb 11 12:11:07 UTC 2013


On 02/11/2013 05:14 PM, Daniel Pocock wrote:
> On 08/02/13 18:25, Thomas Goirand wrote:
>>> From my initial impression of XCP: it appears to be more than just
>>> tools; even the disk images are different. Can VMs still run from raw
>>> LVs on the dom0?
>>>     
>> Yes, you would set up an LVM storage repository to do that.
>>
>>   
> 
> Just to clarify my question though: I meant can a user have the same
> type of LV that they have now with the old Xen?
> 
> E.g.
> 
> vg00
>   lv_hostA
>           part1  (/boot)
>           part2  (/)
>           part3  (/var)
>   lv_hostB
>           part1  (/boot)
>           part2  (/)
>           part3  (/var)
> 
> 
> where lv_hostA and lv_hostB each contain a partition table?
> 
> Or is it mandatory that a user convert each of those existing LVs
> into an XVA file and import it into the SR?

I'm not sure. Please ask this question on the XAPI list:
xen-api at lists.xen.org

>> Well, you've been hit by the CentOS 5.x sickness of still running 2.6.18
>> kernels so many years after they were deprecated.
>>
>> In fact, you should *not* use /dev/sd[abcd]. That naming was deprecated
>> long ago. It seems to me that you are still running Lenny, because with
>> Squeeze, it doesn't work anymore. The devices should be renamed
>> /dev/xvd[abcd]. This was a requirement from the kernel.org people so that
>> Xen could be accepted upstream. That's just a rename though, so it's
>> quite easy to fix on old setups: rename the devices in the Xen domU
>> config file, and in /etc/fstab in your guests.
> One thing that is not clear from that explanation though - what about
> people who map a domU partition to a dom0 LV (without a partition table
> in the underlying LV, but with a partition number in the cfg)?  Here
> are some examples that I'm thinking about:
> 
> All filesystems are mapped to separate dom0 LVs:
> 
> disk = ['phy:vg00/hostA_boot,xvda1,w', 'phy:vg00/hostA_root,xvda2,w',
> 'phy:vg00/hostA_var,xvda3,w', 'phy:vg00/hostA_swap,xvda4,w']
> 
> Only one filesystem is mapped to a separate dom0 LV:
> 
> disk = ['phy:vg00/hostB_disk,xvda,w', 'phy:vg00/hostB_opt,xvdb1,w']
> 
> Can XCP based domUs access those filesystems directly?

I'm not sure whether, under XAPI, a domU can access these as partitions.
However, it makes very little difference. For example:

'phy:vg00/hostA_boot,xvda1,w', 'phy:vg00/hostA_root,xvda2,w',

can become:

'phy:vg00/hostA_boot,xvda,w', 'phy:vg00/hostA_root,xvdb,w',

and then you only need to change /etc/fstab in the guest. I'm quite
sure you can assign as many block devices as you like to a domU.
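
As an illustration (filesystem types and options below are just
placeholders, adjust them to your setup), a guest /etc/fstab that used
to contain:

/dev/xvda1  /boot  ext3  defaults            0  2
/dev/xvda2  /      ext3  errors=remount-ro   0  1

would then become:

/dev/xvda   /boot  ext3  defaults            0  2
/dev/xvdb   /      ext3  errors=remount-ro   0  1

since each LV now shows up in the guest as a whole disk rather than as
a partition.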

> Do they have to be imported into XVA files somehow?

That, I don't know. I'm not even sure how XCP stores things: I just
used an XCP ext storage repository, and let it do what it needed.
Please ask this on the XAPI list.
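
If it helps, I believe an ext SR can be created with something like the
below (the host UUID and device name are placeholders, and you should
double-check the exact syntax against the XCP documentation):

xe sr-create host-uuid=<host-uuid> type=ext content-type=user \
   shared=false name-label="Local ext SR" \
   device-config:device=/dev/sdb

You can get <host-uuid> from "xe host-list".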

> Personally, I don't mind changing and migrating these things if
> necessary; I'd just like to add some comments to the README or wiki so
> people understand the world of XCP.

Please do when you find out!

>>> I saw one comment suggesting that Cinder needs its own VG, is that
>>> true?
>>>     
>> Can you give a link to such a comment?
> 
> Here it refers to a dedicated VG named `cinder-volumes':
> http://docs.openstack.org/trunk/openstack-compute/install/apt/content/osfolubuntu-cinder.html

In which part of this documentation do you read that it absolutely has
to be a *dedicated* volume group? I don't see why that would be the
case. As far as I know, Cinder creates LVs with a specific naming
scheme, which is unlikely to conflict with what you use already.

See /etc/cinder/cinder.conf:

volume_name_template = volume-%s
volume_group = cinder-volumes

This means that Cinder volumes will be created in the VG called
"cinder-volumes", with names following the volume-%s template (i.e.
"volume-" followed by a Cinder-generated identifier, which is also used
for the iSCSI export).

So this is unlikely to conflict with using the same volume group as the
rest of your system.
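
For instance, if you wanted Cinder to share your existing VG (say it is
called vg00, to reuse your example), you would only need:

volume_group = vg00

in /etc/cinder/cinder.conf, and the resulting LVs would be named
volume-<uuid>, sitting next to your hostA_* / hostB_* LVs without
clashing.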

> One other storage-related issue that comes to mind: how to manage RAID
> for people who want checksums?  New filesystems (e.g. btrfs) need to see
> separate devices in order to provide checksum functionality.  In this
> case, should each physical disk be set up as a separate SR, and then
> each VM is given some space on each of the SRs?

You're mixing XCP and Openstack stuff here, I'm not sure I follow. As
far as I understand, it should be possible to use Cinder to provide
block devices to VMs even if they run on XCP, in which case XCP SRs are
not involved at all.

> My own use cases involve servers that can do RAID1 for me, or they can
> give JBOD access to the volumes and let btrfs do the RAID.

I don't see why you would want to use btrfs on the hosts. For Openstack
or for XCP, I don't think it makes sense.

If that's for Cinder, then yes, just use RAID1, IMO.

If it's for Swift, then don't use any RAID: use JBOD, LVM (which can
stripe over multiple disks, improving performance) and XFS. If one HDD
fails, just reinstall the whole machine and wait until it syncs (it
doesn't matter, as you normally always have 3 copies of the data at the
same time).
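
As a rough sketch (device names, VG/LV names and the mount point are
just examples, and you should double-check the options against the
Swift deployment guide), that kind of setup could look like:

# one VG over the JBOD disks, one striped LV, formatted with XFS
vgcreate vg_swift /dev/sdb /dev/sdc /dev/sdd
lvcreate -i 3 -I 64 -l 100%FREE -n lv_swift vg_swift
mkfs.xfs -i size=1024 /dev/vg_swift/lv_swift
mkdir -p /srv/node/lv_swift
mount -o noatime,nodiratime /dev/vg_swift/lv_swift /srv/node/lv_swift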

Cheers,

Thomas


