
A.2. Sharing LVM volumes

Warning: LVM is not cluster-aware

Be very careful doing this: LVM is not currently cluster-aware, and it is very easy to lose all your data.

If you have a fibre-channel or shared-SCSI environment in which more than one machine has physical access to a set of disks, you can use LVM to divide those disks up into logical volumes. If you want to share the data itself, you should really be looking at GFS or another cluster filesystem.
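As a concrete sketch (the device names /dev/sdb and /dev/sdc and the names vg_shared and lv_data are made up for this example), dividing two shared disks into logical volumes might look like this, run on one node only:

# Initialise the shared disks as physical volumes
pvcreate /dev/sdb /dev/sdc
# Group them into a single volume group
vgcreate vg_shared /dev/sdb /dev/sdc
# Carve out a 10GB logical volume for data
lvcreate -L 10G -n lv_data vg_shared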

The key thing to remember when sharing volumes is that all LVM administration must be done on one node only, and that all other nodes must have LVM shut down before anything is changed on the admin node. Then, once the changes have been made, it is necessary to run vgscan on the other nodes before reloading the volume groups. Also, unless you are running a cluster-aware filesystem (such as GFS) or a cluster-aware application on the volume, only one node can mount each filesystem. It is up to you, as system administrator, to enforce this; LVM will not stop you from corrupting your data.

The startup sequence of each node is the same as for a single-node setup with

vgscan
vgchange -ay
in the startup scripts.
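For example, a minimal boot-time fragment might look like the following (the script location and ordering are distribution-specific, so treat this as a sketch):

#!/bin/sh
# Activate LVM at boot, e.g. from an init script
vgscan          # scan all disks for volume groups
vgchange -ay    # activate all the volume groups that were found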

If you need to make any changes to the LVM metadata (regardless of whether they affect volumes mounted on other nodes), you must go through the following sequence. In the steps below, ``admin node'' is any arbitrarily chosen node in the cluster.

Admin node                   Other nodes
----------                   -----------
                             Close all Logical volumes (umount)
                             vgchange -an
<make changes, e.g. lvextend>
                             vgscan
                             vgchange -ay
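In concrete terms (using a hypothetical volume group vg_shared mounted at /mnt/data), extending a logical volume would look something like this:

# On every node EXCEPT the admin node:
umount /mnt/data              # close all LVs in the volume group
vgchange -an vg_shared        # deactivate the volume group

# On the admin node (the VG stays active here):
lvextend -L +1G /dev/vg_shared/lv_data

# Back on the other nodes:
vgscan                        # pick up the changed metadata
vgchange -ay vg_shared        # reactivate the volume group
mount /mnt/data               # remount if required (assumes an fstab entry)

Note that lvextend only grows the logical volume; growing the filesystem on top of it is a separate step.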

Note: VGs should be active on the admin node

You do not need to, nor should you, unload the VGs on the admin node, so this can be the node with the highest uptime requirement.

I'll say it again: Be very careful doing this.