The figure below shows the main terms used in logical volume manager software.
This is a list of volume managers that I collected while searching the web:
XLV is SGI's volume manager, which is integrated with XFS (SGI's file system).
From the above URL:
The xlv volume manager (XLV) is an integral part of the XFS
filesystem(1). The volume manager provides an operational interface to
the system's disks and isolates the higher layers of the filesystem and
applications from the details of the hardware. Essentially, higher-level
software "sees" the logical volumes created by XLV exactly like disks.
Yet, a logical volume is a faster, more reliable "disk" made from
many physical disks providing important features such as the following
(discussed in detail later):
- concatenating volumes for a larger disk
- striping volumes for a larger disk with more bandwidth
- plexing (mirroring) volumes for a more reliable disk
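The three layouts can be understood as different mappings from a logical block number to a physical (disk, block) location. A minimal sketch of the idea (illustrative only; XLV's real mapping is more involved):

```python
# Hypothetical sketch of how a volume manager might map a logical block
# to (disk, block) for the three XLV layouts; not XLV's actual algorithm.

def concat_map(lblock, disk_sizes):
    """Concatenation: disks are laid end to end."""
    for disk, size in enumerate(disk_sizes):
        if lblock < size:
            return disk, lblock
        lblock -= size
    raise ValueError("logical block beyond volume end")

def stripe_map(lblock, ndisks, stripe_unit):
    """Striping: consecutive stripe units rotate across disks."""
    unit, offset = divmod(lblock, stripe_unit)
    disk = unit % ndisks
    row = unit // ndisks
    return disk, row * stripe_unit + offset

def plex_map(lblock, nplexes):
    """Plexing (mirroring): every plex holds the same block."""
    return [(plex, lblock) for plex in range(nplexes)]
```

For example, with two 100-block disks laid end to end, logical block 150 falls at block 50 of the second disk; with a 4-block stripe unit over two disks, logical block 5 falls in the second stripe unit and so on the second disk.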
The use of volumes enables XFS to create filesystems or raw devices
that span more than one disk partition. These volumes behave like
regular disk partitions and appear as block and character devices in the
/dev directory. Filesystems, databases, and other applications access the
volumes rather than the partitions. Each volume can be used as a single
filesystem or as a raw partition. A logical volume might include partitions
from several physical disks and, thus, be larger than any of the physical
disks. Filesystems built on these volumes can be created, mounted, and used
in the normal way.
The volume manager stores all configuration data in disk labels. These labels are stored on each disk and are replicated so that a logical volume can be assembled even if some pieces are missing. There is a negligible performance penalty for using XLV compared to accessing the disk directly, although plexing (mirroring data) mildly degrades write performance.
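The label scheme can be pictured as follows: each disk records which volume it belongs to and which piece it is, so a scan of all visible disks is enough to decide which volumes can be assembled. A hypothetical sketch (the names and label format here are invented, not XLV's on-disk format):

```python
# Hypothetical sketch of label-based volume assembly: each disk carries a
# replicated label naming its volume and piece index, so the set of
# volumes can be reconstructed by scanning disks alone.

def assemble(labels):
    """labels: list of (volume, piece_index, total_pieces) read from disks.
    Returns {volume: (found_piece_indexes, total_pieces, complete?)}."""
    volumes = {}
    for vol, idx, total in labels:
        volumes.setdefault(vol, (set(), total))
        volumes[vol][0].add(idx)
    return {vol: (sorted(found), total, len(found) == total)
            for vol, (found, total) in volumes.items()}
```

A volume with a missing piece is still recognized from the surviving labels; it is simply reported as incomplete.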
From http://linux.msede.com/lvm/ by Heinz Mauelshagen. The above URL contains good presentations and more documentation on LVM. This implementation is originally based on the OSF LVM.
From the above URL:
The Logical Volume Manager adds an additional layer between the physical devices and the block I/O interface in the kernel to allow a logical view on storage. Unlike current partition schemes where disks are divided into fixed-size contiguous partitions, LVM allows the user to consider disks, also known as physical volumes (PV), as a pool (or volume) of data storage, consisting of equal-sized extents.
An LVM system consists of arbitrary groups of physical volumes, organized into volume groups (VG). A volume group can consist of one or more physical volumes. There can be more than one volume group in the system. Once created, the volume group, and not the disk, is the basic unit of data storage (think of it as a virtual disk consisting of one or more physical disks).
The pool of disk space that is represented by a volume group can be apportioned into virtual partitions, called logical volumes (LV) of various sizes. A logical volume can span a number of physical volumes or represent only a portion of one physical volume.
The size of a logical volume is determined by its number of extents. Once created, logical volumes can be used like regular disk partitions - to create a file system or as a swap device.
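The extent model described above can be sketched as a toy data structure (illustrative only; the class names and the 4 MB extent size are assumptions, not the real Linux LVM metadata):

```python
# Toy model of LVM's extent bookkeeping (illustrative only, not the real
# Linux LVM on-disk metadata): a volume group pools equal-sized extents
# from its physical volumes, and a logical volume is just an ordered list
# of (pv, extent) pairs, so it may span several PVs.

EXTENT_MB = 4  # assumed extent size

class VolumeGroup:
    def __init__(self):
        self.free = []          # pool of (pv_name, extent_index)
        self.lvs = {}           # lv_name -> list of (pv_name, extent_index)

    def add_pv(self, name, size_mb):
        self.free += [(name, i) for i in range(size_mb // EXTENT_MB)]

    def create_lv(self, name, size_mb):
        need = size_mb // EXTENT_MB
        if need > len(self.free):
            raise ValueError("volume group out of free extents")
        self.lvs[name] = [self.free.pop(0) for _ in range(need)]

vg = VolumeGroup()
vg.add_pv("pv0", 16)            # contributes 4 extents
vg.add_pv("pv1", 16)            # contributes 4 more
vg.create_lv("lv0", 24)         # 6 extents: spans both PVs
```

Note that "lv0" is larger than either PV's remaining space alone; its size is purely a count of extents drawn from the shared pool.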
LVM was initially developed by IBM and subsequently adopted by the OSF (now OpenGroup) for their OSF/1 operating system. The OSF version was then used as a base for the HP-UX and Digital UNIX operating system LVM implementations. Another LVM implementation is available from Veritas which works differently. The Linux implementation is similar to the HP-UX LVM implementation.
Through the support of RAID redundancy techniques, VERITAS Volume Manager software helps protect against disk and hardware failures, while providing the flexibility to extend the capabilities of existing hardware. By providing a logical volume management layer, VERITAS Volume Manager overcomes the physical restrictions imposed by hardware disk devices. Key features include:
- Data redundancy (RAID 0, 1, 0+1, 1+0, 5)
- Dynamic multipathing (DMP) support
- Intuitive Java technology-based, platform-independent graphical user interface
- Easy movement of data between nodes in a SAN environment
More on the Veritas volume manager is available at http://www.veritas.com/us/products/volumemanager/, which contains a number of white papers and data sheets on the product.
Vinum is the FreeBSD volume manager. From the above URL:
See also http://www.shub-internet.org/brad/FreeBSD/vinum.html
Cloning and Snapshot - Use of SANworks Enterprise Volume Manager can be optimized by selecting either cloning or snapshot, depending on the application. Snapshots are virtual copies; clones are physical copies. Snapshots are ideal for quick recovery, and both are ideal for non-disruptive backup.
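The snapshot/clone distinction comes down to copy-on-write: a clone copies every block up front, while a snapshot preserves only blocks that change afterwards. A generic sketch of the technique (not SANworks EVM internals):

```python
# Illustrative copy-on-write snapshot vs. full clone (a generic sketch,
# not SANworks EVM internals). A clone duplicates every block up front;
# a snapshot stores only blocks that change after the snapshot was taken.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class Snapshot:
    def __init__(self, origin):
        self.origin = origin
        self.saved = {}                 # block index -> pre-write data

    def on_write(self, index, data):
        # preserve the original contents the first time a block changes
        if index not in self.saved:
            self.saved[index] = self.origin.blocks[index]
        self.origin.blocks[index] = data

    def read(self, index):
        # snapshot view: saved copy if the block changed, else live data
        return self.saved.get(index, self.origin.blocks[index])

vol = Volume(["a", "b", "c"])
snap = Snapshot(vol)
clone = Volume(vol.blocks)              # physical copy: O(volume size)
snap.on_write(1, "B")                   # only now is one block copied
```

This is why snapshots are quick to create (no data moves at snapshot time) while clones cost a full copy but remain independent of the origin volume.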
Web-based application - SANworks Enterprise Volume Manager is accessible from any system that has a web browser.
Multi-platform support - SANworks Enterprise Volume Manager operates on the RAID Array 8000 (RA8000) and the Enterprise Storage Array 12000 (ESA12000) using HSG80 controllers in switch or hub configurations. SANworks Enterprise Volume Manager supports Windows NT V4.0, Windows 2000, Sun Solaris V2.6, 7 and Tru64 Unix V4.0F/G. Other platform support is to follow. SANworks Enterprise Volume Manager provides consistent storage management regardless of the platform.
Plug and Play with existing applications - Microsoft Exchange, StorageWorks Enterprise Backup Solution, VERITAS NetBackup, VERITAS Backup Exec, Legato NetWorker, CA ARCServeIT, Oracle, and Microsoft SQL, with plans to support other applications in the future.
Supports LAN-less backup - Backup data is isolated from the general purpose LAN, so there is no network performance degradation during backup. All volume movement is on the SAN.
FC-AL or Switched Fibre Channel topologies supported - Customers with either technology can take advantage of the features of EVM. Snapshots are available in switch configurations only.
From http://www.sun.co.uk/services/educational/catalog/courses/UK-ES-310.html and http://www.carumba.com/talk/veritas/volumemanager.shtml
In addition to its sophisticated mirroring capabilities, ptx/SVM also provides disk striping, disk concatenation, hot sparing, and on-line disk management. With on-line disk management, a system administrator can optimize disk performance by moving data between disks while the system is running.
Drives under ptx/SVM control can be dynamically resynchronized with one or more mirrored partners, independent of the disk controller, without taking the system off-line. ptx/SVM can also control the resynchronization rate, which can be set to minimize impact on performance or to minimize the time required to perform a resync operation.
ptx/SVM offers users the advantages of open systems with its access to powerful and dynamic volume management tools.
ptx/SVM Highlights

Disk Mirroring - Data availability and integrity are increased with the continuous maintenance of up to 32 copies of critical data. ptx/SVM automatically uses these data mirrors in the event of a disk failure.
ptx/SVM adds greatly to system availability by allowing system administrators to dynamically create, remove, and allocate mirrors, as well as perform on-line resynchronization and snapshot backups with minimal impact on users.
If mirroring is used, depending on the layout, ptx/SVM may automatically divide the read load among all the mirrors, creating multiple read paths which can enhance system performance.
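One simple way to divide the read load among mirrors is round-robin selection; a sketch of the idea (illustrative only, not ptx/SVM's actual scheduler, which may also take the layout into account):

```python
# Sketch of dividing reads among mirrors round-robin (an illustration of
# the idea, not ptx/SVM's actual read policy).
import itertools

def make_read_scheduler(nmirrors):
    rr = itertools.cycle(range(nmirrors))
    def pick(block):
        return next(rr)        # each read goes to the next mirror in turn
    return pick

pick = make_read_scheduler(3)
targets = [pick(b) for b in range(6)]   # reads fan out across all 3 mirrors
```

Since every mirror holds identical data, any mirror can satisfy any read, which is what creates the multiple read paths mentioned above.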
Disk Concatenation - Disk concatenation allows a user to create logical volumes that can span multiple disks. Two or more physical disks or disk segments can be viewed as a single entity.
Disk Striping - Striping allows portions of multiple disks to be viewed as a single logical entity. Striping improves performance by distributing the data of a heavily used partition over several disks, thus increasing the number of heads available to read and write data.
Hot Sparing - ptx/SVM allows the designation of dedicated disks as hot spares that are used to replicate mirrored data from disks that have failed, thus increasing the availability of mirrored data.
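The hot-spare mechanism can be sketched as: on failure, claim a spare, rebuild it from a surviving mirror, and retire the failed disk (a toy illustration with invented names, not ptx/SVM code):

```python
# Toy hot-spare replacement (illustrative, not ptx/SVM): when a mirrored
# disk fails, a dedicated spare is claimed and rebuilt from a surviving
# mirror, restoring redundancy without operator intervention.

def handle_failure(mirrors, failed, spares):
    """mirrors: dict of disk name -> list of blocks; failed: disk name.
    Pops a spare, resyncs it from a survivor, and drops the failed disk."""
    if not spares:
        raise RuntimeError("no hot spare available")
    spare = spares.pop(0)
    survivor = next(d for d in mirrors if d != failed)
    mirrors[spare] = list(mirrors[survivor])   # resynchronize onto spare
    del mirrors[failed]
    return spare

mirrors = {"d0": [1, 2, 3], "d1": [1, 2, 3]}
spare = handle_failure(mirrors, "d1", ["s0"])
```

After the rebuild, the mirror set is back to full redundancy with the spare standing in for the failed disk.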
Disk Groups - ptx/SVM allows for the segregation of disks into logical groups called disk groups. Disk groups improve access to data objects by maintaining separate databases of data objects and allow the creation of up to 100,000 data objects, enabling a system to scale to very large disk farms. Disk groups can also be "exported" from one system and "imported" to another, simplifying test and staging environments.
On-Line Volume Management - ptx/SVM provides an easy-to-use administrative interface that allows data to be moved among disks while the system is running. I/O performance can be optimized on-line by reorganizing the data volumes, and general administrative tasks can be performed on-line.
ptx/SVM also supports "rolling upgrades", which allow administrators to upgrade ptx/SVM in clustered systems with minimal system downtime.
Java-Based, Menu-Oriented, or Command-Line User Interfaces - ptx/SVM provides support for a Java-based graphical user interface, a menu-driven interface, and a conventional command-line interface for system administrators.
Command Point SVM, Sequent's port of Veritas' new GUI Volume Manager Storage Administrator, performs these primary roles:
It provides top-down and detailed views of the ptx/SVM configuration.
It reports many ptx/SVM error conditions, such as I/O errors and failure of the volume configuration daemon.
Dirty Region Logging (DRL) - DRL is a fast resynchronization mechanism for private storage. If a mirrored volume needs to be resynchronized after a system crash, only the addresses with outstanding writes recorded in the log need to be resynchronized.
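The technique can be sketched with a region bitmap: mark a region dirty before writing into it, clear it once the writes are flushed, and after a crash resynchronize only the regions still marked dirty (a generic illustration of DRL; the 64-block region size is an assumption, not ptx/SVM's log format):

```python
# Sketch of dirty region logging (generic technique, not ptx/SVM's
# on-disk log format): before a write, the region covering that block is
# marked dirty; after a crash, only dirty regions are resynchronized
# instead of the whole mirrored volume.

REGION_BLOCKS = 64                      # assumed region granularity

class DirtyRegionLog:
    def __init__(self):
        self.dirty = set()

    def before_write(self, block):
        # in a real DRL this mark is persisted before the data write
        self.dirty.add(block // REGION_BLOCKS)

    def after_flush(self, block):
        self.dirty.discard(block // REGION_BLOCKS)

    def regions_to_resync(self):
        # after a crash, resync only these regions
        return sorted(self.dirty)

log = DirtyRegionLog()
log.before_write(10)       # region 0 marked dirty
log.before_write(130)      # region 2 marked dirty
log.after_flush(10)        # write completed; region 0 clean again
```

The coarse region granularity keeps the log small: one bit can cover many blocks, at the cost of resyncing a whole region when any block in it was being written.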
Sequent Support - Sequent offers full product support for ptx/SVM, including a manual for system administrators and training classes specifically for ptx/SVM. Consulting services are also available to assist with particular system configurations and implementations.
SLVM is a mechanism that permits multiple systems in an MC/LockManager cluster to share (read/write) disk resources in the form of volume groups. The objective is a highly available system by providing direct access to disks from multiple nodes and by supporting mirrored disks, thereby eliminating single points of failure.
SLVM permits a two system cluster to have read/write access to a volume group by activating the volume group in shared mode.
SLVM is designed to be used only by specialized distributed applications (such as Oracle Parallel Server) that use raw access to disks, rather than going through a file system. The applications must provide their own concurrency control for their data, as well as transaction logging and recovery facilities, as appropriate. Applications that are not network aware, such as file systems, will not be supported on volume groups activated in shared mode.
SLVM requires services provided by MC/LockManager and thus only clusters that have MC/LockManager will be able to use shared activation.