- Supported hardware
- 1. Installation
- 2. Warnings
- 3. Terminology
- 4. How to use
- 5. Assistance or feedback
- 6. Contribute
LibStorageMgmt is a vendor-neutral library that provides an API and tools for managing SAN arrays and local hardware RAID adapters. It is a community open-source project (LGPL 2.1+ license) providing, among others, the following features:
- Storage system, pool, volume, filesystem, disk monitoring.
- Volume create, delete, resize, and mask.
- Volume snapshot, replication create and delete.
- NFS file system create, delete, resize and expose.
- Access group create, edit, and delete.
Server resources such as CPU and interconnect bandwidth are not utilized because the operations are all done on the array.
The package provides:
- Stable C and Python API for client application and plug-in developers.
- Command line interface that utilizes the library (lsmcli).
- Daemon that executes the plug-in in a separate process (lsmd).
- Simulator plug-in that allows the testing of client applications (sim).
- Plugin architecture for interfacing with arrays.
libStorageMgmt uses a URI to identify which plugin should be used. The URI format is:
plugin://<username>@host:<port>/?<query string parameters>
plugin+ssl://<username>@host:<port>/?<query string parameters>
# All plugins except 'simc://' and 'sim://' support ssl encryption.
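Since the URI follows standard URL syntax, its pieces can be inspected with any URL parser. A minimal sketch in Python (the host name, user, and namespace below are made-up values for illustration only, not real endpoints):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SMI-S URI, purely illustrative: the plugin name (plus the
# optional "+ssl" transport suffix) appears as the URL scheme.
uri = "smispy+ssl://admin@array.example.com:5989/?namespace=root/lsiarray13"

parts = urlparse(uri)
print(parts.scheme)           # plugin and transport: 'smispy+ssl'
print(parts.username)         # 'admin'
print(parts.hostname)         # 'array.example.com'
print(parts.port)             # 5989
print(parse_qs(parts.query))  # {'namespace': ['root/lsiarray13']}
```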
Supported hardware
Current plugins and supported storage products
Plugin | URI Syntax | Supported Products |
---|---|---|
Simulator C | simc:// | Only for development or testing client applications |
Simulator | sim:// | Only for development or testing client applications |
ONTAP | ontap://&lt;user&gt;@&lt;host&gt; | NetApp ONTAP |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt; | EMC VMAX/DMX/VNX/CX |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt; | NetApp ONTAP |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt; | IBM XIV/DS/SVC |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt;?namespace=root/lsiarray13 | NetApp E-Series |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt; | Huawei HVS |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt; | Other arrays with SMI-S 1.4+ |
Targetd | targetd://&lt;user&gt;@&lt;host&gt; | Linux targetd |
Nstor | nstor://&lt;user&gt;@&lt;host&gt; | NexentaStor 4.x/3.x |
LSI MegaRAID | megaraid:// | LSI MegaRAID |
SMI-S | smispy://&lt;user&gt;@&lt;host&gt;?namespace=root/LsiMr13 | LSI MegaRAID |
HP SmartArray | hpsa:// | HP SmartArray |
1. Installation
The libStorageMgmt packages exist in the RHEL 7 and Fedora repositories. EL6 support is available in the Fedora EPEL repository.
To install libStorageMgmt for command-line use, along with the required run-time libraries and simulator plug-ins, use the following command:
$ sudo yum install libstoragemgmt
To develop C applications that utilize the library, install the libstoragemgmt-devel and, optionally, the libstoragemgmt-debuginfo packages with the following command:
$ sudo yum install libstoragemgmt-devel libstoragemgmt-debuginfo
To install libStorageMgmt for use with hardware arrays, install one or more of the appropriate plug-in packages, for example:
$ sudo yum install libstoragemgmt-smis-plugin \
libstoragemgmt-netapp-plugin \
libstoragemgmt-nstor-plugin \
libstoragemgmt-targetd-plugin \
libstoragemgmt-megaraid-plugin
Please refer to install_guide for detailed information on installation.
2. Warnings
This library and associated tools have the ability to destroy any and all data located on arrays that it manages. It is highly recommended to develop and test applications and scripts against the storage simulator plug-in to remove any logic errors before working with production systems. Testing applications and scripts on actual non-production hardware before deploying to production is strongly encouraged if possible.
3. Terminology
Term | Meaning | Synonym |
---|---|---|
System | Represents a storage array or a direct-attached storage RAID controller. | None |
Pool | A group of storage space; Filesystems or Volumes are typically created from a Pool. | StoragePool (SNIA terminology) |
Volume | Storage Area Network (SAN) storage arrays can expose a Volume to a Host Bus Adapter (HBA) over different transports (FC/iSCSI/FCoE/etc.). The host OS treats it as a block device (one Volume can appear as many disks when multipath[2] is enabled). | LUN (Logical Unit Number), StorageVolume (SNIA terminology), Virtual disk |
Filesystem | Network Attached Storage (NAS) arrays can expose a Filesystem to a host OS over an IP network using the NFS or CIFS protocol. The host OS treats it as a mount point or a folder containing files, depending on the client operating system. | None |
Disk | A physical disk holding the data. Pools typically consist of one or more disks. | DiskDrive (SNIA terminology) |
Initiator | In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the Initiator is the WWPN (World Wide Port Name)[3] and/or WWNN (World Wide Node Name). In iSCSI, the Initiator is the IQN (iSCSI Qualified Name)[4]. In NFS or CIFS, the Initiator is the host name or IP address of the host. | None |
Access group | A collection of iSCSI/FC/FCoE initiators that are granted access to one or more Volumes, ensuring that Volumes are accessible only by the specified initiator(s). | Initiator group (igroup), Host group |
Volume Mask | Exposing a Volume to a specified Access Group. libStorageMgmt currently does not support masking with a caller-chosen logical unit number (LUN); it lets the storage array select the next available LUN for assignment. Consult your OS, storage array, or HBA documentation if you are configuring boot from SAN or masking 256+ Volumes. Volumes are masked to all target ports; future versions of libStorageMgmt may allow specifying a target port. | LUN mapping, LUN masking |
Volume Unmask | Reverse of Volume Mask. | LUN unmap, LUN unmask |
Clone | A point-in-time, read-writable, space-efficient copy of data. | Read-writable snapshot |
Copy | A full bitwise copy of the data (occupies full space). | None |
Mirror SYNC | I/O is blocked until it reaches both the source and target storage systems, so there is no data difference between source and target. | None |
Mirror ASYNC | I/O is blocked only until it reaches the source storage system; the source then copies changed data to the target at a predefined interval, so a small data difference between source and target may exist. | None |
Snapshot | A point-in-time (PIT), read-only, space-efficient copy of a filesystem. | Read-only snapshot |
Child dependency | Some arrays have an implicit relationship between the origin (parent Volume or Filesystem) and the child (e.g. Snapshot, Clone); for example, the parent cannot be deleted while it has one or more dependent children. The API provides methods to determine whether such a relationship exists and a method to remove the dependency by replicating the required blocks. | None |
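The Volume Mask / Unmask relationship above can be sketched as a toy data model. Note that the class and function names below are invented for this illustration; they are not part of the libStorageMgmt API:

```python
# Toy model of the Access Group / Volume Mask relationship described above.
# All names here are illustrative only -- they are NOT lsm API calls.

class AccessGroup:
    """A collection of initiators granted access to one or more volumes."""
    def __init__(self, name, initiators):
        self.name = name
        self.initiators = set(initiators)
        self.masked_volumes = set()

def volume_mask(access_group, volume_id):
    # Expose a volume to every initiator in the group; the array (not the
    # caller) would pick the LUN, as noted in the terminology table.
    access_group.masked_volumes.add(volume_id)

def volume_unmask(access_group, volume_id):
    # Reverse of volume mask.
    access_group.masked_volumes.discard(volume_id)

ag = AccessGroup("web_servers", ["iqn.1994-05.com.example:host1"])
volume_mask(ag, "VOL_ID_00000001")
print(ag.masked_volumes)    # {'VOL_ID_00000001'}
volume_unmask(ag, "VOL_ID_00000001")
print(ag.masked_volumes)    # set()
```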
4. How to use
These prerequisites must be met:

- The required libStorageMgmt plugin is installed.
- The libStorageMgmt daemon (lsmd) is running. Normally, “systemctl start libstoragemgmt.service” will suffice.
- A correct URI and password are provided.
4.1. Command Line Tool Example
Please check the manpage of lsmcli for details.
$ sudo systemctl start libstoragemgmt.service
$ export LSMCLI_URI='sim://'
$ unset LSMCLI_PASSWORD
$ lsmcli list --type volumes
ID | Name | SCSI VPD 0x83 | Size ...
------------------------------------------------------------------------------ ...
VOL_ID_00000001 | Volume 000 | 54c71dff7388205e8694a837f12c290c | 214748364800 ...
VOL_ID_00000002 | Volume 001 | 88e35c747a4c3a92df2070b81e841c62 | 214748364800 ...
4.2. Python Code Example
Please refer to the libStorageMgmt Python API guide for details.
#!/usr/bin/python3
import lsm

# Make connection.
lsm_cli_obj = lsm.Client("sim://")

# Enumerate storage pools.
pools = lsm_cli_obj.pools()

# Use pool information.
for p in pools:
    print('pool name:', p.name, 'freespace:', p.free_space)

# Close connection.
if lsm_cli_obj is not None:
    lsm_cli_obj.close()
    print('We closed')
4.3. C code Example
Please refer to the libStorageMgmt C API document for details.
#include <stdio.h>
#include <inttypes.h>
#include <libstoragemgmt/libstoragemgmt.h>

/*
 * If you have the development library package installed:
 *   $ gcc -Wall client_example.c -lstoragemgmt -o client_example
 *
 * If building out of the source tree:
 *   $ gcc -Wall -g -O0 client_example.c -I../c_binding/include/ \
 *         -L../c_binding/.libs -lstoragemgmt -o client_example
 */

void error(char *msg, int rc, lsm_error *e)
{
    if( rc ) {
        printf("%s: error: %d\n", msg, rc);
        if( e && lsm_error_message_get(e) ) {
            printf("Msg: %s\n", lsm_error_message_get(e));
            lsm_error_free(e);
        }
    }
}

void list_pools(lsm_connect *c)
{
    lsm_pool **pools = NULL;
    int rc = 0;
    uint32_t count = 0;

    rc = lsm_pool_list(c, NULL, NULL, &pools, &count, LSM_CLIENT_FLAG_RSVD);
    if( LSM_ERR_OK == rc ) {
        uint32_t i;
        for( i = 0; i < count; ++i) {
            printf("pool name: %s freespace: %"PRIu64"\n",
                   lsm_pool_name_get(pools[i]),
                   lsm_pool_free_space_get(pools[i]));
        }
        lsm_pool_record_array_free(pools, count);
    } else {
        error("Pool list", rc, lsm_error_last_get(c));
    }
}

int main()
{
    lsm_connect *c = NULL;
    lsm_error *e = NULL;
    int rc = 0;
    const char *uri = "sim://";

    rc = lsm_connect_password(uri, NULL, &c, 30000, &e, LSM_CLIENT_FLAG_RSVD);
    if( LSM_ERR_OK == rc ) {
        printf("We connected...\n");
        list_pools(c);

        rc = lsm_connect_close(c, LSM_CLIENT_FLAG_RSVD);
        if( LSM_ERR_OK != rc ) {
            error("Close", rc, lsm_error_last_get(c));
        } else {
            printf("We closed\n");
        }
    } else {
        error("Connect", rc, e);
    }
    return rc;
}
5. Assistance or feedback
We’d love to hear from you.
For general questions, please contact us via email or by creating an issue on the GitHub repository.

- Email: libstoragemgmt-users@lists.fedorahosted.org

Bug reports or suspected bugs can be submitted through the same channels.

- User mailing list archive: https://lists.fedorahosted.org/mailman/listinfo/libstoragemgmt-users
- Developer mailing list archive: https://lists.fedorahosted.org/mailman/listinfo/libstoragemgmt-devel
6. Contribute
Please subscribe to the “libstoragemgmt-devel@lists.fedorahosted.org” mailing list: https://lists.fedorahosted.org/mailman/listinfo/libstoragemgmt-devel
For libStorageMgmt library code, please refer to the Library Developer Guide.
For libStorageMgmt plugin code, please refer to the C Plugin Developer Guide or the Python Plugin Developer Guide.