
Red Hat Storage Software Appliance 3.2

3.2 Release Notes

Release Notes for Red Hat Storage Software Appliance

Edition 1

Red Hat Engineering Content Services


Legal Notice

Copyright © 2011 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.


1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701

Abstract
These Release Notes introduce Red Hat Storage Software Appliance, describe the minimum requirements, and provide information on downloading and installing the software in your environment.

Preface
1. License
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. We Need Feedback!
1. Introducing Red Hat Storage Software Appliance
2. Key Features
3. System Requirements
4. Downloading and Installing Red Hat Storage Software Appliance
4.1. Downloading Red Hat Storage Software Appliance
4.2. Installing Red Hat Storage Software Appliance
5. Known Issues
6. Product Support
7. Product Documentation
A. Revision History

Preface

These Release Notes include the following sections for the 3.2 release of Red Hat Storage Software Appliance:

Note

It is recommended that you thoroughly review these Release Notes before installing or migrating to Red Hat Storage Software Appliance.

Important

Existing Gluster Storage Software Appliance users can migrate to Red Hat Storage Software Appliance. For step-by-step instructions on migrating to Red Hat Storage Software Appliance, see http://download.gluster.com/pub/gluster/RHSSA/3.2/Documentation/UG/html/chap-User_Guide-gssa_migrate.html.

1. License

License information is available at www.redhat.com/licenses/rhel_rha_eula.html.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
  • search or browse through a knowledgebase of technical support articles about Red Hat products.
  • submit a support case to Red Hat Global Support Services (GSS).
  • access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. We Need Feedback!

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Documentation.
When submitting a bug report, be sure to mention the manual's identifier: Release_Notes
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Chapter 1. Introducing Red Hat Storage Software Appliance

The Red Hat Storage Software Appliance 3.2 enables enterprises to treat physical storage as a virtualized, scalable, standardized, scale-on-demand, and centrally managed pool of storage. Enterprises now have the capability to leverage storage resources the same way they have leveraged computing resources, radically improving storage economics through the use of commodity storage hardware. The appliance's global namespace capability aggregates disk and memory resources into a unified storage volume that is abstracted from the physical hardware. It supports multi-tenancy by partitioning users or groups into logical volumes on shared storage, and scales to petabytes of storage capacity.
Red Hat Storage Software Appliance is POSIX-compliant, so its interface abstracts vendor-specific APIs and applications need not be modified.
Because performance and capacity scale linearly, you can add capacity as required in a matter of minutes across a wide variety of workloads without affecting performance. Storage can also be centrally managed across a variety of workloads, enabling operations teams to manage storage used for different purposes more efficiently.
The storage software appliance enables users to eliminate their dependence on high-cost, difficult-to-deploy, and hard-to-manage monolithic storage arrays. With a storage software appliance, enterprises can deploy commodity storage hardware and realize superior economics.
Figure 1.1. Red Hat Storage Software Appliance Architecture

The heart of Red Hat Storage Software Appliance is GlusterFS, an open source distributed file system distinguished by several architectural differences, including a modular, stackable design and a unique no-metadata-server architecture. Eliminating the metadata server provides better performance, improved linear scalability, and increased reliability.

Chapter 2. Key Features

This chapter describes the key features available in Red Hat Storage Software Appliance. The following is a list of feature highlights of this version:
  • Elastically scale storage with no downtime or application interruption
  • High Availability support via N-way replication
  • Scale availability, performance and capacity linearly and independently
  • No-metadata server eliminates performance bottleneck and ensures linear scalability
  • Utilization and performance monitoring, measuring and reporting
  • No changes to applications or management tools
  • Aggregate CPU, memory, network & disk resources
  • Scale-out capacity and performance as needed
  • For deployments that value scale-out architectures and speed

Chapter 3. System Requirements

Before you install Red Hat Storage Software Appliance, you must verify that your environment matches the minimum requirements described in this section.
General Configuration Considerations
The system must meet the following general requirements:
  • Centralized time servers are available (required in clustered environments)
    For example, ntpd, the Network Time Protocol (NTP) daemon.
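    The NTP daemon can be enabled on a Red Hat Enterprise Linux based node as follows; this is a minimal sketch assuming the standard ntp package and SysV init scripts:
    # chkconfig ntpd on
    # service ntpd start
    # ntpq -p
    The last command lists the configured time servers so you can verify that the node is synchronizing.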
File System Requirements
Red Hat recommends XFS when formatting the disk sub-system. XFS supports metadata journaling, which facilitates quicker crash recovery. The XFS file system can also be defragmented and enlarged while mounted and active.
For existing Gluster Storage Software Appliance customers who are upgrading to Red Hat Storage Software Appliance, the Ext3 and Ext4 file systems are supported.
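For reference, a brick can be formatted with XFS as recommended above. This is a minimal sketch: the device name /dev/sdb and the mount point are placeholders, and the larger inode size is a common recommendation for accommodating GlusterFS extended attributes rather than a value mandated by these Release Notes.
# mkfs.xfs -i size=512 /dev/sdb
# mkdir -p /export/brick1
# mount -t xfs /dev/sdb /export/brick1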
Cluster Requirements
  • Minimum of four SSA nodes and maximum of 64 SSA nodes.
    Larger configurations can be supported, but require an exception.
  • Initial cluster deployment can be of heterogeneous nodes.
  • Configuration upgrades can support a mixture of node sizes.
    For example, you can add a node with 4TB drives to a cluster with 2TB drives. However, in a replicated configuration, nodes must be added in pairs such that a node and its replica are the same size (see the example command after the configuration listing below).
    Depending on whether the cluster is used for High Performance Computing (HPC), General Purpose, or Archival workloads, the supported configurations are listed below.
HPC
  • Chassis (only applicable with SuperMicro): 2U, 24x 2.5" hot-swap bays with redundant power
  • Processor: Dual-socket six-core Xeon
  • Disk: 24x 2.5" 15K RPM SAS
  • Minimum RAM: 48 GB
  • Networking: 2x 10 GigE
  • Maximum number of JBOD attachments: 0
  • Supported Dell model: R510
  • Supported HP models: DL-180, DL-370, DL-380
  • JBOD support: NA
General Purpose
  • Chassis (only applicable with SuperMicro): 2U, 12x 3.5" hot-swap bays with redundant power
  • Processor: Dual-socket six-core Xeon
  • Disk: 12x 3.5" or 24x 2.5" SFF 6 Gb/s SAS
  • Minimum RAM: 32 GB
  • Networking: 2x 10 GigE (preferred) or 2x 1 GigE
  • Maximum number of JBOD attachments: 2
  • Supported Dell model: R510
  • Supported HP models: DL-180, DL-370, DL-380
  • JBOD support: Dell MD-1200, HP D-2600, HP D-2700
Archival
  • Chassis (only applicable with SuperMicro): 4U, 36x 3.5" hot-swap bays with redundant power
  • Processor: Dual-socket six-core Xeon
  • Disk: 36x 3.5" 3 Gb/s SATA II
  • Minimum RAM: 16 GB
  • Networking: 2x 10 GigE (preferred) or 2x 1 GigE
  • Maximum number of JBOD attachments: 4
  • Supported Dell model: R510
  • Supported HP model: DL-180
  • JBOD support: Dell MD-1200, HP D-2600, HP D-2700
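As noted above, in a replicated configuration nodes must be added in pairs. On a volume with a replica count of two, this corresponds to adding bricks two at a time; for example (the server names and brick paths are placeholders):
# gluster volume add-brick VOLNAME server5:/exp5 server6:/exp6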

Note

The boot device must be 1.4 GB or larger.
All data disks are configured in groups of 12 drives in a RAID 6 configuration. InfiniBand is supported on an exception basis only.
Networking Requirements
Verify that one of the following network types is available:
  • Gigabit Ethernet
  • 10 Gigabit Ethernet
Compatible Hardware
For a successful installation of Red Hat Storage Software Appliance 3.2, you must select your hardware from the supported Dell, HP, or SuperMicro models listed below.
Table 3.1. Dell Supported Configurations
Chassis
  • Recommended: Redundant power configuration
  • Supported: R510, R710 (Intel® 5520 chipset)
  • Unsupported: All other Dell models (by exception only)
Processor
  • Supported: Dual six-core processors:
    • Intel® Xeon® X5690 - 3.46GHz
    • Intel® Xeon® X5680 - 3.33GHz
    • Intel® Xeon® X5675 - 3.06GHz
    • Intel® Xeon® X5660 - 2.80GHz
    • Intel® Xeon® X5650 - 2.66GHz
    • Intel® Xeon® E5649 - 2.53GHz
    • Intel® Xeon® E5645 - 2.40GHz
    • Intel® Xeon® L5640 - 2.26GHz
    (also any faster versions of six-core processors)
  • Unsupported: Quad-core processors, single-socket configurations, AMD-based servers
Memory
  • Recommended: 32GB
  • Supported: 24GB minimum, 64GB maximum
NIC
  • No entries listed
RAID
  • Supported: PERC 6/E SAS 1gb/512, PERC H800 1gb/512
  • Unsupported: Dell single-channel Ultra SCSI
System Disk
  • Supported: 2x 200GB minimum (mirrored), 7.2K or 10K/15K RPM
Data Disk
  • Unsupported: SSD, SFF drives

Table 3.2. HP Supported Configurations
Chassis
  • Recommended: Either model, with redundant power configuration
  • Supported: DL-180 G6, DL-370 G7, DL-380 G7 (Intel® 5520 chipset)
  • Unsupported: All other HP models (by exception only)
Processor
  • Supported: Dual six-core processors:
    • Intel® Xeon® X5690 - 3.46GHz
    • Intel® Xeon® X5680 - 3.33GHz
    • Intel® Xeon® X5675 - 3.06GHz
    • Intel® Xeon® X5660 - 2.80GHz
    • Intel® Xeon® X5650 - 2.66GHz
    • Intel® Xeon® E5649 - 2.53GHz
    • Intel® Xeon® E5645 - 2.40GHz
    • Intel® Xeon® L5640 - 2.26GHz
    (also any faster versions of six-core processors)
  • Unsupported: Quad-core processors, single-socket configurations, AMD-based servers
Memory
  • Recommended: 32GB
  • Supported: 16GB minimum, 128GB maximum
NIC
  • No entries listed
RAID
  • Recommended: HP Smart Array P410/512 with FBWC, with Smart Array Advanced Pack (SAAP)
  • Supported: HP Smart Array P410/256 with FBWC or HP Smart Array P410/512 with FBWC, with Smart Array Advanced Pack (SAAP)
  • Unsupported: HP Smart Array B110i, HP Smart Array P212, HP Smart Array P410 with BBWC
System Disk
  • Supported: 2x 200GB minimum (mirrored), 7.2K or 10K/15K RPM
Data Disk
  • Unsupported: SSD, SFF drives

Chapter 4. Downloading and Installing Red Hat Storage Software Appliance

This chapter provides information on downloading and installing the Red Hat Storage Software Appliance.

4.1. Downloading Red Hat Storage Software Appliance

You can download the latest Red Hat Storage Software Appliance from https://access.redhat.com.

4.2. Installing Red Hat Storage Software Appliance

You can install Red Hat Storage Software Appliance using a USB stick, an ISO image, or a PXE boot.
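For example, an ISO image can be written to a USB stick with dd. This is a sketch only: the image file name and the device name /dev/sdX are placeholders, and writing to the wrong device destroys its contents.
# dd if=RHSSA-3.2.iso of=/dev/sdX bs=4M
# sync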

Chapter 5. Known Issues

The following are the known issues:
  • Issues related to Distributed Replicated Volumes:
    • When a process has changed directory (cd) into a directory, a stat of a deleted file recreates it (directory self-heal is not triggered).
      In a GlusterFS replicated setup, suppose your current working directory is a directory (for example, Test) on a replicated volume, and a file inside that directory is deleted from another node. If you then perform a stat operation on the deleted file name, the file is automatically recreated; that is, proper directory self-heal is not triggered while a process has changed into that path.
    • Self-heal on an open file descriptor blocks I/O on that descriptor.
      While self-heal runs on open file descriptors in a replicated volume, I/O operations on the affected file descriptor may be blocked.
  • Issues related to Distributed Volumes:
    • Rebalance does not run if bricks are down.
      Before running a rebalance, make sure that all bricks are up and connected (see the example commands after this list).
    • Rebalance can migrate data to an already full subvolume.
      The current rebalance algorithm does not consider the free space on the target brick before migrating data. This enhancement is under development and will be available shortly.
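    For example, before starting a rebalance you can confirm that all peers are connected and all bricks are listed (VOLNAME is a placeholder):
    # gluster peer status
    # gluster volume info VOLNAME
    # gluster volume rebalance VOLNAME start
    # gluster volume rebalance VOLNAME status
    Every peer should report a connected state before the rebalance is started.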
  • There may be minor I/O glitches when a rebalance operation is performed. The live rebalance feature will be available in upcoming 3.3.x releases. It is recommended to perform rebalance operations when no critical I/O operations are in progress.
  • glusterfsd - the error return code is not reliable after the process daemonizes.
    Due to this, scripts that mount glusterfs or start a glusterfs process must not depend on its return value (see the example below).
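    For example, a mount script can verify the mount point against /proc/mounts instead of relying on the exit status. The server name, volume name, and mount point below are placeholders:
    # mount -t glusterfs server1:/VOLNAME /mnt/gluster
    # grep -qs /mnt/gluster /proc/mounts && echo mounted || echo not mounted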
  • glusterd - parallel rebalance is not supported.
    With the current rebalance mechanism, the machine issuing the rebalance becomes a bottleneck because all data migration happens through that machine.
  • Parallel CLI operations (add brick, remove brick, and so on) issued from different nodes can crash glusterd.
  • After the # gluster volume replace-brick VOLNAME Brick New-Brick commit command is issued, in-transit file system operations on that volume will fail.
  • The # gluster volume replace-brick ... command will fail in an RDMA setup.
  • If files and directories have different GFIDs on different backends, the GlusterFS client may hang or display errors.
    Workaround: The workaround for this issue is explained at http://gluster.org/pipermail/gluster-users/2011-July/008215.html.
  • Downgrading from 3.2.x to 3.1.x
    If you are using 3.2.x, new features are enabled in the default volume files (that is, new translators). After a downgrade, older versions do not understand the new options and translators and fail to start.
    Workaround: Before starting the downgrade procedure, run the following commands:
    # gluster volume reset VOLNAME force
    # gluster volume geo-replication stop MASTER SLAVE
    Now you can downgrade to 3.1.x.
    After the downgrade, run a parameter-changing operation on the volume to regenerate the volume files. For example, run # gluster volume set VOLNAME read-ahead off followed by # gluster volume set VOLNAME read-ahead on.
  • Issues related to Directory Quota:
    • Some writes can appear to succeed even though the quota limit is exceeded (the write returns success), because the data may be cached in write-behind. Disk usage will not exceed the quota limit, however, because quota rejects the writes when they reach the backend. Applications should therefore check the return value of the close call (see the example after this list).
    • If a user has changed directory (cd) into a directory on which the administrator is setting a limit, the command succeeds, but the new limit value does not apply to users whose current working directory is inside that directory. For those users, the old limit value applies until they change out of the directory.
    • A rename operation (that is, removing the old path and creating the new path) requires additional disk space equal to the file size. This is because quota checks whether the limit is exceeded on the parents of the new file before the rename operation, and subtracts the size of the old path only after the rename is performed.
    • The quota feature is not available on striped volumes.
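    For example, a test write can be forced through the write-behind cache with dd's conv=fsync flag, so that a quota violation is reported before the command exits (the mount path is a placeholder):
    # dd if=/dev/zero of=/mnt/gluster/quota_dir/testfile bs=1M count=10 conv=fsync
    # echo $?
    A non-zero exit status indicates that the write, including the flush to the backend, did not fully succeed.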
  • Issues related to POSIX ACLs:
    • Even though POSIX ACLs are set on a file or directory, the + (plus) sign is not displayed in the file permissions. This is a performance optimization and will be fixed in a future release.
    • When glusterfs is mounted with -o acl, directory read performance can degrade. Commands such as recursive directory listings can be slower than normal.
    • When POSIX ACLs are set and multiple NFS clients are used, ACLs may be applied inconsistently due to attribute caching in NFS. For a consistent view of POSIX ACLs in a multiple-client setup, use the -o noac option on the NFS mount to switch off attribute caching (see the example mount command after this list). This may have a performance impact on operations involving attributes.
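    For example, a consistent view of POSIX ACLs across NFS clients can be obtained by mounting with attribute caching disabled. The server name, volume name, and mount point are placeholders; the GlusterFS NFS server uses NFS version 3:
    # mount -t nfs -o vers=3,noac server1:/VOLNAME /mnt/nfs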
  • The following minor features are known to be missing:
    • Mandatory locking is not supported.
    • NLM (Network Lock Manager) is not supported.

Chapter 6. Product Support

You can reach support at http://www.redhat.com/support.

Chapter 7. Product Documentation

Product documentation for Red Hat Storage Software Appliance is available at http://www.gluster.com/community/documentation/index.php/Main_Page.

Revision History

Revision 1-9    Mon Dec 20 2011    Divya Muntimadugu
Updated Release Notes for file system updates.
Revision 1-1    Fri Nov 18 2011    Daniel Macpherson
Transfer of book to Red Hat Documentation site