Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths, and key caps and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph are presented in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names, dialog-box text, labeled buttons, check-box and radio-button labels, and menu and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
ntpd
- Network Time Protocol (NTP) Daemon
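Time should stay synchronized across all servers in the storage pool. A minimal sketch for enabling the NTP daemon on a RHEL-style system (assuming the ntp package is already installed):
# chkconfig ntpd on
# service ntpd start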
Component | HPC | General Purpose | Archival |
---|---|---|---|
Chassis (only applicable with SuperMicro) | 2u 24x2.5" Hotswap with redundant power | 2u 12x3.5" Hotswap with redundant power | 4u 36x3.5" Hotswap with redundant power |
Processor | Dual Socket Hexacore Xeon | Dual Socket Hexacore Xeon | Dual Socket Hexacore Xeon |
Disk | 24x 2.5" 15K RPM SAS | 12x 3.5" or 24x 2.5" SFF 6Gb/s SAS | 36x 3.5" 3Gb/s SATA II |
Minimum RAM | 48 GB | 32 GB | 16 GB |
Networking | 2x10 GigE | 2x10 GigE (preferred) or 2x1GigE | 2x10 GigE (preferred) or 2x1 GigE |
Max # of JBOD attachments | 0 | 2 | 4 |
Supported Dell Model | R510 | R510 | R510 |
Supported HP Model | DL-180, DL-370, DL-380 | DL-180, DL-370, DL-380 | DL-180 |
JBOD Support | NA | Dell MD-1200, HP D-2600, HP D-2700 | Dell MD-1200, HP D-2600, HP D-2700 |
Component | Recommended | Supported | Unsupported |
---|---|---|---|
Chassis | Redundant power configuration | R510, R710 (Intel® 5520 Chipset) | All other Dell models by exception only |
Processor | Dual Six-core processors | | Unsupported processors |
Memory | 32GB | 24GB Min, 64GB Max | |
NIC | | | |
RAID | PERC 6/E SAS 1gb/512, PERC H800 1gb/512 | Dell single channel ultra SCSI | |
System Disk | 2x200GB Min (mirrored) 7.2K or 10/15 | | |
Data Disk | | | |
Component | Recommended | Supported | Unsupported |
---|---|---|---|
Chassis | | DL-180 G6, DL-370 G7, DL-380 G7 (Intel® 5520 Chipset) | All other HP models by exception only |
Processor | Dual Six-core processors | | |
Memory | 32GB | 16GB Min, 128GB Max | |
NIC | | | |
RAID | | | |
System Disk | | | |
Data Disk | | | |
Select the .iso image. In the Type field, choose USB Drive.
Copy the .iso image to the USB stick using the following command:
# cp iso_filename /media/USB/
# rhn_register
# yum update
# gluster volume stop VOLNAME
# /etc/init.d/glusterd stop
# service glusterd stop
Back up the /etc/ directory, especially /etc/glusterd.
The glusterd daemon may already be running. Hence stop the gluster service using the following command:
# service glusterd stop
# rhn_register
Move /etc/glusterd to a temporary location using the following command:
# mv /etc/glusterd /tmp/etc/glusterd.old
Copy the /etc/glusterd directory from the backup (refer to Step 2) to your new installation.
# mount /data/disk
Update /etc/fstab to mount the data disks at the same mount points as before.
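For example, a sketch of an /etc/fstab entry for a data disk (the device name /dev/sdb1 and the ext4 file system are assumptions; use the values from your original installation):
/dev/sdb1 /data/disk ext4 defaults 1 2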
# mount -a
If you customized other files under /etc/, you must update those system configuration files in your new installation.
Start the glusterd management daemon using the following command:
# service glusterd start
Start the glusterfsd process using the following command:
# gluster volume start VOLNAME force
# /etc/init.d/glusterd start
# /etc/init.d/glusterd stop
# chkconfig glusterd on
# gluster peer command
# gluster peer status
# gluster
gluster> command
gluster
gluster> peer status
# gluster peer probe server
# gluster peer probe server2 Probe successful # gluster peer probe server3 Probe successful # gluster peer probe server4 Probe successful
# gluster peer status Number of Peers: 3 Hostname: server2 Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 State: Peer in Cluster (Connected) Hostname: server3 Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 State: Peer in Cluster (Connected) Hostname: server4 Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7 State: Peer in Cluster (Connected)
# gluster peer detach server
# gluster peer detach server4 Detach successful
# gluster volume create NEW-VOLNAME [stripe COUNT | replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK1 NEW-BRICK2 NEW-BRICK3...
# gluster volume create test-volume server3:/exp3 server4:/exp4 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume create NEW-VOLNAME [transport tcp | rdma | tcp,rdma] NEW-BRICK...
# gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume info Volume Name: test-volume Type: Distribute Status: Created Number of Bricks: 4 Transport-type: tcp Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server3:/exp3 Brick4: server4:/exp4
# gluster volume create test-volume transport rdma server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume set test-volume auth.allow 10.*
# gluster volume info Volume Name: test-volume Type: Distribute Status: Created Number of Bricks: 4 Transport-type: rdma Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server3:/exp3 Brick4: server4:/exp4
# gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume set test-volume auth.allow 10.*
# gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
# gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume set test-volume auth.allow 10.*
# gluster volume create NEW-VOLNAME [stripe COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume set test-volume auth.allow 10.*
# gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 Creation of test-volume has been successful Please start the volume to access data.
# gluster volume set test-volume auth.allow 10.*
# gluster volume start VOLNAME
# gluster volume start test-volume Starting test-volume has been successful
# gluster volume info VOLNAME
# gluster volume info test-volume Volume Name: test-volume Type: Distribute Status: Created Number of Bricks: 4 Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server3:/exp3 Brick4: server4:/exp4
# gluster volume info all
# gluster volume info all Volume Name: test-volume Type: Distribute Status: Created Number of Bricks: 4 Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server3:/exp3 Brick4: server4:/exp4 Volume Name: mirror Type: Distributed-Replicate Status: Started Number of Bricks: 2 X 2 = 4 Bricks: Brick1: server1:/brick1 Brick2: server2:/brick2 Brick3: server3:/brick3 Brick4: server4:/brick4 Volume Name: Vol Type: Distribute Status: Started Number of Bricks: 1 Bricks: Brick: server:/brick6
# modprobe fuse
# dmesg | grep -i fuse
fuse init (API version 7.13)
$ sudo yum -y install openssh-server wget fuse fuse-libs openib libibverbs
$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 24009:24014 -j ACCEPT
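If clients will mount volumes over Gluster NFS, the NFS ports (38465 through 38467 by default, as noted under the nfs.port option) may also need to be opened; a sketch assuming the same RH-Firewall-1-INPUT chain:
$ sudo iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38467 -j ACCEPT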
$ md5sum RPM_file.rpm
$ sudo rpm -Uvh core_RPM_file
$ sudo rpm -Uvh fuse_RPM_file
$ sudo rpm -Uvh rdma_RPM_file
$ sudo rpm -Uvh glusterfs-core-3.2.X.x86_64.rpm
$ sudo rpm -Uvh glusterfs-fuse-3.2.X.x86_64.rpm
$ sudo rpm -Uvh glusterfs-rdma-3.2.X.x86_64.rpm
# mkdir glusterfs
# cd glusterfs
# tar -xvzf SOURCE-FILE
# ./configure
GlusterFS configure summary
==================
FUSE client : yes
Infiniband verbs : yes
epoll IO multiplex : yes
argp-standalone : no
fusermount : no
readline : yes
# make
# make install
# glusterfs --version
Ensure that you have /etc/hosts entries or a DNS server to resolve server names to IP addresses.
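For example, a minimal /etc/hosts sketch (the IP addresses are hypothetical; replace them with the addresses of your servers):
192.168.1.101 server1
192.168.1.102 server2
192.168.1.103 server3
192.168.1.104 server4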
# mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
# mount -t glusterfs server1:/test-volume /mnt/glusterfs
You can specify mount options when using the mount -t glusterfs command. Note that you need to separate all options with commas.
# mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev 0 0
server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0
HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0
# mount
server1:/test-volume on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
# df
# df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
# cd MOUNTDIR
# ls
# cd /mnt/glusterfs
# ls
# mount -t nfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
# mount -t nfs server1:/test-volume /mnt/glusterfs
If you receive the error message requested NFS version or transport protocol is not supported, mount the volume with the -o mountproto=tcp option:
# mount -o mountproto=tcp -t nfs server1:/test-volume /mnt/glusterfs
# mount -o proto=tcp,vers=3 nfs://HOSTNAME-OR-IPADDRESS:38467/VOLNAME MOUNTDIR
# mount -o proto=tcp,vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev 0 0
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
If you receive the error message requested NFS version or transport protocol is not supported, add the mountproto=tcp option to the entry:
HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0
# mount
server1:/test-volume on /mnt/glusterfs type nfs.glusterfs (rw,allow_other,default_permissions,max_read=131072)
# df
# df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
# cd MOUNTDIR
# ls
# cd /mnt/glusterfs
# ls
\\SERVERNAME\VOLNAME
\\server1\test-volume
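From a Linux client the same Samba export can be reached with a CIFS mount; a sketch assuming a Samba user named sambauser and a mount point of /mnt/smb (both hypothetical):
# mount -t cifs //server1/test-volume /mnt/smb -o user=sambauser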
# gluster volume set VOLNAME OPTION PARAMETER
# gluster volume set test-volume performance.cache-size 256MB Set volume successful
Option | Description | Default Value | Available Options |
---|---|---|---|
auth.allow | IP addresses/host names of the clients which should be allowed to access the volume. | * (allow all) | Valid IP address which includes wild card patterns including *, such as 192.168.1.* |
auth.reject | IP addresses/Host name of the clients which should be denied to access the volume. | NONE (reject none) | |
cluster.self-heal-window-size | Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. | 16 | 0 < data-self-heal-window-size < 1025 |
cluster.data-self-heal-algorithm | Selects between "full", "diff", and "reset". The "full" algorithm copies the entire file from source to sinks. The "diff" algorithm copies to sinks only those blocks whose checksums don't match with those of source. "Reset" uses a heuristic model: if the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal), the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than "diff", which has to read checksums and then read and write. | reset | full/diff/reset |
cluster.min-free-disk | Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks. | 10% | Percentage of required minimum free disk space |
cluster.stripe-block-size | Specifies the size of the stripe unit that will be read from or written to. Optionally different stripe unit sizes can be specified for different files, with the following pattern <filename-pattern:blk-size>. | 128 KB (for all files) | size in bytes |
cluster.self-heal-daemon | Allows you to turn off proactive self-heal on replicated volumes. | on | On | Off |
diagnostics.brick-log-level | Changes the log level of the bricks. | INFO | DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE |
diagnostics.client-log-level | Changes the log level of the clients. | INFO | DEBUG|WARNING|ERROR|CRITICAL|NONE|TRACE |
diagnostics.latency-measurement | Statistics related to the latency of each operation would be tracked. | off | On | Off |
diagnostics.dump-fd-stats | Statistics related to file-operations would be tracked. | off | On | Off |
features.quota-timeout | For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid. | 0 | 0 < 3600 secs |
geo-replication.indexing | Use this option to automatically sync the changes in the filesystem from Master to Slave. | off | On | Off |
network.frame-timeout | The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. | 1800 (30 mins) | 1800 secs |
network.ping-timeout | The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by the server on behalf of the client get cleaned up. When a reconnection happens, all resources need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. | 42 Secs | 42 Secs |
nfs.enable-ino32 | For 32-bit NFS clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. Applications that will benefit are those that were either built 32-bit and run on 32-bit machines, built 32-bit on 64-bit systems, or built 64-bit but use a library built 32-bit (especially relevant for python and perl scripts). Any of these conditions can lead to applications on Linux NFS clients failing with "Invalid argument" or "Value too large for defined data type" errors. | off | On | Off |
nfs.volume-access | Set the access type for the specified sub-volume. | read-write | read-write|read-only |
nfs.trusted-write | If there is an UNSTABLE write from the client, the STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust the Gluster replication logic to sync data to the disks and recover when required. COMMIT requests, if received, will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. | off | On | Off |
nfs.trusted-sync | All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. | off | On | Off |
nfs.export-dir | By default, all sub-volumes of NFS are exported as individual exports. Now, this option allows you to export only the specified subdirectory or subdirectories in the volume. This option can also be used in conjunction with nfs3.export-volumes option to restrict exports only to the subdirectories specified through this option. You must provide an absolute path. | Enabled for all sub directories. | Enable|Disable |
nfs.export-volumes | Enable/Disable exporting entire volumes. If disabled and used in conjunction with nfs3.export-dir, this option allows setting up only subdirectories as exports. | on | On | Off |
nfs.rpc-auth-unix | Enable/Disable the AUTH_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. | on | On | Off |
nfs.rpc-auth-null | Enable/Disable the AUTH_NULL authentication type. It is not recommended to change the default value for this option. | on | On | Off |
nfs.rpc-auth-allow <IP-Addresses> | Allow a comma-separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or host name |
nfs.rpc-auth-reject <IP-Addresses> | Reject a comma-separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or host name |
nfs.ports-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | off | On | Off |
nfs.addr-namelookup | Turn off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries, resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note that turning this off will prevent you from using hostnames in rpc-auth.addr.* filters. | on | On | Off |
nfs.register-with-portmap | For systems that need to run multiple NFS servers, you need to prevent more than one from registering with the portmap service. Use this option to turn off portmap registration for Gluster NFS. | on | On | Off |
nfs.port <PORT-NUMBER> | Use this option on systems that need Gluster NFS to be associated with a non-default port number. | 38465-38467 | |
nfs.disable | Turn off the volume being exported by NFS. | off | On | Off |
performance.write-behind-window-size | Size of the per-file write-behind buffer. | 1 MB | Write-behind cache size |
performance.io-thread-count | The number of threads in IO threads translator. | 16 | 0 < io-threads < 65 |
performance.flush-behind | If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. | On | On | Off |
performance.cache-max-file-size | Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. | 2 ^ 64 -1 bytes | size in bytes |
performance.cache-min-file-size | Sets the minimum file size cached by the io-cache translator. Values same as "max" above. | 0B | size in bytes |
performance.cache-refresh-timeout | The cached data for a file will be retained till 'cache-refresh-timeout' seconds, after which data re-validation is performed. | 1 sec | 0 < cache-timeout < 61 |
performance.cache-size | Size of the read cache. | 32 MB | size in bytes |
server.allow-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | on | On | Off |
# gluster peer probe HOSTNAME
# gluster peer probe server4 Probe successful
# gluster volume add-brick VOLNAME NEW-BRICK
# gluster volume add-brick test-volume server4:/exp4 Add Brick successful
# gluster volume info
Volume Name: test-volume Type: Distribute Status: Started Number of Bricks: 4 Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server3:/exp3 Brick4: server4:/exp4
# gluster volume remove-brick VOLNAME BRICK
# gluster volume remove-brick test-volume server2:/exp2 Removing brick(s) can result in data loss. Do you want to Continue? (y/n)
Remove Brick successful
# gluster volume info
# gluster volume info Volume Name: test-volume Type: Distribute Status: Started Number of Bricks: 3 Bricks: Brick1: server1:/exp1 Brick3: server3:/exp3 Brick4: server4:/exp4
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 start Replace brick start operation successful
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK pause
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 pause Replace brick pause operation successful
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK abort
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 abort Replace brick abort operation successful
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK status
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 status Current File = /usr/src/linux-headers-2.6.31-14/block/Makefile Number of files migrated = 10567 Migration complete
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit replace-brick commit successful
# gluster volume info VOLNAME
# gluster volume info test-volume Volume Name: test-volume Type: Replicate Status: Started Number of Bricks: 4 Transport-type: tcp Bricks: Brick1: server1:/exp1 Brick2: server2:/exp2 Brick3: server4:/exp4 Brick4: server5:/exp5
The new volume details are displayed.
# gluster volume rebalance VOLNAME status
# gluster volume rebalance test-volume status Rebalance in progress: rebalanced 399 files of size 302047 (total files scanned 765)
# gluster volume rebalance test-volume status Rebalance completed: rebalanced 3107 files of size 1428745 (total files scanned 6000)
# gluster volume rebalance test-volume status Rebalance completed!
# gluster volume rebalance VOLNAME stop
# gluster volume rebalance test-volume stop Stopping rebalance on volume test-volume has been successful
The gluster volume rebalance VOLNAME fix-layout start command will fix the layout information so that the files can actually go to the newly added nodes too. When this command is issued, all the file stat information which is already cached will get revalidated.
Use the gluster volume rebalance VOLNAME migrate-data start command to rebalance data among the servers.
# gluster volume rebalance VOLNAME fix-layout start
# gluster volume rebalance test-volume fix-layout start Starting rebalance on volume test-volume has been successful
Use the gluster volume rebalance VOLNAME migrate-data start command to rebalance data among the servers. For effective data rebalancing, you should fix the layout first.
# gluster volume rebalance VOLNAME migrate-data start
# gluster volume rebalance test-volume migrate-data start Starting rebalancing on volume test-volume has been successful
# gluster volume rebalance VOLNAME start
# gluster volume rebalance test-volume start Starting rebalancing on volume test-volume has been successful
# gluster volume stop VOLNAME
# gluster volume stop test-volume Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
Enter y to confirm the operation. The output of the command displays the following:
Stopping volume test-volume has been successful
# gluster volume delete VOLNAME
# gluster volume delete test-volume Deleting volume will erase all information about the volume. Do you want to continue? (y/n)
Enter y to confirm the operation. The command displays the following:
Deleting volume test-volume has been successful
# find gluster-mount -noleaf -print0 | xargs --null stat >/dev/null
Replicated Volumes | Geo-replication |
---|---|
Mirrors data across clusters | Mirrors data across geographically distributed clusters |
Provides high-availability | Ensures backing up of data for disaster recovery |
Synchronous replication (each and every file operation is sent across all the bricks) | Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences) |
# ssh-keygen -f /etc/glusterd/geo-replication/secret.pem
Copy /etc/glusterd/geo-replication/secret.pem.pub to the ~georep-user/.ssh/authorized_keys file on the slave.
Create the ~georep-user/.ssh/authorized_keys file if it does not exist, so that only georep-user has permission to access the .ssh directory and its subdirectories. As of now, georep-user must be a superuser or an alias of it, but this restriction will be removed in a future release.
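One way to append the public key, assuming password-based SSH access to the slave is still available (the slave host example.com is reused from the examples below):
# cat /etc/glusterd/geo-replication/secret.pem.pub | ssh georep-user@example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'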
# gluster volume geo-replication MASTER SLAVE start
# gluster volume geo-replication Volume1 example.com:/data/remote_dir start Starting geo-replication session between Volume1 example.com:/data/remote_dir has been successful
# gluster volume geo-replication MASTER SLAVE status
# gluster volume geo-replication Volume1 example.com:/data/remote_dir status
# gluster volume geo-replication Volume1 example.com:/data/remote_dir status MASTER SLAVE STATUS ______ ______________________________ ____________ Volume1 root@example.com:/data/remote_dir Starting....
# gluster volume geo-replication Volume1 example.com:/data/remote_dir status MASTER SLAVE STATUS ______ ______________________________ ____________ Volume1 root@example.com:/data/remote_dir Starting....
# gluster volume geo-replication MASTER SLAVE status
# gluster volume geo-replication Volume1 example.com:/data/remote_dir status
# gluster volume geo-replication MASTER status
# gluster volume geo-replication Volume1 example.com:/data/remote_dir status MASTER SLAVE STATUS ______ ______________________________ ____________ Volume1 ssh://root@example.com:gluster://127.0.0.1:remove_volume OK Volume1 ssh://root@example.com:file:///data/remote_dir OK
# gluster volume geo-replication MASTER SLAVE config [options]
# gluster volume geo-replication Volume1 example.com:/data/remote_dir config
# gluster volume geo-replication MASTER SLAVE stop
# gluster volume geo-replication Volume1 example.com:/data/remote_dir stop Stopping geo-replication session between Volume1 and example.com:/data/remote_dir has been successful
machine1# gluster volume info Type: Distribute Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: machine1:/export/dir16 Brick2: machine2:/export/dir16 Options Reconfigured: geo-replication.indexing: on
# gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status MASTER SLAVE STATUS ______ ______________________________ ____________ Volume1 root@example.com:/data/remote_dir OK
client# ls /mnt/gluster | wc -l 100
example.com# ls /data/remote_dir/ | wc -l 100
# gluster volume geo-replication Volume1 root@example.com:/data/remote_dir status MASTER SLAVE STATUS ______ ______________________________ ____________ Volume1 root@example.com:/data/remote_dir Faulty
client# ls /mnt/gluster | wc -l 52
example.com# ls /data/remote_dir/ | wc -l 100
# gluster volume geo-replication MASTER SLAVE stop
machine1# gluster volume geo-replication Volume1 example.com:/data/remote_dir stop Stopping geo-replication session between Volume1 & example.com:/data/remote_dir has been successful
Run the gluster volume geo-replication MASTER SLAVE stop command on all active geo-replication sessions of the master volume.
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK start
machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 start Replace-brick started successfully
# gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit force
machine1# gluster volume replace-brick Volume1 machine2:/export/dir16 machine3:/export/dir16 commit force Replace-brick commit successful
# gluster volume info VOLNAME
machine1# gluster volume info Volume Name: Volume1 Type: Distribute Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: machine1:/export/dir16 Brick2: machine3:/export/dir16 Options Reconfigured: geo-replication.indexing: on
example.com# rsync -PavhS --xattrs --ignore-existing /data/remote_dir/ client:/mnt/gluster
client# ls | wc -l 100
example.com# ls /data/remote_dir/ | wc -l 100
# gluster volume geo-replication MASTER SLAVE start
machine1# gluster volume geo-replication Volume1 example.com:/data/remote_dir start Starting geo-replication session between Volume1 & example.com:/data/remote_dir has been successful
# gluster volume geo-replication MASTER SLAVE stop
# gluster volume set MASTER geo-replication.indexing off
# gluster volume geo-replication MASTER SLAVE start
# gluster volume profile VOLNAME start
# gluster volume profile test-volume start Profiling started on test-volume
diagnostics.count-fop-hits: on diagnostics.latency-measurement: on
# gluster volume profile VOLNAME info
# gluster volume profile test-volume info Brick: Test:/export/2 Cumulative Stats: Block 1b+ 32b+ 64b+ Size: Read: 0 0 0 Write: 908 28 8 Block 128b+ 256b+ 512b+ Size: Read: 0 6 4 Write: 5 23 16 Block 1024b+ 2048b+ 4096b+ Size: Read: 0 52 17 Write: 15 120 846 Block 8192b+ 16384b+ 32768b+ Size: Read: 52 8 34 Write: 234 134 286 Block 65536b+ 131072b+ Size: Read: 118 622 Write: 1341 594 %-latency Avg- Min- Max- calls Fop latency Latency Latency ___________________________________________________________ 4.82 1132.28 21.00 800970.00 4575 WRITE 5.70 156.47 9.00 665085.00 39163 READDIRP 11.35 315.02 9.00 1433947.00 38698 LOOKUP 11.88 1729.34 21.00 2569638.00 7382 FXATTROP 47.35 104235.02 2485.00 7789367.00 488 FSYNC ------------------ ------------------ Duration : 335 BytesRead : 94505058 BytesWritten : 195571980
# gluster volume profile VOLNAME stop
# gluster volume profile test-volume stop
Profiling stopped on test-volume
# gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt cnt]
For example, to view the open fd count and maximum fd count on brick server:/export of test-volume and list the top 10 open calls:
# gluster volume top test-volume open brick server:/export/ list-cnt 10
Brick: server:/export/dir1
Current open fd's: 34 Max open fd's: 209
==========Open file stats======== open file name call count 2 /clients/client0/~dmtmp/PARADOX/ COURSES.DB 11 /clients/client0/~dmtmp/PARADOX/ ENROLL.DB 11 /clients/client0/~dmtmp/PARADOX/ STUDENTS.DB 10 /clients/client0/~dmtmp/PWRPNT/ TIPS.PPT 10 /clients/client0/~dmtmp/PWRPNT/ PCBENCHM.PPT 9 /clients/client7/~dmtmp/PARADOX/ STUDENTS.DB 9 /clients/client1/~dmtmp/PARADOX/ STUDENTS.DB 9 /clients/client2/~dmtmp/PARADOX/ STUDENTS.DB 9 /clients/client0/~dmtmp/PARADOX/ STUDENTS.DB 9 /clients/client8/~dmtmp/PARADOX/ STUDENTS.DB
# gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt cnt]
# gluster volume top test-volume read brick server:/export list-cnt 10
Brick: server:/export/dir1
==========Read file stats======== read filename call count 116 /clients/client0/~dmtmp/SEED/LARGE.FIL 64 /clients/client0/~dmtmp/SEED/MEDIUM.FIL 54 /clients/client2/~dmtmp/SEED/LARGE.FIL 54 /clients/client6/~dmtmp/SEED/LARGE.FIL 54 /clients/client5/~dmtmp/SEED/LARGE.FIL 54 /clients/client0/~dmtmp/SEED/LARGE.FIL 54 /clients/client3/~dmtmp/SEED/LARGE.FIL 54 /clients/client4/~dmtmp/SEED/LARGE.FIL 54 /clients/client9/~dmtmp/SEED/LARGE.FIL 54 /clients/client8/~dmtmp/SEED/LARGE.FIL
# gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest write calls on brick server:/export of test-volume:
# gluster volume top test-volume write brick server:/export list-cnt 10
Brick: server:/export/dir1
==========Write file stats======== write call count filename 83 /clients/client0/~dmtmp/SEED/LARGE.FIL 59 /clients/client7/~dmtmp/SEED/LARGE.FIL 59 /clients/client1/~dmtmp/SEED/LARGE.FIL 59 /clients/client2/~dmtmp/SEED/LARGE.FIL 59 /clients/client0/~dmtmp/SEED/LARGE.FIL 59 /clients/client8/~dmtmp/SEED/LARGE.FIL 59 /clients/client5/~dmtmp/SEED/LARGE.FIL 59 /clients/client4/~dmtmp/SEED/LARGE.FIL 59 /clients/client6/~dmtmp/SEED/LARGE.FIL 59 /clients/client3/~dmtmp/SEED/LARGE.FIL
# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt cnt]
# gluster volume top test-volume opendir brick server:/export list-cnt 10
Brick: server:/export/dir1
==========Directory open stats======== Opendir count directory name 1001 /clients/client0/~dmtmp 454 /clients/client8/~dmtmp 454 /clients/client2/~dmtmp 454 /clients/client6/~dmtmp 454 /clients/client5/~dmtmp 454 /clients/client9/~dmtmp 443 /clients/client0/~dmtmp/PARADOX 408 /clients/client1/~dmtmp 408 /clients/client7/~dmtmp 402 /clients/client4/~dmtmp
# gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest directory read calls on brick server:/export of test-volume:
# gluster volume top test-volume readdir brick server:/export list-cnt 10
Brick: server:/export/dir1
==========Directory readdirp stats======== readdirp count directory name 1996 /clients/client0/~dmtmp 1083 /clients/client0/~dmtmp/PARADOX 904 /clients/client8/~dmtmp 904 /clients/client2/~dmtmp 904 /clients/client6/~dmtmp 904 /clients/client5/~dmtmp 904 /clients/client9/~dmtmp 812 /clients/client1/~dmtmp 812 /clients/client7/~dmtmp 800 /clients/client4/~dmtmp
==========Read throughput file stats======== read filename Time through put(MBp s) 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 TRIDOTS.POT 15:38:36.894610 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 PCBENCHM.PPT 15:38:39.815310 2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:52:53.631499 2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:38:36.926198 2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 LARGE.FIL 15:38:36.930445 2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 COURSES.X04 15:38:40.549919 2221.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 STUDENTS.VAL 15:52:53.298766 2221.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 COURSES.DB 15:39:11.776780 2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:39:10.251764 2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 BASEMACH.DOC 15:39:09.336572
This command will initiate a dd for the specified count and block size and measure the corresponding throughput.
# gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt]
# gluster volume top test-volume read-perf bs 256 count 1 brick server:/export/ list-cnt 10
Brick: server:/export/dir1 256 bytes (256 B) copied, Throughput: 4.1 MB/s
==========Read throughput file stats======== read filename Time through put(MBp s) 2912.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 TRIDOTS.POT 15:38:36.896486 2570.00 /clients/client0/~dmtmp/PWRPNT/ -2011-01-31 PCBENCHM.PPT 15:38:39.815310 2383.00 /clients/client2/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:52:53.631499 2340.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:38:36.926198 2299.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 LARGE.FIL 15:38:36.930445 2259.00 /clients/client0/~dmtmp/PARADOX/ -2011-01-31 COURSES.X04 15:38:40.549919 2221.00 /clients/client9/~dmtmp/PARADOX/ -2011-01-31 STUDENTS.VAL 15:52:53.298766 2221.00 /clients/client8/~dmtmp/PARADOX/ -2011-01-31 COURSES.DB 15:39:11.776780 2184.00 /clients/client3/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:39:10.251764 2184.00 /clients/client5/~dmtmp/WORD/ -2011-01-31 BASEMACH.DOC 15:39:09.336572
# gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt cnt]
For example, to view the write performance on brick server:/export/ of test-volume, with a 256 byte block size, a count of 1, and a list count of 10:
# gluster volume top test-volume write-perf bs 256 count 1 brick server:/export/ list-cnt 10
Brick: server:/export/dir1
256 bytes (256 B) copied, Throughput: 2.8 MB/s
==========Write throughput file stats======== write filename Time throughput (MBps) 1170.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 SMALL.FIL 15:39:09.171494 1008.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 LARGE.FIL 15:39:09.73189 949.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:38:36.927426 936.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 LARGE.FIL 15:38:36.933177 897.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:39:09.33628 897.00 /clients/client6/~dmtmp/SEED/ -2011-01-31 MEDIUM.FIL 15:39:09.27713 885.00 /clients/client0/~dmtmp/SEED/ -2011-01-31 SMALL.FIL 15:38:36.924271 528.00 /clients/client5/~dmtmp/SEED/ -2011-01-31 LARGE.FIL 15:39:09.81893 516.00 /clients/client6/~dmtmp/ACCESS/ -2011-01-31 FASTENER.MDB 15:39:01.797317
# gluster volume quota VOLNAME enable
# gluster volume quota test-volume enable Quota is enabled on /test-volume
# gluster volume quota VOLNAME disable
# gluster volume quota test-volume disable Quota translator is disabled on /test-volume
# gluster volume quota VOLNAME limit-usage /directory limit-value
# gluster volume quota test-volume limit-usage /data 10GB Usage limit has been set on /data
# gluster volume quota VOLNAME list
# gluster volume quota test-volume list
Path__________Limit______Set Size
/Test/data 10 GB 6 GB
/Test/data1 10 GB 4 GB
# gluster volume quota VOLNAME list /directory-name
# gluster volume quota test-volume list /data
Path__________Limit______Set Size
/Test/data 10 GB 6 GB
# gluster volume set VOLNAME features.quota-timeout value
# gluster volume set test-volume features.quota-timeout 5 Set volume successful
# gluster volume quota VOLNAME remove /directory-name
# gluster volume quota test-volume remove /data Usage limit set on /data is removed
For example, user john creates a file but wants to allow access only to the user antony (even though there are other users that belong to the group john).
# mount -o acl device-name partition
# mount -o acl /dev/sda1 /export1
/etc/fstab
file, add the following entry for the partition to include the POSIX ACLs option:
LABEL=/work /export1 ext3 rw,acl 1 4
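After editing /etc/fstab, the new option can be applied without a reboot by remounting the partition, for example:
# mount -o remount /export1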
# mount -t glusterfs -o acl servername:volume-id mount-point
# mount -t glusterfs -o acl 198.192.198.234:glustervolume /mnt/gluster
# setfacl -m entry-type file
Permissions must be a combination of the characters r (read), w (write), and x (execute). You must specify the ACL entry in the following format and can specify multiple entry types separated by commas.
ACL Entry | Description |
---|---|
u:uid:<permission> | Sets the access ACLs for a user. You can specify user name or UID |
g:gid:<permission> | Sets the access ACLs for a group. You can specify group name or GID. |
m:<permission> | Sets the effective rights mask. The mask is the combination of all access permissions of the owning group and all of the user and group entries. |
o:<permission> | Sets the access ACLs for users other than the ones in the group for the file. |
# setfacl -m u:antony:rw /mnt/gluster/data/testfile
# setfacl -m --set entry-type directory
# setfacl -m --set o::r /mnt/gluster/data
# getfacl path/filename
# getfacl /mnt/gluster/data/test/sample.jpg # owner: antony # group: antony user::rw- group::rw- other::r--
# getfacl directory-name
# getfacl /mnt/gluster/data/doc # owner: antony # group: antony user::rw- user:john:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:antony:rwx default:group::r-x default:mask::rwx default:other::r-x
# setfacl -x ACL-entry-type file
# setfacl -x u:antony /mnt/gluster/data/test-file
Samba is compiled with the --with-acl-support option, so no special flags are required when accessing or mounting a Samba share.
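A minimal smb.conf share sketch for exporting the FUSE-mounted volume over Samba (the share name and path are assumptions; adjust them to your mount point):
[gluster-data]
path = /mnt/gluster/data
read only = no
guest ok = no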
# gluster volume log filename VOLNAME DIRECTORY
# gluster volume log filename test-volume /var/log/test-volume/ log filename : successful
# gluster volume log locate VOLNAME
# gluster volume log locate test-volume log file location: /var/log/test-volume
# gluster volume log rotate test-volume log rotate successful
# gluster volume geo-replication MASTER SLAVE config log-file
# gluster volume geo-replication Volume1 example.com:/data/remote_dir config log-file
# gluster volume geo-replication Volume1 example.com:/data/remote_dir config session-owner 5f6e5200-756f-11e0-a1f0-0800200c9a66
# gluster volume geo-replication /data/remote_dir config log-file /var/log/gluster/${session-owner}:remote-mirror.log
/var/log/gluster/5f6e5200-756f-11e0-a1f0-0800200c9a66:remote-mirror.log
If GlusterFS is installed in a custom location, configure the gluster-command option to point to the exact location. If gsyncd is located in a custom location on the remote machine, configure the remote-gsyncd-command option to point to the exact place where gsyncd is located.
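For example, a sketch of such a config invocation (the path /usr/local/libexec/glusterfs/gsyncd only illustrates a custom install location):
# gluster volume geo-replication Volume1 example.com:/data/remote_dir config remote-gsyncd /usr/local/libexec/glusterfs/gsyncd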
$ /etc/init.d/portmap start
$ /etc/init.d/rpcbind start
$ rpc.statd
$ /etc/init.d/portmap start
$ /etc/init.d/rpcbind start
$ /etc/init.d/portmap start
$ /etc/init.d/rpcbind start
$ /etc/init.d/nfs-kernel-server stop
$ /etc/init.d/nfs stop
option rpc-auth.addr.namelookup off
$ mount nfsserver:export -o vers=3 mount-point
-D_FILE_OFFSET_BITS=64
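For example, when rebuilding a 32-bit application so that it can handle the 64-bit offsets returned by GlusterFS (the source file name is hypothetical):
$ gcc -D_FILE_OFFSET_BITS=64 -o myapp myapp.c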
Command | Description | |
---|---|---|
Volume | ||
volume info [all | VOLNAME] | Displays information about all volumes, or the specified volume. | |
volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK ... | Creates a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp). | |
volume delete VOLNAME | Deletes the specified volume. | |
volume start VOLNAME | Starts the specified volume. | |
volume stop VOLNAME [force] | Stops the specified volume. | |
volume rename VOLNAME NEW-VOLNAME | Renames the specified volume. | |
volume help | Displays help for the volume command. | |
Brick | ||
volume add-brick VOLNAME NEW-BRICK ... | Adds the specified brick to the specified volume. | |
volume replace-brick VOLNAME (BRICK NEW-BRICK) start | pause | abort | status | Replaces the specified brick. | |
volume remove-brick VOLNAME [(replica COUNT)|(stripe COUNT)] BRICK ... | Removes the specified brick from the specified volume. | |
Rebalance | ||
volume rebalance VOLNAME start | Starts rebalancing the specified volume. | |
volume rebalance VOLNAME stop | Stops rebalancing the specified volume. | |
volume rebalance VOLNAME status | Displays the rebalance status of the specified volume. | |
Log | ||
volume log filename VOLNAME [BRICK] DIRECTORY | Sets the log directory for the corresponding volume/brick. | |
volume log rotate VOLNAME [BRICK] | Rotates the log file for corresponding volume/brick. | |
volume log locate VOLNAME [BRICK] | Locates the log file for corresponding volume/brick. | |
Peer | ||
peer probe HOSTNAME | Probes the specified peer. | |
peer detach HOSTNAME | Detaches the specified peer. | |
peer status | Displays the status of peers. | |
peer help | Displays help for the peer command. | |
Geo-replication | ||
volume geo-replication MASTER SLAVE start | Start geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME. You can specify a local slave volume as :VOLUME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. | |
volume geo-replication MASTER SLAVE stop | Stop geo-replication between the hosts specified by MASTER and SLAVE. You can specify a local master volume as :VOLNAME and a local master directory as /DIRECTORY/SUB-DIRECTORY. You can specify a local slave volume as :VOLNAME and a local slave directory as /DIRECTORY/SUB-DIRECTORY. You can specify a remote slave volume as DOMAIN::VOLNAME and a remote slave directory as DOMAIN:/DIRECTORY/SUB-DIRECTORY. | |
volume geo-replication MASTER SLAVE config [options] | Configure geo-replication options between the hosts specified by MASTER and SLAVE. | |
gluster-command COMMAND | The path where the gluster command is installed. | |
gluster-log-level LOGFILELEVEL | The log level for gluster processes. | |
log-file LOGFILE | The path to the geo-replication log file. | |
log-level LOGFILELEVEL | The log level for geo-replication. | |
remote-gsyncd COMMAND | The path where the gsyncd binary is installed on the remote machine. | |
ssh-command COMMAND | The ssh command to use to connect to the remote machine (the default is ssh). | |
rsync-command COMMAND | The rsync command to use for synchronizing the files (the default is rsync). | |
timeout SECONDS | The timeout period. | |
sync-jobs N | The number of simultaneous files/directories that can be synchronized. | |
volume_id= UID | The command to delete the existing master UID for the intermediate/slave node. | |
Other | ||
help | Display the command options. | |
quit | Exit the gluster command line interface. |
Option | Description |
---|---|
Basic | |
-l=LOGFILE, --log-file=LOGFILE | Files to use for logging (the default is /usr/local/var/log/glusterfs/glusterfs.log). |
-L=LOGLEVEL, --log-level=LOGLEVEL | Logging severity. Valid options are TRACE, DEBUG, INFO, WARNING, ERROR and CRITICAL (the default is INFO). |
--debug | Runs the program in debug mode. This option sets --no-daemon, --log-level to DEBUG, and --log-file to console. |
-N, --no-daemon | Runs the program in the foreground. |
Miscellaneous | |
-?, --help | Displays this help. |
--usage | Displays a short usage message. |
-V, --version | Prints the program version. |
SERVER:EXPORT
myhostname:/exports/myexportdir/
/etc/glusterd/vols/VOLNAME.
Revision History |
---|---|
Revision 1-8 | Mon Dec 20 2011 |
Revision 1-1 | Fri Nov 18 2011 |