
Confirm that all of the storage servers are in the connected state using the peer status command:
rhs01 # gluster peer status
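On a healthy four-node trusted storage pool, the output on rhs01 should list the other three servers, each in the connected state. The output resembles the following; the UUIDs shown here are illustrative placeholders and will differ on your systems:

Number of Peers: 3

Hostname: rhs02
Uuid: 00000000-0000-0000-0000-000000000002
State: Peer in Cluster (Connected)

Hostname: rhs03
Uuid: 00000000-0000-0000-0000-000000000003
State: Peer in Cluster (Connected)

Hostname: rhs04
Uuid: 00000000-0000-0000-0000-000000000004
State: Peer in Cluster (Connected)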
A. Create Distribute-only volumes:
rhs01 # gluster volume create dist2 rhs01:/rhs/brick1/dist2 rhs02:/rhs/brick1/dist2
rhs01 # gluster volume create dist3 rhs01:/rhs/brick1/dist3 rhs02:/rhs/brick1/dist3 rhs03:/rhs/brick1/dist3
rhs01 # gluster volume create dist4 rhs01:/rhs/brick1/dist4 rhs02:/rhs/brick1/dist4 rhs03:/rhs/brick1/dist4 rhs04:/rhs/brick1/dist4
B. Create Replicate volume:
rhs01 # gluster volume create mirror2 replica 2 rhs01:/rhs/brick1/mirror2 rhs02:/rhs/brick1/mirror2
C. Create Distribute-Replicate volume:
rhs01 # gluster volume create mirror4 replica 2 rhs01:/rhs/brick1/mirror4 rhs02:/rhs/brick1/mirror4 rhs03:/rhs/brick1/mirror4 rhs04:/rhs/brick1/mirror4
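A newly created volume must be started before clients can mount it. As a quick sketch, assuming the five volumes created above, the following loop starts each one; gluster volume info then confirms the type, status, and brick layout of a volume:
rhs01 # for vol in dist2 dist3 dist4 mirror2 mirror4; do gluster volume start $vol; done
rhs01 # gluster volume info mirror4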
Mount RHS volumes on client nodes
Mount all of the RHS volumes that you created on all four clients using the following commands:
client01 # mkdir /mnt/dist2 /mnt/dist3 /mnt/dist4 /mnt/mirror2 /mnt/mirror4
client01 # mount -t glusterfs rhs01:/dist2 /mnt/dist2
client01 # mount -t glusterfs rhs01:/dist3 /mnt/dist3
client01 # mount -t glusterfs rhs01:/dist4 /mnt/dist4
client01 # mount -t glusterfs rhs01:/mirror2 /mnt/mirror2
client01 # mount -t glusterfs rhs01:/mirror4 /mnt/mirror4
Repeat these commands on the other three clients.
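The mounts above do not persist across reboots. As a sketch, entries such as the following in /etc/fstab on each client make the mounts persistent; the _netdev option defers mounting until the network is available:
rhs01:/dist2 /mnt/dist2 glusterfs defaults,_netdev 0 0
rhs01:/mirror4 /mnt/mirror4 glusterfs defaults,_netdev 0 0
Add one entry per volume, then verify with mount -a.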
Note
Even though the same RHS server is used to mount all of the volumes, the Native protocol has built-in load balancing. The clients contact the mount server only once, to retrieve the volume information; after that, they communicate with the individual storage servers directly to access the data. Data requests do not have to pass through the mount server.
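Because the mount server is contacted only to fetch the volume information, the dependency on it at mount time can be reduced with the backupvolfile-server mount option, which names an alternate server to query if the primary is unreachable. A sketch (the option name varies by GlusterFS release; newer releases use backup-volfile-servers):
client01 # mount -t glusterfs -o backupvolfile-server=rhs02 rhs01:/dist2 /mnt/dist2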