Create distributed storage with Gluster

If you're looking for Linux-based, hardware-agnostic storage software, check out Gluster, an open source project for creating a distributed filesystem. It provides fast performance, high availability, and horizontal scalability by spreading storage volumes over redundant cluster nodes. Here's how you can build a Gluster distributed storage system yourself.

Gluster builds its storage from what it calls bricks: exported directories allocated on the cluster nodes. Cluster nodes are united in trusted pools that together provide storage services and share disk resources.

Gluster doesn't require any special hardware. You need at least two servers to act as nodes in the filesystem, to provide redundancy and make use of Gluster's essential features for performance and recovery. Each node must have at least 1GB of RAM; more RAM allows in-memory caching for the storage, speeding up I/O operations. Each node needs at least 1Gbps network connectivity and at least two disks: one for the operating system and one or more for the storage. The higher the I/O speed of the storage media, of course, the better the storage performance. Gluster runs best on CentOS but is also supported on other 64-bit Linux distributions; no 32-bit distributions are supported.

Install Gluster from its own yum repository, which always provides the latest Gluster version. First, download Gluster's repository with the command wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo. This command places the repo file in the directory /etc/yum.repos.d/, thus enabling the repo.

Now you can install the packages glusterfs-fuse, the FUSE-based GlusterFS client, and glusterfs-server, the server daemon, with the command yum install glusterfs{-fuse,-server}.

Finally, start Gluster's daemon for the first time with the command service glusterd start. To make it start and stop automatically with the system, run the command chkconfig glusterd on.

Follow the same process on each Gluster server.
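
Putting those steps together, the complete setup sequence for each node looks like this (CentOS, run as root):

  # Enable the Gluster yum repository
  wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
  # Install the client and server packages
  yum install glusterfs-fuse glusterfs-server
  # Start the daemon now, and on every boot
  service glusterd start
  chkconfig glusterd on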

Configure the network and the firewall

For optimal performance and security you should run the Gluster cluster inside a private and secured network. In such a network you can disable the default CentOS firewall because the network should be accessible only by trusted clients. To do this, use the command chkconfig iptables off && service iptables stop.

If you cannot provide a private network for the storage cluster, you can use iptables to allow only predefined clients to access Gluster's services. To do so, allow the following with iptables:

  • RPC incoming connectivity – Gluster's processes communicate using RPC, so RPC's port, TCP and UDP port 111, has to be allowed for incoming connections. Use the commands iptables -I INPUT -m state --state NEW -m tcp -p tcp -s X.X.X.X/24 --dport 111 -j ACCEPT and iptables -I INPUT -m state --state NEW -m udp -p udp -s X.X.X.X/24 --dport 111 -j ACCEPT.
  • Gluster's own services – For Gluster to access its bricks on each cluster node, you have to allow TCP ports 24007, 24008, and 24009, plus one additional port, in sequence, for each brick across all volumes. Thus if you have two bricks in the cluster, you have to allow ports 24007 through 24011 (24009 plus 2). The command for this is iptables -A INPUT -m state --state NEW -m tcp -p tcp -s X.X.X.X/24 --dport 24007:24011 -j ACCEPT.
  • Other access – Allow other remote connections if you want to use NFS, iSCSI, or other connectivity. You don't have to allow other ports if you plan to mount volumes with the native GlusterFS filesystem.

The above rules allow all cluster nodes and clients from the class C network X.X.X.X/24 to communicate. You can also apply these rules on a per-host basis by using each host's IP address Y.Y.Y.Y instead of the X.X.X.X/24 network.

Don't forget to save the iptables configuration once you've added the new rules by using the command service iptables save.
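
Collected in one place, and assuming a two-brick cluster reachable from the storage network X.X.X.X/24 (a placeholder), the firewall setup looks like this:

  # Allow RPC (TCP and UDP port 111) from the storage network
  iptables -I INPUT -m state --state NEW -m tcp -p tcp -s X.X.X.X/24 --dport 111 -j ACCEPT
  iptables -I INPUT -m state --state NEW -m udp -p udp -s X.X.X.X/24 --dport 111 -j ACCEPT
  # Allow Gluster's management and brick ports (24007-24009 plus one port per brick)
  iptables -A INPUT -m state --state NEW -m tcp -p tcp -s X.X.X.X/24 --dport 24007:24011 -j ACCEPT
  # Persist the rules across reboots
  service iptables save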

Prepare disks for Gluster storage

To comply with best practices, use one disk for the operating system and dedicate one or more others for Gluster's storage only. Also, format the future Gluster storage disk with the XFS filesystem, which is fast, scalable, and reliable for distributed storage.

So far you've installed Gluster on one disk and have just attached another for Gluster storage. The new disk is not formatted and appears under the name /dev/sdb. You can confirm the disk is present with the command fdisk -l, which should show the new disk without a valid partition table.

To prepare the disk, first use fdisk to create a primary partition on the new disk. Run the command fdisk /dev/sdb and select "n" for new partition, "p" for primary, "1" for first. Then write the new partition table to the disk by selecting "w." You should now have a new primary partition under the name /dev/sdb1.

Next, format the new partition with the XFS filesystem with the command mkfs.xfs /dev/sdb1. Create a new directory to be used as a mount point for the /dev/sdb1 partition, such as /export/brick1. Brick1 is a good name because this partition will be used for the first brick, which is to say the first exported directory found on the first node.

Finally, make sure the new partition /dev/sdb1 is automatically mounted at /export/brick1 during system boot by adding a new row to /etc/fstab: /dev/sdb1 /export/brick1 xfs defaults 1 2.
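
As a recap, the whole disk preparation can be run as follows, assuming the new disk really is /dev/sdb and that you answer fdisk's prompts as described above:

  # Create one primary partition on /dev/sdb (answer n, p, 1, then w)
  fdisk /dev/sdb
  # Format the new partition with XFS
  mkfs.xfs /dev/sdb1
  # Create the mount point and mount the brick
  mkdir -p /export/brick1
  mount /dev/sdb1 /export/brick1
  # Mount the brick automatically on every boot
  echo "/dev/sdb1 /export/brick1 xfs defaults 1 2" >> /etc/fstab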

Set trusted storage pools

Gluster unites cluster nodes into a trusted pool, and all nodes inside this pool share disk resources and offer storage services. Acting as one logical entity, a trusted pool provides redundancy, better storage performance, and scalability.

To manage a new trusted pool, and in fact manage all parts of Gluster, use the Gluster console manager at /usr/sbin/gluster. This command-line tool accepts arguments for options; there is no GUI.

A storage pool is created automatically when you install and start Gluster on the first cluster node. From the first node you can add the rest of the nodes after you install and start Gluster on them.

For example, suppose you installed Gluster on a node with the IP address 10.1.1.1 and you want to add to the pool a node with IP 10.1.1.2. To accomplish this use the Gluster console manager with the arguments peer probe [host], as in /usr/sbin/gluster peer probe 10.1.1.2.

Similarly, you can remove servers from the storage pool with the command /usr/sbin/gluster peer detach [host].

After adding servers to or removing them from the trusted storage pool, you should check the pool's status. Use the command /usr/sbin/gluster peer status to confirm you have the correct number of peers and their statuses.
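
For example, from the first node (10.1.1.1) the pool operations look like this; the exact status output varies slightly between Gluster versions:

  # Add the second node to the trusted pool
  /usr/sbin/gluster peer probe 10.1.1.2
  # Verify the pool; expect one peer, connected
  /usr/sbin/gluster peer status
  # If a node ever has to leave the pool:
  # /usr/sbin/gluster peer detach 10.1.1.2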

How to manage Gluster volumes

A Gluster volume is a logical entity composed of bricks, which are exported directories from servers inside the trusted pool. One pool can support numerous volumes. You can start managing Gluster volumes as soon as you have a trusted storage pool set up.

Gluster supports many advanced options for its volumes. For instance, you can create a volume that is distributed, striped, and replicated at once. Such a volume performs well: because it is distributed, reads and writes happen on multiple servers, and because it is striped, large files are split across multiple bricks and servers and thus accessed faster. High availability comes from replication: the volume's data is redundant, so if one node fails another covers for it and there is no interruption in service.

To create a distributed, striped, and replicated volume use the command /usr/sbin/gluster volume create volumename [stripe count] [replica count] host:/brickname .... Keep in mind that the number of bricks must be a multiple of the stripe count times the replica count, so a stripe 2 replica 2 volume needs at least four bricks. An example of the command would look like: /usr/sbin/gluster volume create testvolume stripe 2 replica 2 10.1.1.1:/export/brick1 10.1.1.2:/export/brick2 10.1.1.3:/export/brick3 10.1.1.4:/export/brick4. This creates a volume called "testvolume" striped across two replica sets, with each file's data replicated twice. The volume consists of four nodes, each contributing one brick.
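
Here is a sketch of creating and activating such a volume. It assumes the trusted pool has been extended to four nodes (10.1.1.3 and 10.1.1.4 are hypothetical additions), each exporting one brick prepared as described earlier; note that a new volume must be started before clients can mount it:

  # Four bricks satisfy stripe 2 x replica 2; consecutive bricks
  # form replica pairs, so each pair spans two different nodes
  /usr/sbin/gluster volume create testvolume stripe 2 replica 2 \
      10.1.1.1:/export/brick1 10.1.1.2:/export/brick2 \
      10.1.1.3:/export/brick3 10.1.1.4:/export/brick4
  # Activate the volume so clients can mount it
  /usr/sbin/gluster volume start testvolume
  # Review the volume's type, status, and brick layout
  /usr/sbin/gluster volume info testvolume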

Once you have a Gluster volume you can easily and seamlessly change its size without interrupting storage operations. For example, to increase the size of a volume you can add more cluster nodes with more bricks attached to them. First, add each additional node with the command /usr/sbin/gluster peer probe host. Then use the command /usr/sbin/gluster volume add-brick volumename host:/brickname, keeping in mind that bricks must be added in multiples that match the volume's stripe and replica counts. Finally, rebalance and redistribute the volume evenly across all the bricks with the command /usr/sbin/gluster volume rebalance volumename start.
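
For instance, growing the striped and replicated volume above means adding four bricks at a time (stripe 2 times replica 2); the new nodes and bricks below are hypothetical:

  # Join each new node to the trusted pool first
  /usr/sbin/gluster peer probe 10.1.1.5     # repeat for 10.1.1.6 through 10.1.1.8
  # Add four new bricks to grow the volume
  /usr/sbin/gluster volume add-brick testvolume \
      10.1.1.5:/export/brick5 10.1.1.6:/export/brick6 \
      10.1.1.7:/export/brick7 10.1.1.8:/export/brick8
  # Spread the existing data evenly across old and new bricks
  /usr/sbin/gluster volume rebalance testvolume start
  # Check progress later with:
  # /usr/sbin/gluster volume rebalance testvolume status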

You need to keep a few concepts in mind for proper volume management and configuration. First, avoid placing more than one brick of the same replica set on a single server; keeping one brick per server in a given volume is the simplest way to guarantee that a crashed node is covered by its replicating peer. Second, the number of bricks should be a multiple of the replica number – that is, how many times data should be replicated – so that valid replication can be established. For example, if you want data to be replicated twice, you should build your cluster from any number of bricks that's a multiple of two. Last, in a striped volume the stripe count times the replica count should divide the number of bricks evenly, so that you gain performance by reading from and writing to multiple locations simultaneously.

Connect a client to the storage

The best way to connect to Gluster storage is through GlusterFS, which allows clients to take advantage of Gluster's clustering and high reliability features. GlusterFS offers better performance and reliability than non-native communication methods such as NFS or iSCSI, and GlusterFS allows clients to communicate simultaneously with multiple cluster nodes.

GlusterFS is based on Filesystem in Userspace (FUSE). Prior to installing GlusterFS you have to install the fuse and fuse-libs packages. In CentOS these prerequisite packages can be installed from the official repository with the command yum install fuse fuse-libs. You need to run this installation only on the clients from which you will connect to the storage; FUSE is installed automatically along with the Gluster server software.

Restart the client machine after installing the fuse package in order to load the fuse kernel module; if you want to postpone the restart, you can load the module manually with the command modprobe fuse.

Next, to install GlusterFS, you have to add Gluster's repository just as you added it on the cluster nodes, with the command wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo, and install glusterfs-fuse with the command yum install glusterfs-fuse.

Once you've installed all the necessary software you can mount a GlusterFS volume. Use the /bin/mount command with glusterfs as the filesystem type (the -t option) plus a few GlusterFS-specific parameters; for example, mount -t glusterfs -o backupvolfile-server=10.1.1.2 10.1.1.1:/testvolume /media/gluster.

This example mounts the volume "testvolume" from the Gluster node 10.1.1.1. The option backupvolfile-server instructs the client to work with the replicating Gluster node 10.1.1.2 in case 10.1.1.1 fails.
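
End to end, the client setup might look like this sketch; the mount point /media/gluster matches the example above, and the /etc/fstab line is an optional, commonly used pattern for remounting the volume at boot:

  # Install FUSE and load its kernel module (or reboot instead)
  yum install fuse fuse-libs
  modprobe fuse
  # Enable the Gluster repository and install the GlusterFS client
  wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
  yum install glusterfs-fuse
  # Mount the volume, with 10.1.1.2 as the fallback server
  mkdir -p /media/gluster
  mount -t glusterfs -o backupvolfile-server=10.1.1.2 10.1.1.1:/testvolume /media/gluster
  # Optional: mount at every boot via /etc/fstab
  # 10.1.1.1:/testvolume /media/gluster glusterfs defaults,_netdev,backupvolfile-server=10.1.1.2 0 0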

Once the volume is mounted, Linux treats it just as any other disk resource. Multiple clients can mount the same volume and work with it at the same time.

Gluster is a powerful distributed storage system that helps meet the growing need for better Linux-native storage. It's easy to implement and work with, limited only by the system resources you can dedicate to it, and doesn't require you to buy any expensive vendor-specific hardware.




This work is licensed under a Creative Commons Attribution 3.0 Unported License.
