InnoDB stores table data in tbl_name.ibd and the table definition in tbl_name.frm. If you are using MySQL 5.6.x or higher, innodb_file_per_table is enabled by default. This method works even if you have lost the original ibdata1 file.
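If you want to confirm the setting on a running server first, one quick check is the following (assuming root access; adjust the credentials to your setup):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table'"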
We will need MySQL Server 5.6.x or higher and mysqlfrm from the mysql-utilities package, which can extract the table definition and generate a CREATE TABLE statement from a .frm file.
First, enable the Oracle MySQL yum repository for CentOS 7/RHEL 7. Note that the rpm below was the latest at the time of writing; please check for the latest release every time.
rpm -Uvh http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm
Install the required packages:
yum -y install mysql-server mysql-utilities
The location of my backed-up database is /data-backup/sohail_wp/. Now I will recover the CREATE TABLE statements from the .frm files using the mysqlfrm utility.
a) Recover the CREATE TABLE statement for a single table.
# mysqlfrm --server=root:root123@localhost --port=3307 --user=root /data-backup/sohail_wp/wp_users.frm
# Source on localhost: ... connected.
# Spawning server with --user=root.
# Starting the spawned server on port 3307 ... done.
# Reading .frm files
#
# Reading the wp_users.frm file.
#
# CREATE statement for wp_users.frm:
#

CREATE TABLE `wp_users` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `user_login` varchar(60) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_pass` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_nicename` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_email` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_url` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_registered` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `user_activation_key` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  `user_status` int(11) NOT NULL DEFAULT '0',
  `display_name` varchar(250) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  PRIMARY KEY (`ID`),
  KEY `user_login_key` (`user_login`),
  KEY `user_nicename` (`user_nicename`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

#...done.
b) Recover the CREATE TABLE statements for all tables.
# mysqlfrm --server=root:root123@localhost --port=3307 --user=root /data-backup/sohail_wp/ > /tmp/create-table.sql
c) Run the following command to replace empty lines with semicolons (;) in the create-table.sql file. This is a quick and dirty way to make the CREATE TABLE statements executable by importing the file at the mysql prompt; otherwise you would have to add a semicolon after every CREATE TABLE statement one by one.
sed 's/^$/;/' /tmp/create-table.sql > /tmp/create-table-complete.sql
d) Create a database with the same name as the original; in my case it is sohail_wp.
mysql> create database sohail_wp;
Query OK, 1 row affected (0.00 sec)
e) Re-create all tables inside the sohail_wp database from the create-table-complete.sql file.
mysql> source /tmp/create-table-complete.sql
The above command will create all tables inside the default MySQL datadir location, i.e. /var/lib/mysql/sohail_wp, with the file names tbl_name.frm and tbl_name.ibd.
f) Since these are newly created, empty tables, we now discard their tablespaces so that we can import our data from the backed-up tbl_name.ibd files.
mysql> ALTER TABLE table_name DISCARD TABLESPACE;
You need to run the ALTER TABLE command above for each table. It can be scripted from the command line:
# mysql -u root -p -e "show tables from sohail_wp" | grep -v Tables_in_sohail_wp | while read a; do mysql -u root -p -e "ALTER TABLE sohail_wp.$a DISCARD TABLESPACE"; done
The above command will delete all *.ibd files from the /var/lib/mysql/sohail_wp/ directory; they contained no data since the tables were newly created. We will replace them with the backup.
g) Copy only the *.ibd files from the old backup.
# cp /data-backup/sohail_wp/*.ibd /var/lib/mysql/sohail_wp/
Here we copied all the *.ibd files from the backup, which contain the actual table data from our old database.
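Depending on how you copy the files, they may end up owned by root. MySQL must be able to read the .ibd files, so it is usually worth resetting ownership before the import (assuming the default mysql user and group):

chown -R mysql:mysql /var/lib/mysql/sohail_wp/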
h) Now it is time to import the tablespaces from the copied *.ibd files.
mysql> ALTER TABLE table_name IMPORT TABLESPACE;
For all tables you can use the script below, modified as per your requirements.
# mysql -u root -p -e "show tables from sohail_wp" | grep -v Tables_in_sohail_wp | while read a; do mysql -u root -p -e "ALTER TABLE sohail_wp.$a IMPORT TABLESPACE"; done
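After the import it is worth spot-checking a table to confirm the data is readable; any table will do, wp_users is just the example from earlier:

mysql -u root -p -e "SELECT COUNT(*) FROM sohail_wp.wp_users"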
This helped me restore/recover my InnoDB database. It might help others too; please use the comments for questions.
Our setup contains Exim + Dovecot running IMAP on a cPanel/WHM dedicated server. Dovecot IMAP uses the maildir format, and the default location for storing mail is /home/vmail/$domain/$user.
The following lines should be inserted in the routers configuration area inside /etc/exim/exim.conf.
######################################################################
#                      ROUTERS CONFIGURATION                         #
#
#..
# This router delivers an auto-reply "vacation" message if a file called
# '.vacation.msg' exists in the user's home maildir.

uservacation:
  driver = redirect
  domains = +local_domains
  allow_filter
  user = vmail
  group = vmail
  file = /home/vmail/${domain}/${local_part}/.vacation.msg
  require_files = /home/vmail/${domain}/${local_part}/.vacation.msg
  # do not reply to errors or lists
  condition = ${if or { \
                 {match {$h_precedence:} {(?i)junk|bulk|list}} \
                 {eq {$sender_address} {}} \
               } {no} {yes}}
  # do not reply to errors or bounces or lists
  senders = ! ^.*-request@.*:\
            ! ^bounce-.*@.*:\
            ! ^.*-bounce@.*:\
            ! ^owner-.*@.*:\
            ! ^postmaster@.*:\
            ! ^webmaster@.*:\
            ! ^listmaster@.*:\
            ! ^mailer-daemon@.*:\
            ! ^root@.*
  no_expn
  reply_transport = uservacation_transport
  unseen
  no_verify
The above configuration instructs Exim to route a message through the autoreply transport if a .vacation.msg file is found in the user's home maildir. You can edit the maildir location to match your own layout.
The following lines should be inserted in the transports configuration area inside /etc/exim/exim.conf.
######################################################################
#                      TRANSPORTS CONFIGURATION                      #
#...

uservacation_transport:
  driver = autoreply
  user = vmail
  group = vmail
  file = /home/vmail/${domain}/${local_part}/.vacation.msg
  file_expand
  once = /home/vmail/${domain}/${local_part}/.vacation.db
  # to use a flat file instead of a db specify once_file_size
  #once_file_size = 2K
  once_repeat = 14d
  from = ${local_part}@${domain}
  to = ${sender_address}
  subject = "Re: $h_subject"
.vacation.db stores all the mail addresses to which auto-replies have already been sent. You can edit the maildir location to match your own layout.
The following files need to exist inside the user's maildir for a reply to be sent back to the sender. In my setup that is /home/vmail/$domain/$user/.
.vacation.msg
# Exim filter
if ($h_subject: does not contain "SPAM?" and personal) then
    mail
    ##### This is the only thing that a user can set when they       #####
    ##### decide to enable vacation messaging. The vacation.msg.txt  #####
    expand file /home/vmail/${domain}/${local_part}/.vacation.msg.txt
    log /home/vmail/${domain}/${local_part}/.vacation.log
    to $reply_address
    from $local_part\@$domain
    subject "Unmonitored Mailbox [Re: $h_subject:]"
endif
.vacation.msg.txt
Hi,

This is a test reply.

Regards,
These two files in any user's maildir will make it auto-reply.
Please use comments for any questions.
sudo cobbler signature report
Currently loaded signatures:
debian: squeeze
freebsd: 8.2 8.3 9.0
generic: (none)
redhat: fedora16 fedora17 fedora18 rhel4 rhel5 rhel6
suse: opensuse11.2 opensuse11.3 opensuse11.4 opensuse12.1 opensuse12.2
ubuntu: oneiric precise quantal
unix: (none)
vmware: esx4 esxi4 esxi5
windows: (none)
9 breeds with 21 total signatures loaded

sudo cobbler signature report --name=redhat
Currently loaded signatures:
redhat: fedora16 fedora17 fedora18 rhel4 rhel5 rhel6
Breed 'redhat' has 3 total signatures
If your cluster is connected to the internet, you can update the signature file straight away using the following command; it will fetch the latest signatures.
sudo cobbler signature update
task started: 2015-10-21_222926_sigupdate
task started (id=Updating Signatures, time=Wed Oct 21 22:29:26 2015)
Successfully got file from http://cobbler.github.com/signatures/latest.json
*** TASK COMPLETE ***
But if, like mine, your cluster is not directly connected to the internet, you will need to download the latest signature file from the following website and upload it to the proper location. Follow the steps below.
Download the signature file on your workstation:
wget http://cobbler.github.com/signatures/latest.json
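Optionally, sanity-check that the download is valid JSON before uploading it (this assumes python is available on your workstation):

python -m json.tool latest.json > /dev/null && echo "latest.json is valid JSON"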
Upload the signature file to the Cobbler management server:
scp latest.json root@<cobbler-server>:/var/lib/cobbler/distro_signatures.json
Restart the cobbler service for the change to take effect.
sudo /etc/init.d/cobblerd restart
sudo cobbler signature report --name=redhat
Currently loaded signatures:
redhat: cloudlinux6 fedora16 fedora17 fedora18 fedora19 fedora20 fedora21 fedora22 fedora23 rhel4 rhel5 rhel6 rhel7
Breed 'redhat' has 3 total signatures
The RHEL 7 signatures are now available and you can proceed with your install.
mkdir /mnt/{image,work}
mount -o loop RHEL6.5-server.x86_64.iso /mnt/image/
cp /mnt/image/isolinux/initrd.img /mnt/work
Before extracting, rename initrd.img to initrd.img.xz, because it is compressed with xz; decompressing removes the .xz extension and leaves initrd.img again.
cd /mnt/work
mkdir initrd-new
mv initrd.img initrd.img.xz
xz --format=lzma --decompress initrd.img.xz
cd initrd-new
cpio -ivdum < ../initrd.img
I will use the Chelsio driver already installed by the Chelsio installation script. We keep the same directory tree:
cp -r /lib/modules/2.6.32-431.el6.x86_64/updates/drivers/ /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/updates/
I am using the Chelsio driver information here; it may be different for you. You need to confirm which hardware driver you want to insert into initrd.img and pull its information from the modules.* files.
cd /lib/modules/2.6.32-431.el6.x86_64/
egrep 'cxgb4|toecore|t4_tom' modules.symbols >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.symbols
egrep 'cxgb4|toecore|t4_tom' modules.alias >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.alias
egrep 'cxgb4|toecore|t4_tom' modules.dep >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.dep
This will recreate all modules.*.bin files from the driver information added to the modules.* files. It is required because without it the initrd.img will be unable to load the newly inserted driver.
chroot /mnt/work/initrd-new
depmod -a -v
exit
cd /mnt/work/initrd-new
find . -print | cpio -o -H newc | xz --format=lzma > ../initrd.img
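As a quick optional check, you can list the contents of the repacked image and confirm your driver is inside; adjust the module name (cxgb4 here) to the driver you inserted:

xz --format=lzma -dc /mnt/work/initrd.img | cpio -t | grep cxgb4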
Your initrd.img is ready; you can use this new initrd.img to replace the stock initrd.img for a kickstart installation or network boot.
If you have any question please use comments.
Previously, creating a bridge device was an entirely manual process, but KVM now provides a single command to do it all. To create a bridge (br0) using eth0, execute the following command.
virsh iface-bridge eth0 br0
This will create the bridge device file (ifcfg-br0) in /etc/sysconfig/network-scripts/, copy the network settings from eth0 into it, and reconfigure eth0 to use br0.
You will also need to disable NetworkManager in every ifcfg-* file in /etc/sysconfig/network-scripts/. Edit all the files and make sure they contain the following value.
NM_CONTROLLED=no
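For reference, after virsh iface-bridge and the NetworkManager change, the two files typically look something like this; the values shown are illustrative and will differ on your system:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br0 (illustrative)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no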
After this, stop NetworkManager and start the network service so the bridge comes up at startup.
chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start
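You can confirm that the bridge exists and that eth0 is attached to it using brctl from the bridge-utils package:

brctl show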
You can now create or edit your virtual machine to use bridge network device (br0).
GlusterFS is an open source, powerful clustered file system capable of scaling to several petabytes of storage, available to users under a single mount point. It uses existing disk filesystems such as ext3, ext4 and xfs to store data, and clients can access the storage as if it were a local filesystem. A GlusterFS cluster aggregates storage bricks over Infiniband RDMA and/or TCP/IP interconnects into a single global namespace.
We will use the following terms later in this howto, so you should understand them before proceeding.
Hardware
I will use three servers and one client for my GlusterFS installation/configuration. These can be physical or virtual machines. I will be using my virtual environment, and the IPs/hostnames are as follows.
host1.example.com    192.168.0.101
host2.example.com    192.168.0.102
host3.example.com    192.168.0.103
client1.example.com  192.168.0.1
Two partitions are required on each server: the first is used for the OS install and the second for our storage.
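For example, the storage partition on each server could be prepared roughly like this; the device name /dev/sdb1 is only an illustration, and /dist1 is the brick directory used later in this howto (use /rep1, /strip1, etc. as appropriate):

mkfs.ext4 /dev/sdb1
mkdir /dist1
mount /dev/sdb1 /dist1
echo "/dev/sdb1 /dist1 ext4 defaults 0 0" >> /etc/fstab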
Software
For the OS I will be using CentOS 6.3 with GlusterFS 3.3.1. The EPEL repository carries 3.2.7, but we will go with the latest version, i.e. 3.3.1, which is available through GlusterFS's own repository.
First we will add the GlusterFS repo to our yum repositories. To do this, execute the following command.
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
On the servers (host1, host2, host3), execute the following command to install the glusterfs server-side packages.
yum -y install glusterfs glusterfs-fuse glusterfs-server
Start the glusterd service on all servers and enable it to start automatically at boot.
/etc/init.d/glusterd start
chkconfig glusterd on
On the client, execute the following command to install the glusterfs client-side packages.
yum -y install glusterfs glusterfs-fuse
We will use these packages later to mount the glusterfs volumes on the client.
A trusted storage pool is the set of servers that run as gluster servers and provide bricks for volumes. You need to probe all the other servers from host1 (do not probe host1 itself or localhost).
Note: turn off your firewall using the iptables -F command.
We will now join all three servers into a trusted storage pool; the probing is done from host1.
gluster peer probe host2
Probe successful
gluster peer probe host3
Probe successful
Confirm your server status.
gluster peer status
Number of Peers: 2

Hostname: host2
Uuid: b65874ab-4d06-4a0d-bd84-055ff6484efd
State: Peer in Cluster (Connected)

Hostname: host3
Uuid: 182e3214-44a2-46b3-ae79-769af40ec160
State: Peer in Cluster (Connected)
Now it is time to create a glusterfs volume. A volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool.
GlusterFS offers several volume types. I will demonstrate three of them, which should give you enough knowledge to create the remaining types by yourself.
Use distributed volumes where you need to scale storage; in a distributed volume, files are spread randomly across the bricks in the volume.
gluster volume create dist-volume host1:/dist1 host2:/dist2 host3:/dist3
Creation of volume dist-volume has been successful. Please start the volume to access data.
Start the dist-volume
gluster volume start dist-volume
Starting volume dist-volume has been successful
Check status of volume
gluster volume info

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host2:/dist2
Brick3: host3:/dist3
Now, on client1.example.com, we will access and test the distributed volume functionality. To mount gluster volumes and access the data, we will first mount manually and then add an entry to /etc/fstab so the volume mounts automatically whenever the server restarts.
Use mount command to access gluster volume.
mkdir /mnt/distributed
mount.glusterfs host1.sohailriaz.com:/dist-volume /mnt/distributed/
Check it using mount command.
mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.sohailriaz.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.
host1.example.com:/dist-volume /mnt/distributed glusterfs defaults,_netdev 0 0
save the file.
Now, to test, create the following files in the mounted directory.
touch /mnt/distributed/file1
touch /mnt/distributed/file2
touch /mnt/distributed/file3
touch /mnt/distributed/file4
touch /mnt/distributed/file5
touch /mnt/distributed/file6
touch /mnt/distributed/file7
touch /mnt/distributed/file8
Check on the servers for distributed functionality
[root@host1 ~]# ls -l /dist1
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file5
-rw-r--r-- 2 root root 0 May 9 10:51 file6
-rw-r--r-- 2 root root 0 May 9 10:51 file8

[root@host2 ~]# ls -l /dist2
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file3
-rw-r--r-- 2 root root 0 May 9 10:50 file4
-rw-r--r-- 2 root root 0 May 9 10:51 file7

[root@host3 ~]# ls -l /dist3
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file1
-rw-r--r-- 2 root root 0 May 9 10:50 file2
All of the files created in the mounted volume are distributed across the servers.
Use replicated volumes where high availability and high reliability are critical, because replicated volumes keep identical copies of files across multiple bricks in the volume.
gluster volume create rep-volume replica 3 host1:/rep1 host2:/rep2 host3:/rep3
Creation of volume rep-volume has been successful. Please start the volume to access data.
Here replica 3 sets the number of copies to keep across servers; in this case we want the same copy on all three servers.
Start the rep-volume
gluster volume start rep-volume
Starting volume rep-volume has been successful
Check status of volume
gluster volume info rep-volume

Volume Name: rep-volume
Type: Replicate
Volume ID: 0dcf51bc-376a-4bd2-8759-3d47bba49c3d
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1:/rep1
Brick2: host2:/rep2
Brick3: host3:/rep3
Now access the replicated volume the same way as the distributed volume: first mount it manually, then add it to /etc/fstab so it mounts automatically whenever the server restarts.
Use mount command to access gluster volume.
mkdir /mnt/replicated
mount.glusterfs host1.sohailriaz.com:/rep-volume /mnt/replicated/
Check it using mount command.
mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.sohailriaz.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.sohailriaz.com:/rep-volume on /mnt/replicated type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.
host1.example.com:/rep-volume /mnt/replicated glusterfs defaults,_netdev 0 0
Now, to test, create the following files in the mounted directory.
touch /mnt/replicated/file1
touch /mnt/replicated/file2
touch /mnt/replicated/file3
touch /mnt/replicated/file4
touch /mnt/replicated/file5
touch /mnt/replicated/file6
touch /mnt/replicated/file7
touch /mnt/replicated/file8
Check on the servers for replicated functionality
[root@host1 ~]# ls -l /rep1
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8

[root@host2 ~]# ls -l /rep2
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8

[root@host3 ~]# ls -l /rep3
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8
All of the files created in the mounted volume are replicated to all the servers.
Use striped volumes only in high-concurrency environments accessing very large files, because striped volumes stripe data across the bricks in the volume.
gluster volume create strip-volume stripe 3 host1:/strip1 host2:/strip2 host3:/strip3
Creation of volume strip-volume has been successful. Please start the volume to access data.
Start the strip-volume
gluster volume start strip-volume
Starting volume strip-volume has been successful
Check status of volume
gluster volume info strip-volume

Volume Name: strip-volume
Type: Stripe
Volume ID: 2b21ad4b-d464-408b-82e8-df762ef89bcf
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1:/strip1
Brick2: host2:/strip2
Brick3: host3:/strip3
Now mount the striped volume just as you did the distributed and replicated volumes: first mount it manually, then add it to /etc/fstab so it mounts automatically whenever the server restarts.
Use mount command to access gluster volume.
mkdir /mnt/stripped
mount.glusterfs host1.sohailriaz.com:/strip-volume /mnt/stripped/
Check it using mount command.
mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.sohailriaz.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.sohailriaz.com:/rep-volume on /mnt/replicated type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.sohailriaz.com:/strip-volume on /mnt/stripped type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.
host1.example.com:/strip-volume /mnt/stripped glusterfs defaults,_netdev 0 0
Save the file.
Now, to test, create the following large file in the mounted directory on client1.
dd if=/dev/zero of=/mnt/stripped/file.img bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 61.5011 s, 17.0 MB/s

ls -l /mnt/stripped/
total 1024120
-rw-r--r-- 1 root root 1048576000 May 9 11:31 file.img
Check on the servers for striped functionality.
[root@host1 ~]# ls -l /strip1/
total 341416
-rw-r--r-- 2 root root 1048444928 May 9 11:31 file.img

[root@host2 ~]# ls -l /strip2/
total 341416
-rw-r--r-- 2 root root 1048576000 May 9 11:31 file.img

[root@host3 ~]# ls -l /strip3/
total 341288
-rw-r--r-- 2 root root 1048313856 May 9 11:31 file.img

The large file is striped across the volume successfully.
Now we will see some of the common operations/maintenance you might do on gluster volumes.
As needed, we can add bricks to volumes that are already online. Here, as an example, we will add a new brick to our distributed volume. To do this we need the following steps.
First probe a new server, which will provide the new brick for our volume. This has to be done on host1.
gluster peer probe host4
Probe successful
Now add the new brick from the newly probed host4.
gluster volume add-brick dist-volume host4:/dist4
Add Brick successful
Check the volume information using the following command.
gluster volume info

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host2:/dist2
Brick3: host3:/dist3
Brick4: host4:/dist4
You can also shrink volumes as needed while the gluster filesystem is online and available. If one of your bricks becomes unavailable in the volume, for example due to a hardware failure or an unreachable network, and you need to remove it, first start the removal process.
gluster volume remove-brick dist-volume host2:/dist2 start
Remove Brick start successful
Check the status; it should show completed.
gluster volume remove-brick dist-volume host2:/dist2 status
Node       Rebalanced-files      size    scanned    failures    status
---------  ----------------   -------   --------   ---------   ------------
localhost                 0    0Bytes          0           0   not started
host3                     0    0Bytes          0           0   not started
host2                     0    0Bytes          4           0   completed
Commit the brick removal operation.
gluster volume remove-brick dist-volume host2:/dist2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
Check the volume information for confirmation.
gluster volume info dist-volume

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host3:/dist3
Rebalancing needs to be done after expanding or shrinking a volume; this redistributes the data among the remaining servers. To do this, issue the following command.
gluster volume rebalance dist-volume start
Starting rebalance on volume dist-volume has been successful
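You can monitor the progress with the status sub-command; its output columns mirror the remove-brick status shown earlier:

gluster volume rebalance dist-volume status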
To stop a volume
gluster volume stop dist-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume dist-volume has been successful
To delete a volume
gluster volume delete dist-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume dist-volume has been successful
Remember to unmount the mounted directories on your clients first.
First you need to create an SSH key for your user.
ssh-keygen -t dsa -b 1024
-t can be either dsa or rsa; the passphrase is optional and up to you. I will choose no passphrase.
Confirm you have generated the public key.
ls ~/.ssh/
id_dsa  id_dsa.pub
where id_dsa is your private key and id_dsa.pub is your public key. You will need the public key to upload to the Dell CMC.
First make sure you have installed the latest version of the racadm package on your machine. The user for SSH key based authentication on the CMC must be svcacct (service); other users will not work.
Before uploading your SSH key to the Dell CMC, check that no other keys are already defined; you can have up to 6 different keys at one time for svcacct.
To view all keys on your CMC
racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k all -v
Key 1=UNDEFINED
Key 2=UNDEFINED
Key 3=UNDEFINED
Key 4=UNDEFINED
Key 5=UNDEFINED
Key 6=UNDEFINED
Privilege 1=0x0
Privilege 2=0x0
Privilege 3=0x0
Privilege 4=0x0
Privilege 5=0x0
Privilege 6=0x0
To view one key at a time, replace all with a number (1-6) in the -k switch.
racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -v
Key=UNDEFINED
Privilege=0x0
To add a public key, use the following command.
racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -p 0xfff -f ~/.ssh/id_dsa.pub
PK SSH Authentication Key file successfully uploaded to the RAC
where -p sets the privilege (here we are granting full privileges) and -f points to the SSH public key file.
You can also add a public key using the key text instead of a file.
racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -p 0xfff -t "ssh-dss AAAAB3NzaC1kc3MAAACBAKnPsk+000MqdXpZdq0jPiMD1g5/Y5RRXcalAeantBs5HPoWUhLW10P0LH3lpZRs6niQ33C3mYPw2E40pGMC280CH0ChHZ/eU4z5RJfRZGSfvS+uB17rxJfV0rJYGX9mZl6xTY8Y8XaENW3db5uWsAm/BnopNmsrZrFwTvhK1tOLAAAAFQDCTSvMnmAWVwCEUgqesjs/35v7vwAAAIBKYv47cxWcN3i8XIWPrVs1B/1cAZcHV7X5D6m2ZHYVP/lzIl2JRWKspZT6EO/PWMaoxISsOdu0htj9yVVh/1BkQ90J7DJCtxSL6SN4arsfgrdkuC5ILkktYCERD/0LOOaIVns3f/yTHXd13lQufRziXJPw5YQhuqFMCJiXCJ9G9wAAAIBQYPiujpAEXgt5nkGBkYBp15BhhkSDIpvwhJPa3kzDixLKDr0tCdADCP+soV6ADZAmNWzlMn1Tswp51IswsyfyemrzNGubOAvHCFxCs/Z4mfeTkjbwpLf+Zpy2C7qBketOAkOFD/aP1Wm4GH62KKg/PXPqgDn3zla1ulf0fk2Vew== root@server1"
PK SSH Authentication Key file successfully uploaded to the RAC
Reconfirm that the public key has been added using the following command.
racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -v
Key=ssh-dss AAAAB3NzaC1kc3MAAACBAKnPsk+000MqdXpZdq0jPiMD1g5/Y5RRXcalAeantBs5HPoWUhLW10P0LH3lpZRs6niQ33C3mYPw2E40pGMC280CH0ChHZ/eU4z5RJfRZGSfvS+uB17rxJfV0rJYGX9mZl6xTY8Y8XaENW3db5uWsAm/BnopNmsrZrFwTvhK1tOLAAAAFQDCTSvMnmAWVwCEUgqesjs/35v7vwAAAIBKYv47cxWcN3i8XIWPrVs1B/1cAZcHV7X5D6m2ZHYVP/lzIl2JRWKspZT6EO/PWMaoxISsOdu0htj9yVVh/1BkQ90J7DJCtxSL6SN4arsfgrdkuC5ILkktYCERD/0LOOaIVns3f/yTHXd13lQufRziXJPw5YQhuqFMCJiXCJ9G9wAAAIBQYPiujpAEXgt5nkGBkYBp15BhhkSDIpvwhJPa3kzDixLKDr0tCdADCP+soV6ADZAmNWzlMn1Tswp51IswsyfyemrzNGubOAvHCFxCs/Z4mfeTkjbwpLf+Zpy2C7qBketOAkOFD/aP1Wm4GH62KKg/PXPqgDn3zla1ulf0fk2Vew== root@vaio
Privilege=0xfff
Now issue the ssh command to log in as the service user.
ssh service@dell-cmc1
Welcome to the CMC firmware version 4.30.A00.201210301401
$
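Once logged in you are at the CMC command shell, where racadm commands can be run directly; getsysinfo is a common read-only check (output depends on your chassis and firmware):

racadm getsysinfo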
This will help you manage the large number of CMCs in your data center, especially in a cluster environment.
The script is released under the GPL. You can add, edit, or delete checks to fit your needs. Please do share your edits to enhance the script.
The script is built from functions. Every check is its own function, so whether a check runs depends on whether its function is called. Simply call (or omit) each function in the Main Section of the script and it will run as per your needs. You can also add, edit, or delete functions to change the script.
The following checks are made as per my needs.
You can create your own checks and add them to the script for enhanced checking.
Here it is
#!/bin/bash
#
# Script: Node Health Script
# Created By: Sohail Riaz (sohaileo@gmail.com) www.sohailriaz.com
# Created On: 9th April 2013
# Detail: To Check Single Node Health.
# HowTo Run: chmod +x nodecheck.sh; ./nodecheck.sh
# Checks: loadavg, memory, cpu, ethernet, infiniband, infiniband ipath test, cpu test, nfs mounts, nis, mcelog
# Functions: You can load/unload any function you need to run or not to run.
# License: GPL
#

## LoadAvg Function
loadavg() {
LoadAvg=`uptime | awk -F "load average:" '{print $2}' | cut -f 1 -d,`
echo "LoadAvg = $LoadAvg" >> $Report
}

## Memory Function
memory() {
TOTAL_MEM=`grep "MemTotal:" /proc/meminfo | awk '{msum+=($2/1024)/1024} END {printf "%.0f",msum}'`
FREE_MEM=`grep "MemFree:" /proc/meminfo | awk '{mfree+=($2/1024)/1024} END {printf "%.0f",mfree}'`
TOTAL_SWAP=`grep "SwapTotal:" /proc/meminfo | awk '{ssum+=($2/1024)/1024} END {printf "%.0f",ssum}'`
FREE_SWAP=`grep "SwapFree:" /proc/meminfo | awk '{sfree+=($2/1024)/1024} END {printf "%.0f",sfree}'`
echo "TotalMemory = $TOTAL_MEM GB ($FREE_MEM GB Free)" >> $Report
echo "TotalSwap = $TOTAL_SWAP GB ($FREE_SWAP GB Free)" >> $Report
}

## CPU Function
cpu() {
PROCESSOR=`grep processor /proc/cpuinfo | wc -l`
CPU_MODEL=`grep "model name" /proc/cpuinfo | head -n 1 | awk '{print $7 " " $8 " " $9}'`
echo "Processors = $PROCESSOR" >> $Report
echo "ProcessorModel = $CPU_MODEL" >> $Report
}

## Ethernet Function (eth1 for me, you can edit for yours)
ethernet() {
ETHER_SPEED=`ethtool eth1 | grep "Speed:" | awk '{print $2}'`
echo "EthernetSpeed = $ETHER_SPEED" >> $Report
}

## IB Function
ib() {
IB_STATE=`cat /sys/class/infiniband/*/ports/1/state | awk -F ":" '{print $2}'`
IB_PHYS_STATE=`cat /sys/class/infiniband/*/ports/1/phys_state | awk -F ":" '{print $2}'`
IB_RATE=`cat /sys/class/infiniband/*/ports/1/rate`
echo "IB_STATE = $IB_STATE" >> $Report
echo "IBLink = $IB_PHYS_STATE" >> $Report
echo "IBRate = $IB_RATE" >> $Report
}

## IB Test Function
ibtest() {
IB_TEST=`ipath_pkt_test -B | awk -F ":" '{print $2}'`
echo "IPathTest = $IB_TEST" >> $Report
}

## NFS Mounts Function
nfs() {
NFS_MOUNTS=`mount -t nfs,panfs,gpfs | wc -l`
echo "NFS_MOUNTS = $NFS_MOUNTS" >> $Report
}

## NIS Function
nis() {
NIS_TEST=`ypwhich`
echo "NIS_SERVER = $NIS_TEST" >> $Report
}

## MCELog Test Function
mcelog() {
MCELog=`if [ -s /var/log/mcelog ]; then echo "Check MCELog"; else echo "No MCELog"; fi`
echo "MCE Log = $MCELog" >> $Report
}

### MAIN SCRIPT

## Get Node Name
Hostname=`hostname -s`
touch ./$Hostname-checks.txt
Report=./$Hostname-checks.txt
echo " " > $Report
echo "Node = ${Hostname}" >> $Report
echo "----------------" >> $Report

## Get Cluster Name
Cluster=`echo $Hostname | cut -c1-4`

## Call Function
loadavg
memory
cpu
ethernet
ib
ibtest
nfs
nis
mcelog

## Generate Report
echo " " >> $Report
cat $Report
To run the script, execute the following commands.
chmod +x nodecheck.sh
./nodecheck.sh
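The script checks a single node; on a cluster you would normally fan it out with a parallel shell. A minimal sketch using pdsh, where the host range and script path are placeholders for your environment:

pdsh -w node[001-064] '/path/to/nodecheck.sh'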
You can add, edit, or delete any function inside the script to meet your needs, but do share your edits so we can improve the script with more checks.
Enable the EPEL repository using the following command.
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
yum -y install ntfs-3g
The above command installs the ntfs-3g package, which brings NTFS support to your CentOS 6.3 installation. Just plug in your NTFS USB drives, or use the mount command to mount NTFS hard drive partitions.
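For example, an NTFS partition can be mounted manually like this; the device name /dev/sdb1 and the mount point are placeholders, so use your own:

mkdir -p /mnt/ntfs
mount -t ntfs-3g /dev/sdb1 /mnt/ntfs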
After installing Fedora 18 from the Live image to the hard drive, first update your kernel and then install the Development Tools group to provide all the necessary build tools for your system.
yum -y update kernel
yum -y groupinstall "Development Tools"
We will need RPM Fusion for the driver installation, so enable the RPM Fusion repository for Fedora 18 using the following command.
rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-18.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-18.noarch.rpm
2) Installation.
Install ATI driver using rpmfusion repository.
yum -y install akmod-catalyst
The above command installs akmod-catalyst with all the dependencies it requires to run. After this, run the following command to create a new initial ram filesystem with the updated ATI beta driver.
new-kernel-pkg --kernel-args=nomodeset --mkinitrd --dracut --update $(rpm -q --queryformat="%{version}-%{release}.%{arch}\n" kernel | tail -n 1)
Reconfigure xorg using aticonfig to use ATI drivers.
aticonfig --initial -f
You are all set; just reboot and enjoy your new Fedora 18 running the ATI graphics driver. Don't be surprised to see "AMD Testing use only" at the bottom right of your screen: it is a beta driver, and ATI will soon update its drivers for Fedora 18.
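To verify the driver is actually in use after the reboot, fglrxinfo, which ships with the Catalyst driver, reports the OpenGL vendor and renderer (output varies with your card):

fglrxinfo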