Sohail Riaz – Linux and Open Source Blog (http://www.sohailriaz.com)

Recover InnoDB tables using .frm and .ibd files
http://www.sohailriaz.com/recover-innodb-tables-using-frm-and-ibd-files/
Tue, 12 Apr 2016

In this article I will show how to recover or restore InnoDB tables using .frm and .ibd files. The MySQL server was running with innodb_file_per_table enabled, which means it stores each InnoDB table's data in tbl_name.ibd and the table definition in tbl_name.frm. If you are using MySQL 5.6.x or higher, innodb_file_per_table is enabled by default. This method works even if you have lost the original ibdata1 file.
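
If the source server is still reachable, you can confirm that the option is enabled before relying on the per-table files (this is a standard MySQL statement; the Value column in the result should read ON):

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';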

1- Install MySQL Server and Utilities

We will require MySQL Server 5.6.x or higher and mysqlfrm from the mysql-utilities package, which can extract the table definition and generate the CREATE TABLE statement from a .frm file.

First enable the Oracle MySQL yum repository for CentOS 7/RHEL 7. Note that the RPM below was the latest at the time of writing; please check for the latest release before downloading.

rpm -Uvh http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm

Install the required packages

yum -y install mysql-server mysql-utilities
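
A quick sanity check that both the server client and the mysqlfrm utility are in place (both tools accept --version; the exact version strings will vary):

mysql --version
mysqlfrm --version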

2- Recover

My backup database was located in /data-backup/sohail_wp/. Now I will recover the CREATE TABLE statements from the .frm files using the mysqlfrm utility.

a) Recover the CREATE TABLE query from one table.

# mysqlfrm --server=root:root123@localhost --port=3307 --user=root /data-backup/sohail_wp/wp_users.frm

# Source on localhost: ... connected.
# Spawning server with --user=root.
# Starting the spawned server on port 3307 ... done.
# Reading .frm files
#
# Reading the wp_users.frm file.
#
# CREATE statement for wp_users.frm:
#

 CREATE TABLE `wp_users` (
 `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
 `user_login` varchar(60) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_pass` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_nicename` varchar(50) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_email` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_url` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_registered` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
 `user_activation_key` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 `user_status` int(11) NOT NULL DEFAULT '0',
 `display_name` varchar(250) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
 PRIMARY KEY (`ID`),
 KEY `user_login_key` (`user_login`),
 KEY `user_nicename` (`user_nicename`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci

#...done.

b) Recover the CREATE TABLE queries from all tables.

# mysqlfrm --server=root:root123@localhost --port=3307 --user=root /data-backup/sohail_wp/ > /tmp/create-table.sql

c) Run the following command to replace empty lines with semicolons (;) in the create-table.sql file. This is a quick-and-dirty way to make the CREATE TABLE statements executable by importing the file at the mysql prompt; otherwise you would have to add a semicolon after each CREATE TABLE statement one by one.

sed 's/^$/;/' /tmp/create-table.sql > /tmp/create-table-complete.sql

d) Create a database with the same name as the original; in my case it is sohail_wp.

mysql> create database sohail_wp;
Query OK, 1 row affected (0.00 sec)

e) Re-create all tables inside the sohail_wp database from the create-table-complete.sql file.

mysql> use sohail_wp;
mysql> source /tmp/create-table-complete.sql

The above command will create all the tables inside the default MySQL datadir, i.e. /var/lib/mysql/sohail_wp, as tbl_name.frm and tbl_name.ibd files.

f) Since these are newly created empty tables, we now discard their tablespaces so we can import our data from the backed-up tbl_name.ibd files.

mysql> ALTER TABLE table_name DISCARD TABLESPACE;

You need to run the above ALTER TABLE command for each table. It can be scripted from the command line.

#mysql -u root -p -e"show tables from sohail_wp" | grep -v Tables_in_sohail_wp | while read a; do mysql -u root -p -e "ALTER TABLE sohail_wp.$a DISCARD TABLESPACE"; done

The above command removes the *.ibd files from the /var/lib/mysql/sohail_wp/ directory. They contained no data, since the tables were newly created; we will replace them with the files from the backup.

g) Copy only the *.ibd files from the old backup.

# cp /data-backup/sohail_wp/*.ibd /var/lib/mysql/sohail_wp/

Here we copied all the *.ibd files from the backup, which contain the actual table data from our old database.
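
Depending on how the files were copied, the MySQL server may not own them. Before importing, make sure ownership and permissions allow the mysqld process to read the files (this assumes the default mysql service user and datadir):

chown mysql:mysql /var/lib/mysql/sohail_wp/*.ibd
chmod 660 /var/lib/mysql/sohail_wp/*.ibd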

h) Now it is time to import the tablespaces from the copied *.ibd files.

mysql> ALTER TABLE table_name IMPORT TABLESPACE;

For all tables you can use the script below, modified to your requirements.

#mysql -u root -p -e"show tables from sohail_wp" | grep -v Tables_in_sohail_wp | while read a; do mysql -u root -p -e "ALTER TABLE sohail_wp.$a IMPORT TABLESPACE"; done

This helped me restore/recover my InnoDB database. It might help others too; please use the comments for questions.

Auto-Reply for Exim Mail Server
http://www.sohailriaz.com/auto-reply-for-exim-mail-server/
Tue, 29 Mar 2016

The Exim mail server is missing an auto-reply (vacation autoresponder) feature in its default configuration. It can easily be added by providing the right Router and Transport configuration in the Exim configuration file.

Our setup consists of Exim + Dovecot running IMAP on a cPanel/WHM dedicated server. Dovecot IMAP uses the maildir format, and the default location for storing mail is /home/vmail/$domain/$user.

1- Exim Router Configuration.

The following lines should be inserted in the Router configuration section of /etc/exim/exim.conf.

######################################################################
#                  ROUTERS CONFIGURATION                             #
#
#..
# This router delivers an auto-reply "vacation" message if a file called '.vacation.msg'
# exists in the home directory.

uservacation:
  driver = redirect
  domains = +local_domains
  allow_filter
  user = vmail
  group = vmail
  file = /home/vmail/${domain}/${local_part}/.vacation.msg
  require_files = /home/vmail/${domain}/${local_part}/.vacation.msg
  # do not reply to errors or lists
  condition = ${if or { \
  {match {$h_precedence:} {(?i)junk|bulk|list}} \
  {eq {$sender_address} {}} \
  } {no} {yes}}
  # do not reply to errors or bounces or lists
  senders = ! ^.*-request@.*:\
  ! ^bounce-.*@.*:\
  ! ^.*-bounce@.*:\
  ! ^owner-.*@.*:\
  ! ^postmaster@.*:\
  ! ^webmaster@.*:\
  ! ^listmaster@.*:\
  ! ^mailer-daemon@.*:\
  ! ^root@.*
  no_expn
  reply_transport = uservacation_transport
  unseen
  no_verify

The above configuration instructs Exim to generate an auto-reply for a message if it finds a .vacation.msg file in the user's maildir home. You can edit the maildir location to match your own users' maildirs.

2- Exim Transport Configuration

The following lines should be inserted in the Transport configuration section of /etc/exim/exim.conf.

######################################################################
#                TRANSPORTS CONFIGURATION                            #
#...
uservacation_transport:
  driver = autoreply
  user = vmail
  group = vmail
  file = /home/vmail/${domain}/${local_part}/.vacation.msg
  file_expand
  once = /home/vmail/${domain}/${local_part}/.vacation.db
  # to use a flat file instead of a db specify once_file_size
  #once_file_size = 2K
  once_repeat = 14d
  from = ${local_part}@${domain}
  to = ${sender_address}
  subject = "Re: $h_subject"

The .vacation.db file records all the addresses to which an auto-reply has already been sent. You can edit the maildir location to match your own users' maildirs.

3- Maildir Vacation files

The following files need to exist inside the user's maildir for a reply to be sent back to the sender. In my setup that is /home/vmail/$domain/$user/.

.vacation.msg

# Exim filter
if ($h_subject: does not contain "SPAM?" and personal) then
mail
##### This is the only thing that a user can set when they #####
##### decide to enable vacation messaging: the .vacation.msg.txt file. #####
expand file /home/vmail/${domain}/${local_part}/.vacation.msg.txt
log /home/vmail/${domain}/${local_part}/.vacation.log
to $reply_address
from $local_part\@$domain
subject "Unmonitored Mailbox [Re: $h_subject:]"
endif

.vacation.msg.txt

Hi,

This is test reply.

Regards,

Placing these two files in any user's maildir enables auto-reply for that user.
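
You can test the filter before relying on it by running it against a sample message with Exim's filter-testing mode (-bf); the file paths and addresses below are only examples:

exim -bf /home/vmail/example.com/testuser/.vacation.msg -f someone@example.org < /tmp/test-message.txt

The output shows which actions the filter would take, without actually sending anything.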

Please use the comments for any questions.

Update Cobbler Signature File
http://www.sohailriaz.com/update-cobbler-signature-file/
Sun, 25 Oct 2015

One of our clusters running the Cobbler provisioning software needed several nodes installed with RHEL 7.1. When I tried to import an RHEL 7.1 ISO image into Cobbler, it stopped with an error that RHEL 7.1 is not available in the signatures. Signatures define which operating systems Cobbler can provision. The following command lists the signatures available in your Cobbler installation.

sudo cobbler signature report
Currently loaded signatures:
 debian:
    squeeze
 freebsd:
    8.2
    8.3
    9.0
 generic:
    (none)
 redhat:
    fedora16
    fedora17
    fedora18
    rhel4
    rhel5
    rhel6
 suse:
    opensuse11.2
    opensuse11.3
    opensuse11.4
    opensuse12.1
    opensuse12.2
 ubuntu:
    oneiric
    precise
    quantal
 unix:
    (none)
 vmware:
    esx4
    esxi4
    esxi5
 windows:
    (none)

9 breeds with 21 total signatures loaded

sudo cobbler signature report --name=redhat
Currently loaded signatures:
 redhat:
    fedora16
    fedora17
    fedora18
    rhel4
    rhel5
    rhel6

Breed 'redhat' has 3 total signatures

If your cluster is connected to the internet, you can update the signature file straight away using the following command; it will fetch the latest version.

sudo cobbler signature update
 task started: 2015-10-21_222926_sigupdate
 task started (id=Updating Signatures, time=Wed Oct 21 22:29:26 2015)
 Successfully got file from http://cobbler.github.com/signatures/latest.json
 *** TASK COMPLETE ***

But if, like mine, your cluster is not connected to the internet directly, you will need to download the latest signature file from the following location and upload it to the proper path. Follow these steps:

1. Download Signature File

Download the signature file to your workstation.

wget http://cobbler.github.com/signatures/latest.json
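
Optionally, verify that the download is complete, valid JSON before copying it over the existing signatures (assuming Python is available on your workstation):

python -m json.tool latest.json > /dev/null && echo "latest.json looks valid"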

2. Upload to cluster

Upload the signature file to the Cobbler management server (replace cobbler-server below with your Cobbler host), overwriting /var/lib/cobbler/distro_signatures.json.

scp latest.json root@cobbler-server:/var/lib/cobbler/distro_signatures.json

3. Restart Service

Restart the cobbler service for the change to take effect.

sudo /etc/init.d/cobblerd restart

sudo cobbler signature report --name=redhat
Currently loaded signatures:
 redhat:
    cloudlinux6
    fedora16
    fedora17
    fedora18
    fedora19
    fedora20
    fedora21
    fedora22
    fedora23
    rhel4
    rhel5
    rhel6
    rhel7 

Breed 'redhat' has 3 total signatures

The RHEL 7 signature is now available and you can proceed with your installation.

HowTo Edit Initrd.img in RHEL/CentOS 6.x
http://www.sohailriaz.com/howto-edit-initrd-img-in-rhelcentos-6-x/
Thu, 16 Apr 2015

In this howto I will describe how to edit the initrd.img from the ISO to add new drivers in RHEL 6.x. We faced an issue adding the 40G t4_tom Chelsio card driver to initrd.img in order to start a kickstart installation over the network. The initrd.img that ships with the RHEL 6.x ISO does not contain this driver, so we were unable to start the kickstart installation from the network. Chelsio provides a DriverDisk, but that does not help here because it has to be supplied on a DVD or USB drive, which is impractical when you are installing a large number of nodes and are away from your datacenter. This method works for adding or editing any file inside initrd.img. For my work I used RHEL 6.5. Before starting, you should have one server running RHEL 6.5 with the updated driver already installed.

1. Get Initrd.img from ISO

mkdir /mnt/{image,work}
mount -o loop RHEL6.5-server.x86_64.iso /mnt/image/
cp /mnt/image/isolinux/initrd.img /mnt/work

 2. Extract Initrd.img

Before extracting, rename initrd.img to initrd.img.xz because it is compressed with xz; decompressing will strip the .xz extension and leave initrd.img again.

cd /mnt/work
mkdir initrd-new
mv initrd.img initrd.img.xz
xz --format=lzma initrd.img.xz --decompress
cd initrd-new
cpio -ivdum < ../initrd.img

 3. Copy Required Driver

I will use the Chelsio driver already installed by the Chelsio installation script. We use the same directory tree:

cp -r /lib/modules/2.6.32-431.el6.x86_64/updates/drivers/ /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/updates/

 4. Update driver information from modules.* to initrd.img modules.* files.

I use the Chelsio driver information here; it may be different for you. You need to confirm which hardware driver you are inserting into initrd.img and pull its information from the modules.* files.

cd /lib/modules/2.6.32.431.el6.x86_64/
egrep 'cxgb4|toecore|t4_tom' modules.symbols >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.symbols
egrep 'cxgb4|toecore|t4_tom' modules.alias >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.alias
egrep 'cxgb4|toecore|t4_tom' modules.dep >> /mnt/work/initrd-new/modules/2.6.32-431.el6.x86_64/modules.dep

5. Generate modules.*.bin files inside initrd.img

This recreates all modules.*.bin files from the updated modules.* files. It is required because without it the initrd.img will be unable to load the newly inserted driver.

chroot /mnt/work/initrd-new
depmod -a -v
exit

 6. Generate updated Initrd.img

cd /mnt/work/initrd-new
find . -print |cpio -o -H newc | xz --format=lzma > ../initrd.img

Your initrd.img is ready; you can use this new initrd.img in place of the stock one to start a kickstart installation or network boot.
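
For example, in a typical PXE/kickstart setup you would copy the rebuilt image over the one your boot loader points at (the tftp path below is only an illustration; adjust it to your own PXE layout and keep a copy of the original):

cp /var/lib/tftpboot/images/rhel6.5/initrd.img /var/lib/tftpboot/images/rhel6.5/initrd.img.orig
cp /mnt/work/initrd.img /var/lib/tftpboot/images/rhel6.5/initrd.img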

If you have any questions, please use the comments.

Bridged Networking with KVM in CentOS/RHEL 6.x
http://www.sohailriaz.com/bridged-networking-with-kvm-in-centosrhel-6-x/
Fri, 13 Jun 2014

In this howto we will set up bridged networking to dedicate a physical network device to KVM virtual machines/servers. It is the best way to put virtual machines on the local network and provides better performance.

1. Create a Bridge

Creating the bridge device used to be an entirely manual process, but KVM now provides a single command that does it all. To create a bridge (br0) on top of eth0, execute the following command:

virsh iface-bridge eth0 br0

This creates the bridge (br0) device file in /etc/sysconfig/network-scripts/, copies the settings from eth0 onto the bridge, and reconfigures eth0 to use br0.
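
The generated configuration files typically end up looking something like the following; the exact values (IP address, netmask and so on) depend on what was in your original eth0 configuration:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0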

2. Stop NetworkManager

You will also need to make the following change in every ifcfg-* file in /etc/sysconfig/network-scripts/ to take the interfaces out of NetworkManager's control. Edit all the files and make sure they contain the following value:

NM_CONTROLLED=no

After this, stop NetworkManager and start the network service so the bridge comes up at startup.

chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start

You can now create or edit your virtual machines to use the bridged network device (br0).
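
For example, a new guest created with virt-install can be attached directly to the bridge; the VM name, disk path/size and ISO below are placeholders:

virt-install -n testvm -r 1024 --vcpus=1 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=10 \
  --network bridge=br0 \
  --cdrom /data/iso/CentOS-6.5-x86_64-minimal.iso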

GlusterFS HowTo on CentOS 6.x
http://www.sohailriaz.com/glusterfs-howto-on-centos-6-x/
Sat, 25 May 2013

In this howto we will describe in detail how to install and configure GlusterFS 3.3.1 (the latest stable release) on CentOS 6.3.

GlusterFS is an open source, powerful clustered file system capable of scaling to several petabytes of storage, all available to users under a single mount point. It uses existing disk filesystems such as ext3, ext4 and xfs to store data, and clients can access the storage as if it were a local filesystem. A GlusterFS cluster aggregates storage bricks over Infiniband RDMA and/or TCP/IP interconnects into a single global namespace.

We will use the following terms later in this howto, so make sure you understand them before proceeding.

brick
The brick is the storage filesystem that has been assigned to a volume, e.g. /data on a server.
client
The machine which mounts the volume (this may also be a server).
server
The machine (physical or virtual or bare metal) which hosts the actual filesystem in which data will be stored.
volume
A volume is a logical collection of bricks where each brick is an export directory on a server. A volume can be of several types, and you can create any of them in the storage pool for a single volume.
Distributed – Distributed volumes distribute files throughout the bricks in the volume. Use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers.
Replicated – Replicated volumes replicate files across bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical.
Striped – Striped volumes stripe data across bricks in the volume. For best results, use striped volumes only in high-concurrency environments accessing very large files.

1. Setup

Hardware

I will use three servers and one client for my GlusterFS installation/configuration. These can be physical or virtual machines; I will be using my virtual environment, with IPs/hostnames as follows.

host1.example.com 192.168.0.101
host2.example.com 192.168.0.102
host3.example.com 192.168.0.103
client1.example.com 192.168.0.1

Two partitions are required on each server: the first is used for the OS installation and the second for our storage.

Software

For the OS I will be using CentOS 6.3 with GlusterFS 3.3.1. The EPEL repository carries 3.2.7, but we will go with the latest version, 3.3.1, which is available through GlusterFS's own repository.

2. Installation

First we will add the GlusterFS repo to our yum repositories. To do this, execute the following command.

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

 2.1 Installation on Servers:

On the servers (host1, host2, host3), execute the following command to install the GlusterFS server-side packages.

yum -y install glusterfs glusterfs-fuse glusterfs-server

Start the glusterd service on all servers and enable it to start automatically at boot.

/etc/init.d/glusterd start
chkconfig glusterd on

 2.2 Installation on Client:

On the client, execute the following command to install the GlusterFS client-side packages.

yum -y install glusterfs glusterfs-fuse

We will use these later to mount the GlusterFS volumes on the client.

 3. Creating Trusted Storage Pool.

The trusted storage pool is the set of servers that run as Gluster servers and will provide bricks for volumes. You need to probe all the other servers from host1 (do not probe host1 or localhost itself).

Note: turn off your firewall using the iptables -F command (or open just the Gluster ports, as shown below).
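
The port numbers below are the usual GlusterFS 3.3 defaults (24007 for glusterd, 24009 and up for each brick, 38465–38467 for Gluster's built-in NFS); verify them against your installation before relying on them:

iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT
iptables -I INPUT -p tcp --dport 38465:38467 -j ACCEPT
service iptables save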

We will now join all three servers into a trusted storage pool; the probing is done from host1.

gluster peer probe host2
Probe successful

gluster peer probe host3
Probe successful

Confirm your server status.

gluster peer status
Number of Peers: 2

Hostname: host2
Uuid: b65874ab-4d06-4a0d-bd84-055ff6484efd
State: Peer in Cluster (Connected)

Hostname: host3
Uuid: 182e3214-44a2-46b3-ae79-769af40ec160
State: Peer in Cluster (Connected)

4. Creating Glusterfs Server Volume

Now it is time to create the GlusterFS server volumes. A volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool.

GlusterFS offers several volume types; I will demonstrate the three defined above, which will give you enough knowledge to create the remaining types yourself.

4.1 Distributed

Use distributed volumes where you need to scale storage, because in a distributed volume files are spread randomly across the bricks in the volume.

gluster volume create dist-volume host1:/dist1 host2:/dist2 host3:/dist3
Creation of volume dist-volume has been successful. Please start the volume to access data.

Start the dist-volume

gluster volume start dist-volume
Starting volume dist-volume has been successful

Check status of volume

gluster volume info

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host2:/dist2
Brick3: host3:/dist3

 4.1.1 Accessing Distributed volume and testing.

Now, on client1.example.com, we will access and test the distributed volume. To mount a Gluster volume and access its data, we first mount it manually and then add it to /etc/fstab so it mounts automatically whenever the server restarts.

Use the mount command to access the Gluster volume.

mkdir /mnt/distributed
mount.glusterfs host1.example.com:/dist-volume /mnt/distributed/

Check it using the mount command.

mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.example.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.

host1.example.com:/dist-volume /mnt/distributed glusterfs defaults,_netdev 0 0

Save the file.
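
You can verify the fstab entry without rebooting by unmounting the volume and mounting it again by its mount point only, which forces the entry to be read from /etc/fstab:

umount /mnt/distributed
mount /mnt/distributed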

Now, to test, create the following files in the mounted directory.

touch /mnt/distributed/file1
touch /mnt/distributed/file2
touch /mnt/distributed/file3
touch /mnt/distributed/file4
touch /mnt/distributed/file5
touch /mnt/distributed/file6
touch /mnt/distributed/file7
touch /mnt/distributed/file8

Check on the servers for distributed functionality

[root@host1 ~]# ls -l /dist1
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file5
-rw-r--r-- 2 root root 0 May 9 10:51 file6
-rw-r--r-- 2 root root 0 May 9 10:51 file8

[root@host2 ~]# ls -l /dist2
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file3
-rw-r--r-- 2 root root 0 May 9 10:50 file4
-rw-r--r-- 2 root root 0 May 9 10:51 file7

[root@host3 ~]# ls -l /dist3
total 0
-rw-r--r-- 2 root root 0 May 9 10:50 file1
-rw-r--r-- 2 root root 0 May 9 10:50 file2

All of the files created in the mounted volume have been distributed across the servers.

4.2 Replicated

Use replicated volumes where high availability and high reliability are critical, because replicated volumes keep identical copies of files across multiple bricks in the volume.

gluster volume create rep-volume replica 3 host1:/rep1 host2:/rep2 host3:/rep3
Creation of volume rep-volume has been successful. Please start the volume to access data.

Here replica 3 is the number of copies to keep across the servers; in this case we want the same copy on all three servers.

Start the rep-volume

gluster volume start rep-volume
Starting volume rep-volume has been successful

Check status of volume

gluster volume info rep-volume

Volume Name: rep-volume
Type: Replicate
Volume ID: 0dcf51bc-376a-4bd2-8759-3d47bba49c3d
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1:/rep1
Brick2: host2:/rep2
Brick3: host3:/rep3

4.2.1 Accessing Replicated Volume and tests.

Access it the same way as the distributed volume, using the mount command. To mount a Gluster replicated volume and access its data, we first mount it manually and then add it to /etc/fstab so it mounts automatically whenever the server restarts.

Use the mount command to access the Gluster volume.

mkdir /mnt/replicated
mount.glusterfs host1.example.com:/rep-volume /mnt/replicated/

Check it using the mount command.

mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.example.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.example.com:/rep-volume on /mnt/replicated type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.

host1.example.com:/rep-volume /mnt/replicated glusterfs defaults,_netdev 0 0

Now, to test, create the following files in the mounted directory.

touch /mnt/replicated/file1
touch /mnt/replicated/file2
touch /mnt/replicated/file3
touch /mnt/replicated/file4
touch /mnt/replicated/file5
touch /mnt/replicated/file6
touch /mnt/replicated/file7
touch /mnt/replicated/file8

Check on the servers for replicated functionality

[root@host1 ~]# ls -l /rep1
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8

[root@host2 ~]# ls -l /rep2
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8

[root@host3 ~]# ls -l /rep3
total 0
-rw-r--r-- 2 root root 0 May 9 11:16 file1
-rw-r--r-- 2 root root 0 May 9 11:16 file2
-rw-r--r-- 2 root root 0 May 9 11:16 file3
-rw-r--r-- 2 root root 0 May 9 11:16 file4
-rw-r--r-- 2 root root 0 May 9 11:16 file5
-rw-r--r-- 2 root root 0 May 9 11:16 file6
-rw-r--r-- 2 root root 0 May 9 11:16 file7
-rw-r--r-- 2 root root 0 May 9 11:16 file8

All of the files created in the mounted volume have been replicated to all the servers.

4.3 Striped

Use striped volumes only in high-concurrency environments accessing very large files, because striped volumes stripe data across the bricks in the volume.

gluster volume create strip-volume stripe 3 host1:/strip1 host2:/strip2 host3:/strip3
Creation of volume strip-volume has been successful. Please start the volume to access data.

Start the strip-volume

gluster volume start strip-volume
Starting volume strip-volume has been successful

Check status of volume

gluster volume info strip-volume

Volume Name: strip-volume
Type: Stripe
Volume ID: 2b21ad4b-d464-408b-82e8-df762ef89bcf
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1:/strip1
Brick2: host2:/strip2
Brick3: host3:/strip3

4.3.1 Accessing the Striped Volume and tests.

Access the striped volume the same way as the distributed and replicated volumes, using the mount command. To mount a Gluster striped volume and access its data, we first mount it manually and then add it to /etc/fstab so it mounts automatically whenever the server restarts.

Use the mount command to access the Gluster volume.

mkdir /mnt/stripped
mount.glusterfs host1.example.com:/strip-volume /mnt/stripped/

Check it using the mount command.

mount
/dev/sda on / type ext4 (rw,errors=remount-ro)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
host1.example.com:/dist-volume on /mnt/distributed type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.example.com:/rep-volume on /mnt/replicated type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
host1.example.com:/strip-volume on /mnt/stripped type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.

host1.example.com:/strip-volume /mnt/stripped glusterfs defaults,_netdev 0 0

Save the file.

Now, to test, create the following large file in the mounted directory on client1.

dd if=/dev/zero of=/mnt/stripped/file.img bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 61.5011 s, 17.0 MB/s

ls -l /mnt/stripped/
total 1024120
-rw-r--r-- 1 root root 1048576000 May 9 11:31 file.img

Check on the servers for striped functionality

[root@host1 ~]# ls -l /strip1/
total 341416
-rw-r--r-- 2 root root 1048444928 May 9 11:31 file.img

[root@host2 ~]# ls -l /strip2/
total 341416
-rw-r--r-- 2 root root 1048576000 May 9 11:31 file.img

[root@host3 ~]# ls -l /strip3/
total 341288
-rw-r--r-- 2 root root 1048313856 May 9 11:31 file.img

The large file has been striped across the volume successfully.

5. Managing Gluster Volumes.

Now we will look at some of the common operations and maintenance tasks you might perform on Gluster volumes.

5.1 Expanding Volumes.

As needed, we can add bricks to volumes that are already online. Here, as an example, we will add a new brick to our distributed volume. To do this we need the following steps:

First probe a new server that will offer the new brick to our volume. This has to be done on host1.

gluster peer probe host4
Probe successful

Now add the new brick from the newly probed host4.

gluster volume add-brick dist-volume host4:/dist4
Add Brick successful

Check the volume information using the following command.

gluster volume info

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host2:/dist2
Brick3: host3:/dist3
Brick4: host4:/dist4

5.2 Shrinking Volume

You can shrink volumes even while the GlusterFS volume is online and available. If one of your bricks becomes unavailable in the volume, for example due to a hardware failure or an unreachable network, and you need to remove it, first start the removal process:

gluster volume remove-brick dist-volume host2:/dist2 start
Remove Brick start successful

Check the status; it should show completed.

gluster volume remove-brick dist-volume host2:/dist2 status

Node      Rebalanced-files size        scanned     failures    status
--------- -----------      ----------- ----------- ----------- ------------
localhost 0                0Bytes      0           0           not started
host3     0                0Bytes      0           0           not started
host2     0                0Bytes      4           0           completed

Commit the brick-removal operation.

gluster volume remove-brick dist-volume host2:/dist2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful

Check the volume information for confirmation.

gluster volume info dist-volume

Volume Name: dist-volume
Type: Distribute
Volume ID: b842b03f-f8db-47e9-920a-a04c2fb24458
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host1:/dist1
Brick2: host3:/dist3

5.3 Rebalancing Volume

Rebalancing needs to be done after expanding or shrinking a volume; it redistributes the data among the servers. To do this, issue the following command:

gluster volume rebalance dist-volume start
Starting rebalance on volume dist-volume has been successful
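
The rebalance runs in the background; you can check its progress with the status subcommand until it reports completed:

gluster volume rebalance dist-volume status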

5.4 Stopping the Volume.

To stop a volume

gluster volume stop dist-volume
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume dist-volume has been successful

To delete a volume

gluster volume delete dist-volume
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume dist-volume has been successful

Remember to unmount the mounted directories on your clients.

How To Enable SSH Key Based Authentication on Dell CMC IDRAC
http://www.sohailriaz.com/how-to-enable-ssh-key-based-authentication-on-dell-cmc-idrac/
Sat, 25 May 2013

In this howto I will describe how you can enable SSH key based authentication on Dell blade CMC/iDRAC. This helps when managing a large number of Dell blade CMCs in a datacenter, especially in a cluster environment. You might be using root/calvin or your own user/password to log in to the CMC for management work over the web or SSH, but to access the Dell CMC with SSH key based (passwordless) authentication you must use the service account. The Dell CMC has a limitation: only the svcacct account (user = service) can be used for SSH key based authentication.

1) Setup SSH Key on Linux

First you need to create an ssh key for your user

ssh-keygen -t dsa -b 1024

The -t option can be either dsa or rsa. The passphrase is optional; whether you set one is up to you. I will use no passphrase.

Confirm you have generated the public key.

ls ~/.ssh/
id_dsa   id_dsa.pub

where id_dsa is your private key and id_dsa.pub is your public key. You will need the public key to upload to the Dell CMC.

2) Managing SSH Public Key on CMC using RACADM

First make sure you have installed the latest version of the racadm package on your machine. The user for SSH key based authentication on the CMC must be svcacct (service); other users will not work.

Before uploading your SSH key to the Dell CMC, check that no other keys are already defined; you can have up to 6 different keys at a time for svcacct.

To view all keys on your CMC

racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k all -v
Key 1=UNDEFINED
Key 2=UNDEFINED
Key 3=UNDEFINED
Key 4=UNDEFINED
Key 5=UNDEFINED
Key 6=UNDEFINED
Privilege 1=0x0
Privilege 2=0x0
Privilege 3=0x0
Privilege 4=0x0
Privilege 5=0x0
Privilege 6=0x0

To view one key at a time, replace all with a number (1-6) for the -k switch:

racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -v
Key=UNDEFINED
Privilege=0x0

To add a public key, use the following command:

racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -p 0xfff -f ~/.ssh/id_dsa.pub
PK SSH Authentication Key file successfully uploaded to the RAC

where -p sets the privilege (here we grant full privileges) and -f specifies the SSH public key file.

You can also add the public key by passing the key text instead of a file:

racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -p 0xfff -t "ssh-dss AAAAB3NzaC1kc3MAAACBAKnPsk+000MqdXpZdq0jPiMD1g5/Y5RRXcalAeantBs5HPoWUhLW10P0LH3lpZRs6niQ33C3mYPw2E40pGMC280CH0ChHZ/eU4z5RJfRZGSfvS+uB17rxJfV0rJYGX9mZl6xTY8Y8XaENW3db5uWsAm/BnopNmsrZrFwTvhK1tOLAAAAFQDCTSvMnmAWVwCEUgqesjs/35v7vwAAAIBKYv47cxWcN3i8XIWPrVs1B/1cAZcHV7X5D6m2ZHYVP/lzIl2JRWKspZT6EO/PWMaoxISsOdu0htj9yVVh/1BkQ90J7DJCtxSL6SN4arsfgrdkuC5ILkktYCERD/0LOOaIVns3f/yTHXd13lQufRziXJPw5YQhuqFMCJiXCJ9G9wAAAIBQYPiujpAEXgt5nkGBkYBp15BhhkSDIpvwhJPa3kzDixLKDr0tCdADCP+soV6ADZAmNWzlMn1Tswp51IswsyfyemrzNGubOAvHCFxCs/Z4mfeTkjbwpLf+Zpy2C7qBketOAkOFD/aP1Wm4GH62KKg/PXPqgDn3zla1ulf0fk2Vew== root@server1"
PK SSH Authentication Key file successfully uploaded to the RAC

Reconfirm the public key has been added using the following command:

racadm -r dell-cmc1 -u root -p calvin sshpkauth -i svcacct -k 1 -v
Key=ssh-dss AAAAB3NzaC1kc3MAAACBAKnPsk+000MqdXpZdq0jPiMD1g5/Y5RRXcalAeantBs5HPoWUhLW10P0LH3lpZRs6niQ33C3mYPw2E40pGMC280CH0ChHZ/eU4z5RJfRZGSfvS+uB17rxJfV0rJYGX9mZl6xTY8Y8XaENW3db5uWsAm/BnopNmsrZrFwTvhK1tOLAAAAFQDCTSvMnmAWVwCEUgqesjs/35v7vwAAAIBKYv47cxWcN3i8XIWPrVs1B/1cAZcHV7X5D6m2ZHYVP/lzIl2JRWKspZT6EO/PWMaoxISsOdu0htj9yVVh/1BkQ90J7DJCtxSL6SN4arsfgrdkuC5ILkktYCERD/0LOOaIVns3f/yTHXd13lQufRziXJPw5YQhuqFMCJiXCJ9G9wAAAIBQYPiujpAEXgt5nkGBkYBp15BhhkSDIpvwhJPa3kzDixLKDr0tCdADCP+soV6ADZAmNWzlMn1Tswp51IswsyfyemrzNGubOAvHCFxCs/Z4mfeTkjbwpLf+Zpy2C7qBketOAkOFD/aP1Wm4GH62KKg/PXPqgDn3zla1ulf0fk2Vew== root@vaio
Privilege=0xfff

 3) Access CMC

Now issue the ssh command to access the CMC as the user service:

ssh service@dell-cmc1

Welcome to the CMC firmware version 4.30.A00.201210301401

$

This will help you manage the large number of CMCs in your data center, especially in a cluster environment.
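
For example, once the key is uploaded to each chassis you can loop over them from a management node; the CMC hostnames and the racadm subcommand below are only examples (use whichever subcommands your firmware supports):

for cmc in dell-cmc1 dell-cmc2 dell-cmc3; do
    ssh service@$cmc "racadm getsysinfo"
done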

Node Health Check Script in Cluster
http://www.sohailriaz.com/node-health-check-script-in-cluster/
Fri, 26 Apr 2013

Node health (hardware/software) is critical in a cluster environment. Whether you run a fleet of web servers behind a load balancer or compute nodes in a High Performance Computing setup, every node used for any type of work is critical and has to be healthy enough to run the processes we need. This script was created to check node health before a machine is put to use as a compute node or behind a load balancer. If all checks pass, the node is marked good and can be put into production.

1. License

GPL. You can add to, edit or delete from the script to fit your needs. Please share your edits to enhance the script.

2. Functions

The script is built from functions. Every check is implemented as a function, so whether a check runs or not depends on whether you call its function. Simply call, or do not call, each function in the Main Section of the script and it will run as you need. You can also add, edit or delete functions to change the script.

3. Checks

The following checks are included, as per my needs.

  • Loadavg
  • Memory / Swap
  • CPU
  • Ethernet Speed
  • Infiniband
  • NFS Mounts
  • NIS Server
  • MCELog (Machine Check Event, hardware error reporting)

You can create your own checks and add them to the script for more thorough checking.
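
For example, a minimal extra check for root filesystem usage, written in the same style as the existing functions, could look like this (the function still has to be called in the Main Section, next to the other checks):

## Disk Usage Function (example custom check)
diskusage() {
ROOT_USAGE=`df -hP / | awk 'NR==2 {print $5}'`
echo "RootDiskUsage = $ROOT_USAGE" >> $Report
}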

4. Script

Here it is

#!/bin/bash
#
# Script: Node Health Script 
# Created By: Sohail Riaz (sohaileo@gmail.com) www.sohailriaz.com
# Created On: 9th April 2013
# Detail: To Check Single Node Health. 
# HowTo Run: chmod +x nodecheck.sh; ./nodecheck.sh
# Checks: loadavg, memory, cpu, ethernet, infiniband, infiniband ipath test, cpu test, nfs mounts, nis, mcelog
# Functions: You can load/unload any function you need to run or not to run.
# License: GPL
#

## LoadAvg Function
loadavg() {
LoadAvg=`uptime | awk -F "load average:" '{print $2}' | cut -f 1 -d,`
echo "LoadAvg = $LoadAvg" >> $Report
}

## Memory Function
memory() {
TOTAL_MEM=`grep "MemTotal:" /proc/meminfo | awk '{msum+=($2/1024)/1024} END {printf "%.0f",msum}'`
FREE_MEM=`grep "MemFree:" /proc/meminfo | awk '{mfree+=($2/1024)/1024} END {printf "%.0f",mfree}'`
TOTAL_SWAP=`grep "SwapTotal:" /proc/meminfo | awk '{ssum+=($2/1024)/1024} END {printf "%.0f",ssum}'`
FREE_SWAP=`grep "SwapFree:" /proc/meminfo | awk '{sfree+=($2/1024)/1024} END {printf "%.0f",sfree}'`

echo "TotalMemory = $TOTAL_MEM GB ($FREE_MEM GB Free)" >> $Report
echo "TotalSwap = $TOTAL_SWAP GB ($FREE_SWAP GB Free)" >> $Report
}

## CPU Function
cpu() {
PROCESSOR=`grep processor /proc/cpuinfo | wc -l`
CPU_MODEL=`grep "model name" /proc/cpuinfo | head -n 1 | awk '{print $7 " " $8 " " $9}'`

echo "Processors = $PROCESSOR" >> $Report
echo "ProcessorModel = $CPU_MODEL" >> $Report
}

## Ethernet Function (eth1 for me, you can edit for yours)
ethernet() {
ETHER_SPEED=`ethtool eth1 | grep "Speed:" | awk '{print $2}'`

echo "EthernetSpeed = $ETHER_SPEED" >> $Report
}

## IB Function
ib() {
IB_STATE=`cat /sys/class/infiniband/*/ports/1/state | awk -F ":" '{print $2}'`
IB_PHYS_STATE=`cat /sys/class/infiniband/*/ports/1/phys_state | awk -F ":" '{print $2}'`
IB_RATE=`cat /sys/class/infiniband/*/ports/1/rate`

echo "IB_STATE = $IB_STATE" >> $Report
echo "IBLink = $IB_PHYS_STATE" >> $Report
echo "IBRate = $IB_RATE" >> $Report
}

## IB Test Function
ibtest() {
IB_TEST=`ipath_pkt_test -B | awk -F ":" '{print $2}'`
echo "IPathTest = $IB_TEST" >> $Report
}

## NFS Mounts Function
nfs() {
NFS_MOUNTS=`mount -t nfs,panfs,gpfs | wc -l`

echo "NFS_MOUNTS = $NFS_MOUNTS" >> $Report
}

## NIS Function
nis() {
NIS_TEST=`ypwhich`

echo "NIS_SERVER = $NIS_TEST" >> $Report
}

## MCELog Test Function
mcelog() {
MCELog=`if [ -s /var/log/mcelog ]; then echo "Check MCELog"; else echo "No MCELog"; fi`

echo "MCE Log = $MCELog" >> $Report
}

### MAIN SCRIPT

## Get Node Name
Hostname=`hostname -s`
touch ./$Hostname-checks.txt
Report=./$Hostname-checks.txt
echo " " > $Report
echo "Node = ${Hostname}" >> $Report
echo "----------------" >> $Report

## Get Cluster Name
Cluster=`echo $Hostname | cut -c1-4`

## Call Function
loadavg
memory
cpu
ethernet
ib
ibtest
nfs
nis
mcelog

## Generate Report
echo " " >> $Report
cat $Report


To run the script, execute the following commands.

chmod +x nodecheck.sh
./nodecheck.sh

 5. Enhancement

You can add, edit or delete any function inside the script to meet your needs, but please do share your edits so we can improve the script with more checks.

How To Enable NTFS Support in CentOS 6.3
http://www.sohailriaz.com/how-to-enable-ntfs-support-in-centos-6-3/
Sun, 20 Jan 2013

In this howto I will describe how to enable NTFS support in CentOS 6.3. By default CentOS 6.x does not come with NTFS support for mounting NTFS partitions, whether on hard disks or USB drives. Fedora provides the EPEL repository for Red Hat Enterprise Linux. EPEL (Extra Packages for Enterprise Linux) is a volunteer-based community effort from the Fedora project to create a repository of high-quality add-on packages that complement Red Hat Enterprise Linux (RHEL) and its compatible spinoffs, such as CentOS and Scientific Linux. The ntfs-3g NTFS driver is available through the EPEL repository.

1) Preparation

Enable the EPEL repository using the following command.

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

2) Install NTFS Drivers.

yum -y install ntfs-3g

The above command installs the ntfs-3g package, which brings NTFS support to your CentOS 6.3 installation. Just plug in your NTFS USB drive, or use the mount command to mount NTFS hard drive partitions.
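
For example, to mount an NTFS partition manually (the device name below is only an example; check the output of blkid or dmesg for yours):

mkdir -p /mnt/ntfs
mount -t ntfs-3g /dev/sdb1 /mnt/ntfs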

How To Install ATI Radeon Drivers in Fedora 18
http://www.sohailriaz.com/how-to-install-ati-radeon-drivers-in-fedora-18/
Sun, 20 Jan 2013

In this howto I will describe how to install the ATI Radeon drivers in Fedora 18. Fedora 18 is freshly out and the latest ATI driver, version 13.1, does not support it. I also checked the older versions, 12.8 and 12.6, but none of them install on Fedora 18. In the future ATI will provide proper drivers for Fedora 18, but for now you can install the akmod-catalyst beta driver from the RPM Fusion repository.

1) Preparation.

After installing from the Live image to the hard drive, first update your kernel and then install the Development Tools group to provide all the necessary build tools for your system.

yum -y update kernel
yum -y groupinstall "Development Tools"

We will need RPM Fusion for the driver installation, so enable the RPM Fusion repositories for Fedora 18 using the following command.

rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-18.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-18.noarch.rpm

2) Installation.

Install the ATI driver from the RPM Fusion repository.

yum -y install akmod-catalyst

The above command installs akmod-catalyst with all the dependencies it requires. After this, run the following command to create a new initial RAM filesystem with the updated ATI beta driver.

new-kernel-pkg --kernel-args=nomodeset --mkinitrd --dracut --update $(rpm -q --queryformat="%{version}-%{release}.%{arch}\n" kernel | tail -n 1)

Reconfigure Xorg using aticonfig to use the ATI driver.

aticonfig --initial -f

You are all set; just reboot and enjoy your new Fedora 18 running the ATI graphics driver. Don't be alarmed by the "AMD Testing use only" watermark at the bottom right of your screen: it is a beta driver, and ATI will soon update its drivers for Fedora 18.
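
After rebooting you can confirm that the proprietary fglrx kernel module is actually loaded (an empty result means the system fell back to the open source driver):

lsmod | grep fglrx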
