Installing Atomic Secured Linux (ASL) on DirectAdmin and on XenServer 6.2

Had to install this for a client and ran into some “gotchas”, so I thought I would make a post about it just in case other people have the same questions.  I went to the IRC chat to get some answers and got a lot more, which I’ll mention here.

One important thing to note is the requirements of ASL.  DA uses MySQL RPMs from mysql.com, which are supported by ASL.  Note that as of this moment they aren’t supporting MySQL 5.6 yet, so I went with MySQL 5.5.  Even though the DA MySQL is supported by ASL, I prefer to use the free repo by Atomic (called atomic; the paid one is called ASL) because of a promised release of a compatible MariaDB 5.5.32.  As of now, MariaDB isn’t compatible with OSSEC yet, so it’s recommended to just use MySQL first until a month or two down the line.  Of course, all this is unnecessary if you aren’t interested in it.  I just want to move to MariaDB and leave MySQL in the dust (eventually).

So we start off with a fresh DirectAdmin install on CentOS 6 64bit.

Here are the steps.

1.) If you are interested in using the free atomic repo and replacing DA’s MySQL, do the following (if not, just skip this step).

First modify /etc/yum.conf, find the “exclude=” line, and remove “mysql” and “MySQL” from it; otherwise you won’t be able to update.
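For example, the line might go from something like this (yours will list other packages too; leave those alone):

exclude=apache* mysql* MySQL* php*

to this:

exclude=apache* php*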

install the atomic repo

wget -q -O - https://www.atomicorp.com/installers/atomic | sh

install new mysql binaries

yum install mysql mysql-server

Note that the original MySQL-shared package should not be removed, as DA uses some of the shared libraries from it.  Just keep it there.
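A quick sanity check that the Atomic packages (5.5, not 5.6) are the ones that actually got pulled in:

rpm -qa | grep -i mysql
mysql --version

The first should list mysql and mysql-server from the atomic repo (plus the old MySQL-shared), and the second should report a 5.5.x version.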

2.) Install ASL.  I did find a bug which there is an easy solution for.  During the installation, even though ASL detected a DirectAdmin environment, it ends up using the paths of cPanel’s Apache and not DA’s default paths.  Because of this, mod_evasive fails.  Even though you do see some scary messages, ASL fixes the paths afterward, which makes it possible to run ASL.  But for a perfect install, putting the symlinks in ahead of time does the trick.

ln -s /usr/lib/apache /etc/httpd/modules
ln -s /var/log/httpd /etc/httpd/logs

Note that this is listed as a bug and will probably be fixed very soon, so creating these symlinks may be unnecessary before long.
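To double-check that the links ended up pointing where ASL expects, a quick look doesn’t hurt:

ls -l /etc/httpd/modules /etc/httpd/logs

Both should show up as symlinks to /usr/lib/apache and /var/log/httpd respectively.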

Install ASL

wget -q -O - https://www.atomicorp.com/installers/asl | sh

Just follow through and everything should work as normal.  Don’t choose to install the tortix-xen kernel if you’re using 64bit CentOS running on XenServer 6.2. (Note that this is a problem that seems to be exclusive to XenServer only.  Other Xen platforms like Amazon work fine with this kernel.)

Note that, since I’m using Xen, I have to use the tortix-xen repo in order to install the kernel.  With the normal ASL kernel, the VPS will not be able to boot.  One bad thing I found on XenServer 6.2 is that the 64bit version of the tortix-xen kernel does not work and fails with the following error in the log: “kernel image too large: Invalid kernel”.  I looked into that but wasn’t able to find anything substantial to resolve the issue.  I’ll be bothering the awesome guys at ASL about this problem though and chipping away at it.

FYI, the 32bit PAE tortix-xen kernel does work fine though.

For those interested, you definitely should give ASL a try, especially since there’s a solid 30-day trial.  It’s an interesting product and does seem to work well.  My initial impression is good, since it has a nice way of presenting all the data and the firewall seems pretty easy to handle. I will be giving it some testing and we’ll see how this goes.

Installing Xencloud 1.6 / Citrix Xenserver 6.1 on Onboard LSI 2208 ROC and Other Megaraid Cards With Same ROC.

We recently purchased some new motherboards with the onboard LSI 2208 (which is fantastic, by the way), but I ran into a roadblock where XenCloud wasn’t able to see the storage array. I’ve run into a similar problem in the past with the 3ware 9750, but that time 3ware released a driver CD for XenServer so I didn’t have to do anything drastic. The LSI 2208 turns out to be used as an OEM part in many different server brands as well (http://www.servethehome.com/lsi-sas-2208-raid-controller-information-listing), so this should work fine for all of them too. The problem is that even though the installer CD does have the megaraid_sas.ko file inside it, it isn’t the latest version and not one that supports the latest cards. LSI does provide driver disks for Citrix XenServer, but the latest they have is for 6.0.

As I mentioned, there are many ways to add the megaraid driver during the install. One of them is to add the driver CD during the install by hitting F9 (if LSI provides it). Another is to use the RPMs provided by LSI. Some of the other, more involved methods are mentioned here (http://support.citrix.com/article/CTX116379).

Initially I looked on the web for some instructions but couldn’t find any. I decided to take a more complicated route: replace the megaraid_sas.ko file in the installer CD with a new one compiled against the XenCloud 1.6/XenServer 6.1 kernel. That way I only need one installer CD. Since XenServer 6.1 and XenCloud 1.6 use the same kernel, I could use the DDK provided by Citrix to compile the new driver.

First get the DDK file at citrix.com

It’s under “XenServer Support Components” and labeled “DDK (Driver Development Kit)”.

After you download the file, look at the different ways to import it. Of course, using XenCenter is the easiest way:

http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/supplemental_pack_ddk.html#id872371

The second step is to download the source from LSI. For example, I just used this link (http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9260-8i.aspx) and, under drivers, grabbed the Linux 5.4 file. It comes with all the drivers, including precompiled Citrix XenServer 5.5 and 6.0 drivers. After you unzip the file, look for the dkms-2 folder. You need the file in there; the one in my download was megaraid_sas-v06.504.01.00-2.dkms.tar.gz

Now back to the DDK.

After importing it into a host and booting the DDK virtual machine, the DDK will ask you to supply a root password for the VM. Make sure to add a virtual interface and then give it a public IP that can get out.

Now scp the dkms tar.gz file above over to the DDK VM. After you untar it, you will see 2 RPM files and 2 text files. Just ignore the dkms-2.0.21.1-1.noarch.rpm file. I didn’t want to deal with the dependencies, so I just used RPMforge, which installed a newer dkms version, and it’s one that works for me as well.

rpm -ivh http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
yum install dkms

Next install the megaraid dkms rpm

rpm -ivh megaraid_sas-v06.504.01.00-2dkms.noarch.rpm

The following is what you should see after installing the rpm.

[root@apricot ~]# rpm -ivh megaraid_sas-v06.504.01.00-2dkms.noarch.rpm 
Preparing...                ########################################### [100%]
   1:megaraid_sas           ########################################### [100%]

Creating symlink /var/lib/dkms/megaraid_sas/v06.504.01.00/source ->
                 /usr/src/megaraid_sas-v06.504.01.00

DKMS: add Completed.

Kernel preparation unnecessary for this kernel.  Skipping...

Building module:
cleaning build area....
make KERNELRELEASE=2.6.32.43-0.4.1.xs1.6.10.734.170748xen -C /lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/build SUBDIRS=/var/lib/dkms/megaraid_sas/v06.504.01.00/build modules.....
cleaning build area....

DKMS: build Completed.

megaraid_sas.ko:
Running module version sanity check.
 - Original module
   - Found /lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/kernel/drivers/scsi/megaraid//megaraid_sas.ko
   - Storing in /var/lib/dkms/megaraid_sas/original_module/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/i686/
   - Archiving for uninstallation purposes
 - Installation
   - Installing to /lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/kernel/drivers/scsi/megaraid//
Adding any weak-modules

created /etc/modprobe.d/megaraid_sas.conf.
/etc/modprobe.d/megaraid_sas.conf: added alias reference for 'megaraid_sas'
depmod....

Saving old initrd as /boot/initrd-2.6.32.43-0.4.1.xs1.6.10.734.170748xen_old.img
Making new initrd as /boot/initrd-2.6.32.43-0.4.1.xs1.6.10.734.170748xen.img
(If next boot fails, revert to the _old initrd image)
mkinitrd....

DKMS: install Completed.

The next step is to retrieve the new megaraid_sas.ko; from the message above, it’s located in

Installing to /lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/kernel/drivers/scsi/megaraid//

Then get the XenCloud ISO file. The latest as of this date is XCP-1.6-61809c.iso.

Open it and locate the file install.img, which should be at the root of the CD. Copy it out of the installer CD and get ready to work on it.
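If you’d rather do it from the command line than with an archive tool, a loop mount works just as well (the mount point is arbitrary):

mkdir /mnt/xcp
mount -o loop XCP-1.6-61809c.iso /mnt/xcp
cp /mnt/xcp/install.img .
umount /mnt/xcp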

If you’re using Ubuntu, I recommend su’ing to the root user to avoid annoying permission problems.

Now we will unpack install.img and replace the megaraid file in it.

Create a folder, move the file inside it (as things will get messy), and then rename it:

mkdir testing
mv install.img testing
cd testing
mv install.img install.gz

Unzip and extract the file. This will produce many folders:

gunzip install.gz
cpio -id < install

Copy the megaraid_sas.ko you got from the DDK into the megaraid directory of the extracted install.img tree:

cp megaraid_sas.ko lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/kernel/drivers/scsi/megaraid
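Before repacking, it’s worth a quick check that the module you copied in was really built for the installer’s kernel; the vermagic string should match the kernel version in the path:

modinfo lib/modules/2.6.32.43-0.4.1.xs1.6.10.734.170748xen/kernel/drivers/scsi/megaraid/megaraid_sas.ko | grep vermagic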

Now repack all the folders back into their original form:

find init bin dev etc home lib opt proc root sbin sdk.answerfile share sys tmp usr var | cpio --create --format='newc' > install
gzip install
mv install.gz install.img

Now use a tool like ISO Master (available in most distributions) to replace the original install.img in the installer ISO. Burn the new ISO file, and you will be able to see the RAID array during the XenCloud/XenServer install.
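If you’d rather script this last step than click through ISO Master, genisoimage can rebuild a bootable ISO. This is only a sketch and assumes the isolinux files sit under boot/isolinux on this disc (check the extracted ISO tree first); run it from a directory holding the full ISO contents with the new install.img in place:

genisoimage -o XCP-1.6-custom.iso -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -V "XCP-1.6" .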

For those lazy people, I’ve uploaded my install.img which you can use to make the iso. Of course I am not guaranteeing anything by putting it up ;)

http://www.darvil.com/files/install.img

On a side note, MegaRAID Storage Manager is pretty damn cool.

Migrating From Hypervm (Opensource Xen) to Citrix Xenserver

Note: this is a really old draft I just published, so some of the stuff is dated, but I figure it might be useful for some people.

1st step, get VM ready for migration

ssh to the VM you’re migrating.

 yum install grub kernel-xen 
 cd /boot/grub
 wget http://files.viviotech.net/migrate/menu.lst

 ln -s /boot/grub/menu.lst /etc/grub.conf

 ln -s /boot/grub/menu.lst /boot/grub/grub.conf

 cd /etc
 wget http://files.viviotech.net/migrate/modprobe.conf

 mv /etc/fstab /etc/fstabold
 cd /etc
 wget http://files.viviotech.net/migrate/fstab

NOTE: If you want to boot up the HyperVM VM again, all you have to do is put the old fstab back in and it’ll boot back up.

2nd step is to convert the LVM to img

You can either dd-clone the partition to a disk image (method 1) OR manually dd a blank image, format it, and use cp -rp (method 2). The suggestion is to use method 1 for all VMs that are 20 gigs. If it’s 40 gigs, method 1 is suggested if usage is over 15-20 gigs; otherwise method 2 is faster. Obviously for those with 80 gigs, method 2 is much faster unless the VM is using a lot of space. The problem is that in some cases (i.e. a heavily loaded platform) cp can take a substantial amount of time. Your mileage may vary. Before you do anything, MAKE SURE THE VM IS SHUT DOWN.

Method 1

 dd if=/dev/VolGroup00/migrationtesting_rootimg of=/root/migrationtesting.img bs=4096 conv=noerror

Method 2

Create a 10 gig image file. Change this according to how much space the customer is using (i.e. a little more than what is being used). The seek in the command below writes a single 1 MB block at the 10 GB mark, so the sparse file is created almost instantly.

 dd if=/dev/zero of=/root/migrationtesting.img bs=1M count=1 seek=10240

#format it (-F lets mkfs run on a regular file without prompting)

 mkfs.ext3 -F migrationtesting.img

#Mount it

 mkdir /root/migrationtesting
 mount -o loop migrationtesting.img /root/migrationtesting

#Copy the mounted files over from hyperVM and unmount afterward

 cp -rp /home/xen/migrationtesting.vm/mnt/* /root/migrationtesting
 umount /root/migrationtesting

3rd step (Convert the image file to an xva file)

NOTE: Use the NFS mount ONLY if you don’t have enough local space for the converted file. I learned that the conversion takes quite a long time, so I believe creating the xva file locally and then copying it to NFS is actually faster. This is probably for the same reason that exporting from XenServer takes 3 times as long as importing. Grab Citrix’s Python file to do the conversion; I saved it locally here just in case.

 wget http://files.viviotech.net/migrate/xva.py

#run the file and dump the converted file to the nfs mount. -n is the name that appears when you are done importing. The converted file will not work unless --is-pv is there.

 python /root/xva.py -n migrationtesting --is-pv --disk /root/migrationtesting.img --filename=/mnt/migrateNFS/tim/migrationtesting.xva

4th step (Import, resize disk, add swap, tools)

#Import VM in the platform.

 xe vm-import filename=/mnt/migrateNFS/tim/migrationtesting.xva

#If method 2 was used then you'll have to resize the HD. Just increase it back to the original size before booting it up and run

 resize2fs /dev/xvda
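#If resize2fs refuses because the filesystem hasn't been checked recently (common when resizing offline), a forced fsck first sorts it out:

 e2fsck -f /dev/xvda
 resize2fs /dev/xvda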

#Also add a new disk through XenCenter for the swap. It'll probably be xvdb. Run fdisk -l to make sure it's xvdb
 fdisk -l

#now create and enable swap. You shouldn't need to check /etc/fstab because I've already made xvdb the swap in there.

 mkswap /dev/xvdb
 swapon /dev/xvdb
 free -m
#Double check /etc/fstab and change accordingly IF the new drive isn't xvdb.
#I noticed that I wasn't able to mount the xentools. I've tarred up the xentools files for easy access (5.6 SP2)

 wget http://files.viviotech.net/migrate/Linux.tar
 tar -xvvf Linux.tar
 cd Linux
 ./install.sh

menu.lst

Just for reference, menu.lst. Obviously the latest kernel at this time is 2.6.18-238.19.1.el5xen; if that changes, this file will have to be edited with the new kernel.

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You do not have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /, eg.
#          root (hd0,0)
#          kernel /boot/vmlinuz-version ro root=/dev/sda1
#          initrd /boot/initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-238.19.1.el5xen)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-238.19.1.el5xen ro root=/dev/xvda
        initrd /boot/initrd-2.6.18-238.19.1.el5xen.img

fstab

 /dev/xvda               /                       ext3    defaults,usrquota,grpquota        1 1
 devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
 tmpfs                   /dev/shm                tmpfs   defaults        0 0
 proc                    /proc                   proc    defaults        0 0
 sysfs                   /sys                    sysfs   defaults        0 0
 /dev/xvdb               swap                    swap    defaults        0 0

modprobe.conf

 alias eth0 xennet
 alias eth1 xennet
 alias scsi_hostadapter xenblk

SW Switched to Nginx Again and With Great Results This Time!

Referencing this

http://jk.scanmon.com/en/wp/2010/02/installing-nginx-php-fpm-using-centos-alt-ru-epel-repo-in-centos.html

rpm -ivh http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

and adding this REPO since mine is 64bit ;)

[rusia-repo]
name=CentOS-$releasever . rusia packages for $basearch
#baseurl=file:///raid/RH/CentOS/$releasever/local/$basearch
baseurl=http://centos.alt.ru/pub/repository/centos/5/x86_64/
enabled=1
gpgcheck=0
protect=1

yum install nginx php-fpm

nginx config = /etc/nginx/nginx.conf
php-fpm config = /etc/php-fpm.conf

Used the same conf file from last time and it worked great.

I wasn’t able to log in to the IPB admin forum, but I discovered the problem was with php-xml.

yum install php-xml did the trick.
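Keep in mind that php-fpm only loads extensions at startup, so if a newly installed module doesn’t seem to take effect, restart it:

service php-fpm restart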

I’m still having issues with 123flashchat with integrated URL. “ERROR – AsyncAuthUrl catched failed error: —”

Still working on that.

Well, actually I fixed it.. turned out to be just a resolver issue. I installed a newer version of flashchat which gave better error messages. Just modified /etc/resolv.conf and we’re back in business, w00t!

The forum is running great though and I’m happy with the switch. Load is really low and mysqltuner returns happy, all-green news ;)

Also installed xcache. Referenced http://www.newmediaist.com/n/installing-nginx-mysql-php-fpm-xcache-centos-53-howto

yum install puts everything in /usr/bin so we’re good to go on defaults.

Make sure to copy the admin folder from the xcache source tree and put it in a web-accessible folder. Don’t forget that the admin password has to be stored as an MD5 hash. Also, I increased the default memory from 60 megs.
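For reference, the relevant xcache settings end up looking roughly like this (a sketch only; the user, size, and hash are yours to fill in):

[xcache.admin]
xcache.admin.user = "admin"
xcache.admin.pass = "md5 hash of your password"

[xcache]
xcache.size = 128M

You can generate the hash with: echo -n "yourpassword" | md5sum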

3ware Drives DCB & U? & Others

I’ve run into issues where drives that used to be part of 3ware RAID arrays just don’t want to cooperate. Even running zero writes on the drives using the Seagate manager does not clean them out.

Finally I found a command that does the job. Just run this command on a drive that refuses to work with you. Now you’re back to being the master!

dd if=/dev/zero of=/dev/sde conv=notrunc
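Zeroing a whole drive takes a while and dd is silent by default. From another terminal you can ask GNU dd for a progress report:

kill -USR1 $(pidof dd)

dd will then print how many bytes it has copied so far.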

Cancelled and Renewed

So I cancelled a server I had for a long time last month.. but less than a month later, and by that I mean 2 days ago, I decided to rent another server. Let’s see how this one pans out. This one is for my new Subsonic project.

Have to Create Partitions for XenServer 5 on Sdb

fdisk /dev/sdb

press m for help

n (make a new partition)

primary partition

1

1 to 499

p (to show the new partition)

repeat this for all 3 partitions

Now set each partition’s type to fd (Linux raid autodetect).

Press l to see the list of types (look for fd)

press t to change the ID of the partition

enter the partition number, then type fd when asked for the hex code

repeat for all 3 partitions

Turn on the bootable flag for sdb1 (a scripted version of the whole session is sketched after these steps)

fdisk /dev/sdb

a

1

w (don’t forget to write the changes to disk when you’re done)
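For reference, here’s the whole session in scripted form. This is only a sketch: the cylinder ranges for partitions 2 and 3 are my assumption (three equal 499-cylinder slices), so double-check against your own disk before running it.

fdisk /dev/sdb <<'EOF'
n
p
1
1
499
n
p
2
500
998
n
p
3
999
1497
t
1
fd
t
2
fd
t
3
fd
a
1
w
EOF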

Installing Open Vswitch Killed My Xenserver Local NFS Mount

After rpm-installing the 2 RPM files (0.90.7) and rebooting the server, I got the disconnected error. Trying to repair the NFS storage got me the error “the NFS version is unsupported”. I removed the RPMs and restarted again, and the same problem still existed.

I found this thread and tried 2 things. http://community.citrix.com/display/ocb/2009/01/26/How+to+configure+an+NFS+based+ISO+SR+using+local+storage+on+a+XenServer5+Host

First I shut down the firewall, and then tried this fix quoted from the thread:

“Works fine for me after I removed the “-l” from /etc/sysconfig/portmap.

   old: PMAP_ARGS="-l"
   new: PMAP_ARGS=

and after that restarted the portmap and nfs services.

   service portmap restart
   service nfs restart

Xen Server and Xen Center version 5.5.0 (build 15119). Thanks!”

After that I was able to attach the local NFS storage. I reinstalled the RPMs and tried again. Now it boots up with zero issues, so I can play with Open vSwitch (whenever I learn how to use it).

(This is actually an old draft that I just published. Today I’m using Open vSwitch 1.1.)

Sarn IPB Forum Fixing Post Table

using phpmyadmin

Found block that points outside data file at 37462..

http://www.phpbb.com/community/viewtopic.php?t=259967

forum.ibf_posts repair info

Can’t find file: ‘ibf_profile_portal_views’ (errno..

forum.ibf_topic_views repair Error Can’t find file: ‘ibf_topic_views’ (errno: 2)

forum.ibf_topic_views repair error Corrupt

mysql> use forum;
mysql> REPAIR TABLE ibf_posts USE_FRM;
+-----------------+--------+----------+------------------------------------------+
| Table           | Op     | Msg_type | Msg_text                                 |
+-----------------+--------+----------+------------------------------------------+
| forum.ibf_posts | repair | warning  | Number of rows changed from 0 to 613244  |
| forum.ibf_posts | repair | status   | OK                                       |
+-----------------+--------+----------+------------------------------------------+
2 rows in set (4 min 16.35 sec)
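The other broken tables from the errors above can presumably be repaired the same way; USE_FRM rebuilds the table from its .frm definition, so back things up first. For example:

mysql> REPAIR TABLE ibf_topic_views USE_FRM;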

Issues With VM Migration (VG Remove and Readd)

Somebody set up the VG wrong. Wasn’t me ;)

lvm pvscan

  PV /dev/sdf1   VG VolGroup04   lvm2 [298.09 GB / 298.09 GB free]
  PV /dev/sde1   VG VolGroup03   lvm2 [298.09 GB / 217.96 GB free]
  PV /dev/sdd1   VG VolGroup02   lvm2 [298.06 GB / 197.91 GB free]
  PV /dev/sdc1   VG VolGroup01   lvm2 [465.75 GB / 3.03 GB free]
  PV /dev/sda2   VG VolGroup00   lvm2 [465.66 GB / 3.44 GB free]
  Total: 5 [1.78 TB] / in use: 5 [1.78 TB] / in no VG: 0 [0   ]

VMs cannot migrate from VolGroup01 and VolGroup00 as there is no free space, so snapshots can’t be created. But VolGroup04 is now clear; I’ve migrated all the logical volumes (VMs) off of VG04.
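Before dropping the group, a quick check that it really is empty doesn’t hurt:

lvm lvs VolGroup04

This should come back with no logical volumes listed.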

Deactivate the volume group:

vgchange -a n VolGroup04

Now you actually remove the volume group:

vgremove VolGroup04

Note that VolGroup04 was on sdf:

  PV /dev/sdf1   VG VolGroup04   lvm2 [298.09 GB / 298.09 GB free]

fdisk -l

Disk /dev/sdf: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1       38913   312568641   83  Linux

Let’s add this PV to VolGroup00:

lvm vgextend VolGroup00 /dev/sdf1

Volume group “VolGroup00” successfully extended

Let’s now check the total space:

lvm vgdisplay

  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  40
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                27
  Open LV               27
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               763.72 GB

Back in business. Now the migrations will work.