Installing Suricata IDS from source – CentOS 6.4

This post was updated on 31/05/2013. I’ve tested the installation steps with CentOS 6.4; they also work with CentOS 6.3.

Today I’ll compile Suricata on a clean CentOS 6.4 server. I’m not compiling it with PF_RING support (which would increase performance). Ok, hands on:

  1. You’ll need the EPEL repository; see step 2 of this post.
  2. I’ll install the Development Tools group and some packages needed by Suricata.
    yum groupinstall "Development Tools"
    yum install pcre-devel libyaml-devel libnet-devel libpcap-devel libcap-ng-devel file-devel zlib-devel
  3. Download Suricata from its web page and move the tar.gz file to a suitable directory; in my case I’ve chosen the /opt directory.
  4. Uncompress it (I’m compiling version 1.4.3) and configure the compilation. I’ve set some prefixes and directories and added --disable-gccmarch-native as I was having problems (Illegal Instruction) when executing Suricata on my QEMU/KVM virtual machine (the post that helped me).
    tar xvfz suricata-1.4.3.tar.gz
    cd suricata-1.4.3
    ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ --disable-gccmarch-native
    
  5. Ok. Now let’s run make and make install; if you also want Suricata to create a config file and download rules from Emerging Threats, use make install-full.
    make
    make install
    make install-full
    ldconfig
  6. And finally let’s execute the suricata command to check the installed version (a quick test run is sketched after the list).
    [root@sherlock ~]# suricata -V
    This is Suricata version 1.4.3 RELEASE
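
If you went for make install-full, a minimal first run could look like this sketch. I’m assuming eth0 is the interface you want to monitor; the config path follows from the sysconfdir I passed to ./configure:

    [root@sherlock ~]# suricata -c /etc/suricata/suricata.yaml -i eth0

And in another terminal (with localstatedir=/var/ the alerts should show up under /var/log/suricata):

    [root@sherlock ~]# tail -f /var/log/suricata/fast.log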

I read on ntop’s web page that virtual PF_RING would improve performance dramatically in virtualization environments like KVM, but right now I can’t afford the fee (if you want to donate, let me know :-D), so I’ll just try it for a few minutes, as they suggest, for evaluation purposes.

As always I appreciate any comments to improve the quality of this post. Enjoy!


Resizing a QEMU KVM Linux image using virt-resize in CentOS 6.4

Hi,
today my colleague Geoff told me that he was trying to resize an OpenNebula image. As I had never done that before, I started to review the OpenNebula documentation and found this email on the OpenNebula mailing list (please subscribe, it’s really useful!). It seems that resizing is not yet supported (maybe I’m wrong and there’s another solution!), but thanks to the information in that thread I found virt-resize.

This is virt-resize’s description: “a tool which can resize a virtual machine disk, making it larger or smaller overall, and resizing or deleting any partitions contained within”. Looks promising! The virt-resize information page is full of examples, so it has been easy to start using it.
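
As a taste of the basic workflow: virt-resize always copies from an input disk into a new, bigger output disk while expanding the partition you choose. A minimal sketch with placeholder file names (olddisk/newdisk) would be:

    truncate -r olddisk newdisk
    truncate -s +10G newdisk
    virt-resize --expand /dev/sda2 olddisk newdisk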

Ok, this is what I’ve tested. Please proceed with caution: I’m not responsible for any damage caused by following these steps, and try to read the documentation first; I’ve just used that information :-D.

  1. I’ve installed the libguestfs-tools package on my OpenNebula host running CentOS: yum install libguestfs-tools
  2. I halted the VM, as I want to increase the size of its main disk image (the VM can’t be running!)
  3. Using Sunstone I’ve found where the disk is located inside the datastore: Virtual Machines -> Select the VM -> Template tab -> Disk section (e.g: /var/lib/one/datastores/1/1a8a07d0382566a89afd96a134eb04cf)
  4. Now I’ve inspected the image file with the virt-filesystems command.
    [root@haddock ~]# cd /var/lib/one/datastores/1/
    [root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
    Name Type MBR Size Parent
    /dev/sda1 partition 83 500M /dev/sda
    /dev/sda2 partition 8e 20G /dev/sda
    /dev/sda device - 20G -
  5. I want to have a 30GB size for my /dev/sda2 partition, so I create a file called newdisk with the truncate command:
    truncate -s 30G newdisk
  6. I create a backup of the OpenNebula image: cp 1a8a07d0382566a89afd96a134eb04cf tmp
  7. Ok! This is very important: I have a logical volume called /dev/vg_moriarty/lv_root, so today I’m not only resizing the /dev/sda2 partition, I also want the logical volume to use the new extra space:
    virt-resize tmp newdisk --expand /dev/sda2 --LV-expand /dev/mapper/vg_moriarty-lv_root
  8. This is a screenshot showing the resizing progress.
    [screenshot: resizing progress]
  9. Once the resizing process is finished I replace the original image with the newdisk file and change the permissions:
    mv newdisk 1a8a07d0382566a89afd96a134eb04cf
    chown oneadmin:oneadmin 1a8a07d0382566a89afd96a134eb04cf
  10. I check if the partitions have been modified before running the VM. It seems that /dev/sda2 is now 30GB.
    [root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
    Name Type MBR Size Parent
    /dev/sda1 partition 83 500M /dev/sda
    /dev/sda2 partition 8e 30G /dev/sda
    /dev/sda device - 30G -
  11. Ok. I start the VM to check that it boots and the logical volume is fine (28GB!). A quick in-guest check is sketched after the list.
    [screenshot: moriarty_after]
  12. It works!! Thanks to OpenNebula’s mailing list (Simon Boulet, Ricardo Duarte, Ruben S. Montero…) and the libguestfs creators, I’ve found a way to resize a Linux image.
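
For completeness, this is the kind of quick check I run inside the guest once it has booted (a sketch, assuming the root filesystem lives on the lv_root logical volume):

    lvs vg_moriarty
    df -h /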

If you know a better way, or OpenNebula already has a way to modify the size, please let me know; I want this post to be useful for the community.

Enjoy!

P.S: Thanks Geoff!

Vyatta quick commands

Here are some commands that you may find useful when using Vyatta (a full configuration session is sketched after the list):

  • Set a Default Gateway: set system gateway-address x.x.x.x where x.x.x.x is an IPv4 address.
  • Set a hostname: set system host-name a_host_name
  • Set the domain name: set system domain-name a_domain_name
  • Change password for vyatta user: set system login user vyatta authentication plaintext-password a_new_password
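
All of these are configuration-mode commands, so a full session would look roughly like this (gateway, hostname, domain and password values are just examples):

    configure
    set system gateway-address 192.168.1.1
    set system host-name router01
    set system domain-name example.local
    set system login user vyatta authentication plaintext-password a_new_password
    commit
    save
    exit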

Have a nice day!

Tip: Installing Windows Server 2012 (Evaluation) in OpenNebula 4 with KVM

Hi,
I’d like to evaluate Windows Server 2012, so I’ve decided to create a VM in my OpenNebula 4 lab. I’ve read on this page that I’d need signed VirtIO drivers for Windows so the installer can detect the virtual hard disk which will store the OS.

I’ve created a template with the following storage (DISKS):

  1. A CDROM (PREFIX hd) with the Windows Server 2012 ISO
  2. A CDROM (PREFIX hd) with the Windows stable drivers ISO from Fedora
  3. An OS HDD virtual disk: DRIVER: raw and PREFIX: vd (a rough template equivalent is sketched after the list)
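
For reference, the resulting DISK section looks roughly like this in OpenNebula template syntax (the image names are mine, and I registered the two ISOs and the empty OS datablock as images beforehand; check what the Sunstone wizard actually generates in your version):

    DISK = [ IMAGE = "win2012_install_iso", DEV_PREFIX = "hd" ]
    DISK = [ IMAGE = "virtio_drivers_iso", DEV_PREFIX = "hd" ]
    DISK = [ IMAGE = "win2012_os", DRIVER = "raw", DEV_PREFIX = "vd" ]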

After instantiating the template, Windows starts the installation. When Windows warns you that no disk is found you can load the driver from the second CDROM.

[screenshot: loading the VirtIO driver from the second CDROM]

Then you have to browse the CDROM and select the Red Hat VirtIO SCSI controller (WLH/AMD64 folder). Now the virtual hard disk is detected and you can install the operating system.

[screenshot: selecting the Red Hat VirtIO SCSI controller]

Enjoy!

OpenNebula 4 – Upgrade to 4.0.1 in CentOS 6.4

OpenNebula has published a maintenance release for OpenNebula 4 Eagle. If, like me, you are running OpenNebula 4.0.0, these are the steps I’ve followed to upgrade (I’m using the SQLite configuration database):

  1. Download the tar.gz file from OpenNebula’s download page.
  2. Extract the rpm files: tar xvfz CentOS-6.4-opennebula-4.0.1-1.tar.gz
  3. CD to the directory: cd opennebula-4.0.1-1/
  4. Upgrade to the latest rpm versions: yum localinstall opennebula-common-4.0.1-1.x86_64.rpm opennebula-ruby-4.0.1-1.x86_64.rpm opennebula-4.0.1-1.x86_64.rpm opennebula-sunstone-4.0.1-1.x86_64.rpm opennebula-server-4.0.1-1.x86_64.rpm opennebula-node-kvm-4.0.1-1.x86_64.rpm
  5. Switch to oneadmin user: su oneadmin
  6. Upgrade the SQLite database as oneadmin: onedb upgrade -v --sqlite /var/lib/one/one.db
  7. Switch back to the root user and restart the OpenNebula services: service opennebula restart and service opennebula-sunstone restart (the full sequence is grouped after the list).
  8. Refresh your browser if you were working with Sunstone.
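
Put together, the database upgrade and the restart look like this on the shell (a sketch using the default SQLite paths; if onedb complains that oned is still running, stop the opennebula service first):

    su - oneadmin
    onedb upgrade -v --sqlite /var/lib/one/one.db
    exit
    service opennebula restart
    service opennebula-sunstone restart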

That’s all!

Installing Ozones Server in CentOS 6.4

Ok, as time goes by… I want to learn about more advanced OpenNebula topics. I’m going to start working with OpenNebula Zones (ozones), which will allow me to create a Virtual Data Center. As I have only one machine in my lab (I accept hardware donations to increase my lab’s potential 🙂 ), I will have only one zone to play with, but that’s better than nothing.

If you are interested in OpenNebula Zones and Virtual Data Centers, please read the official OpenNebula documentation on zones and VDCs first.

Disclaimer: the following configuration steps will help you run the ozones-server in a development environment. If you want to use the ozones-server in production, please check first how to protect your Apache server properly (e.g. disable unneeded modules).

Ok. These are the steps I’ve followed:

// Download the OpenNebula rpm packages 

#yum localinstall opennebula-common-4.0.0-1.x86_64.rpm
#yum localinstall opennebula-ruby-4.0.0-1.x86_64.rpm 
#yum localinstall opennebula-ozones-4.0.0-1.x86_64.rpm

// Install the Apache package
#yum install httpd

// Configure the service so it's started at boot
#chkconfig httpd on

// Let's add an iptables rule so HTTP traffic is allowed
#iptables -I INPUT -m tcp -p tcp --dport 80 -m state --state=NEW,ESTABLISHED,RELATED -j ACCEPT
#service iptables save

// Edit your /etc/httpd/conf/httpd.conf and change some default parameters like your 
// ServerName, ServerSignature Off... The rewrite and the http proxy modules are enabled 
// by default

// Let's create a configuration file e.g /etc/httpd/conf.d/ozones.conf where 
// reverse proxy directives are configured. Add these lines to the file:

ProxyPass /ozones/ http://localhost:6121/
ProxyPassReverse /ozones/ http://localhost:6121/
ProxyRequests Off

// Start your Apache server
#service httpd start

// If you have SELinux enabled, you must allow Apache to make network connections:
#setsebool -P httpd_can_network_connect 1

// Now let's prepare some things to start the ozones server
// Add a user:password line (e.g. ozonesadmin:ozonepassword) to a file and set permissions for
// the oneadmin user.
#echo ozonesadmin:ozonepassword > /var/lib/one/.one/ozones_auth
#chown oneadmin:oneadmin /var/lib/one/.one/ozones_auth

// OK!!! NOW USE THE ONEADMIN ACCOUNT
#su oneadmin

// The first time you start the ozones-server you must set at least the OZONES_AUTH env variable
// so the database is created with the right credentials. I'm using the default sqlite database
// If you want to change the port and ip address for the server or the database server edit the
// /etc/one/ozones-server.conf file

$export OZONES_AUTH=/var/lib/one/.one/ozones_auth
$export OZONES_URL="http://localhost:6121"

// Let's start and check if the ozones server is listening

$ /usr/bin/ozones-server start
$ netstat -ntap | grep 6121
  tcp 0 0 127.0.0.1:6121 0.0.0.0:* LISTEN 20203/ruby

Great. If the Apache proxy module works fine and the ozones-server is running, we will be able to log into the ozones GUI.

I’ve configured a proxy directive so that http://myservername/ozones/ is sent to http://localhost:6121, where the ozones-server listens by default. Note the trailing slash: if you use http://myservername/ozones it won’t work (CSS, JavaScript… will be missing). Remember that your user and password are the ones configured in the authentication file you’ve created (in my case located in /var/lib/one/.one/ozones_auth).
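
A quick way to test the proxy from the command line is to ask curl for the HTTP status code (myservername is whatever name your Apache answers to); a 200, or a redirect to the login page, means Apache and the ozones-server are talking to each other:

// Check the reverse proxy from the command line
$ curl -s -o /dev/null -w "%{http_code}\n" http://myservername/ozones/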

Finally some screenshots:

[screenshots: ozones_server_login, ozones_dashboard]

In a few days, I’ll play with the zones configuration.

Enjoy!

CentOS 6.4 – Contextualization using OpenNebula’s init scripts

Hi,
today I’m offering you a simple contextualization example in OpenNebula using the C12G Labs scripts. Contextualization is explained in the official documentation, so please read it.

In my example I have installed a CentOS 6.4 virtual machine using a netinstall ISO image, and then I’ve downloaded the current OpenNebula rpm files from the OpenNebula download page. After uncompressing the tar.gz file, I have installed the opennebula-context rpm package, which provides the contextualization scripts.

tar xvfz CentOS-6.4-opennebula-4.0.0-1.tar.gz 
cd opennebula-4.0.0-1/context/
yum localinstall opennebula-context-4.0.0-1.x86_64.rpm

The rpm package will create:

  • An init.d script called vmcontext
  • A directory, /etc/one-context.d, which contains the scripts that will configure the network interfaces, the DNS servers and the public SSH key (you can list the full package contents as shown below).
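
If you are curious about exactly which scripts land on the system, you can list the package contents:

rpm -ql opennebula-context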

OK. I’ve prepared a template with two network interfaces using Sunstone’s wizard. In the Context section I’ve checked “Add SSH contextualization” and pasted my public RSA key. This SSH contextualization will allow me to log in to the new virtual machine as root with my SSH key.

“Add Network contextualization” is also checked; it’ll create the network scripts for the NICs, using the last four octets of each NIC’s MAC address to set the IP address.

[screenshot: context_ssh]

And now, in the Custom variables section, I’ve added two variables that I deliberately left unconfigured when defining my virtual networks: one to set a default gateway for one of the NICs and one to set the DNS server for the virtual machine.

[screenshot: context_variables]
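
For reference, the CONTEXT section the wizard ends up generating looks roughly like this. This is only a sketch: the SSH key is truncated and the custom variable names (ETH0_GATEWAY and DNS) are my assumption of what the context scripts read, so check the contextualization documentation for the exact names your version expects:

CONTEXT = [
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "ssh-rsa AAAA…",
  ETH0_GATEWAY = "192.168.0.1",
  DNS = "192.168.0.2" ]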

Once I finish the template, I instantiate it to start a new virtual machine. When the VM has booted I check the NIC configuration, the default gateway, DNS and SSH and, et voilà, the contextualization scripts have configured everything automagically. How nice!

[screenshot: context_works]

There are two 169.254.0.0 lines in the netstat -rn output, probably because of the default network configuration for the NICs; nothing related to the contextualization scripts.
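
If those link-local routes bother you, they come from the zeroconf feature in CentOS and can be disabled; this is plain CentOS behaviour, nothing to do with the context package:

echo "NOZEROCONF=yes" >> /etc/sysconfig/network
service network restart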

I hope this helps you to understand why contextualization is a nice feature. Enjoy!