Installing Suricata IDS from source – CentOS 6.4

This post has been updated: 31/05/2013. I’ve tested the installation steps with CentOS 6.4; it also works with CentOS 6.3.

Today I’ll compile Suricata on a clean CentOS 6.4 server. I’m not compiling it with PF_RING support (which would increase performance). OK, hands on:

  1. You’ll need the EPEL repository; see step 2 of this post.
  2. I’ll install Development Tools group and some packages needed by Suricata.
    yum groupinstall "Development Tools"
    yum install pcre-devel libyaml-devel libnet-devel libpcap-devel libcap-ng-devel file-devel zlib-devel
  3. Download Suricata from its web page. Move the tar.gz file to a suitable directory, in my case I’ve chosen /opt directory.
  4. Uncompress it (I’m compiling version 1.4.3) and configure the compilation. I’ve set some prefixes and directories and added the --disable-gccmarch-native flag, as I was having problems (Illegal Instruction) when executing Suricata on my QEMU/KVM virtual machine (the post that helped me).
    tar xvfz suricata-1.4.3.tar.gz
    cd suricata-1.4.3
    ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ --disable-gccmarch-native
  5. OK. Now run make and make install; if you also want Suricata to create a config file and download the Emerging Threats rules, use make install-full.
    make
    make install
    make install-full
  6. And finally, let’s execute the suricata command.
    [root@sherlock ~]# suricata -V
    This is Suricata version 1.4.3 RELEASE
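If you used make install-full, Suricata already has the Emerging Threats rules. To check that detection works end to end, you can also drop a minimal rule of your own into a rules file referenced from suricata.yaml (the file name local.rules and the sid below are just my own example values):

```
alert icmp any any -> $HOME_NET any (msg:"LOCAL ICMP test rule"; sid:1000001; rev:1;)
```

Ping the machine and an alert with that message should show up in Suricata’s fast log.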

I read on ntop’s web page that the virtual PF_RING would improve performance dramatically in virtualization environments like KVM, but I have no money right now to pay the fee (if you want to donate, let me know :-D), so I’ll try it for a few minutes, as they suggest, for evaluation purposes.

As always I appreciate any comments to improve the quality of this post. Enjoy!


Resizing a QEMU KVM Linux image using virt-resize in CentOS 6.4

Today my colleague Geoff told me that he was trying to resize an OpenNebula image. As I had never done that before, I started to review the OpenNebula documentation and found this email in the OpenNebula mailing list (please subscribe, it’s really useful!). It seems that resizing is not yet supported (maybe I’m wrong and there’s another solution!), but thanks to that issue’s information I found virt-resize.

This is virt-resize’s description: “a tool which can resize a virtual machine disk, making it larger or smaller overall, and resizing or deleting any partitions contained within”. Looks promising! The virt-resize information page is full of examples, so it has been easy to start using it.

OK, this is what I’ve tested. Please remember to proceed with caution: I’m not responsible for any damage caused by following these steps, and try to read the documentation first; I’ve just used that info :-D.

  1. I’ve installed the libguestfs-tools package in my OpenNebula host running CentOS: yum install libguestfs-tools
  2. I halted one VM as I want to increase the size of its main disk image (the VM can’t be running!)
  3. Using Sunstone I’ve found where the disk is located inside the datastore: Virtual Machines -> Select the VM -> Template tab -> Disk section (e.g: /var/lib/one/datastores/1/1a8a07d0382566a89afd96a134eb04cf)
  4. Now I’ve inspected the image file with the virt-filesystems command.
    [root@haddock ~]# cd /var/lib/one/datastores/1/
    [root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
    Name       Type       MBR  Size  Parent
    /dev/sda1  partition  83   500M  /dev/sda
    /dev/sda2  partition  8e   20G   /dev/sda
    /dev/sda   device     -    20G   -
  5. I want a 30 GB size for my /dev/sda2 partition, so I create a file called newdisk with the truncate command:
    truncate -s 30G newdisk
  6. I create a backup of the OpenNebula image:   cp 1a8a07d0382566a89afd96a134eb04cf tmp
  7. OK! This is very important: I have a logical volume called /dev/vg_moriarty/lv_root, so today I’m not only resizing the /dev/sda2 partition, I also want the logical volume to use the new extra space:
    virt-resize tmp newdisk --expand /dev/sda2 --lv-expand /dev/mapper/vg_moriarty-lv_root
  8. This is a screenshot showing the resizing progress.
  9. Once the resizing process is finished, I replace the original image with the newdisk file and change the permissions:
    mv newdisk 1a8a07d0382566a89afd96a134eb04cf
    chown oneadmin:oneadmin 1a8a07d0382566a89afd96a134eb04cf
  10. I check whether the partitions have been modified before running the VM. It seems that /dev/sda2 is now 30 GB.
  11. [root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
    Name       Type       MBR  Size  Parent
    /dev/sda1  partition  83   500M  /dev/sda
    /dev/sda2  partition  8e   30G   /dev/sda
    /dev/sda   device     -    30G   -
  12. Ok. I start the VM to check if it boots and the logical volume is fine (28GB!)
  13. It works!! Thanks to OpenNebula’s mailing list (Simon Boulet, Ricardo Duarte, Ruben S. Montero…) and the libguestfs creators I’ve found a way to resize a Linux image.
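A side note on step 5: truncate creates a sparse file, so the 30 GB newdisk takes almost no real disk space until virt-resize starts writing to it. You can verify this yourself in a scratch directory:

```shell
truncate -s 30G newdisk
stat -c '%s bytes' newdisk   # apparent size: 32212254720 bytes (30 * 1024^3)
du -k newdisk                # almost no blocks allocated -- the file is sparse
```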

If you know a better way, or OpenNebula already has a way to modify the size, please let me know; I want this post to be useful for the community.


P.S: Thanks Geoff!

Vyatta quick commands

Here are some commands that you may find useful when using Vyatta:

  • Set a Default Gateway: set system gateway-address x.x.x.x where x.x.x.x is an IPv4 address.
  • Set a hostname: set system host-name a_host_name
  • Set the domain name: set system domain-name a_domain_name
  • Change password for vyatta user: set system login user vyatta authentication plaintext-password a_new_password
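These set commands are run from Vyatta’s configuration mode and only take effect after a commit; a whole session looks roughly like this (host name, gateway and password are placeholder values):

```
vyatta@vyatta:~$ configure
vyatta@vyatta# set system host-name gw01
vyatta@vyatta# set system gateway-address
vyatta@vyatta# commit
vyatta@vyatta# save
vyatta@vyatta# exit
```

Remember that commit applies the change to the running system, while save persists it across reboots.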

Have a nice day!

Tip: Installing Windows Server 2012 (Evaluation) in OpenNebula 4 with KVM

I’d like to evaluate Windows Server 2012, so I’ve decided to create a VM in my OpenNebula 4 lab. I’ve read on this page that I’d need signed VirtIO drivers for Windows in order to detect the virtual hard disk which will store the OS.

I’ve created a template with the following storage (DISKS):

  1. A CDROM (PREFIX hd) with the Windows Server 2012 ISO
  2. A CDROM (PREFIX hd) with the Windows stable drivers ISO from Fedora
  3. An OS HDD virtual disk: DRIVER: raw and PREFIX: vd
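In OpenNebula’s template language the three disks above could be sketched like this (the image names are placeholders for whatever you registered in your datastore, and the exact attribute names, e.g. DEV_PREFIX for the wizard’s “Prefix” field, are worth double-checking against your OpenNebula version):

```
DISK = [ IMAGE = "win2012-iso",        DEV_PREFIX = "hd" ]
DISK = [ IMAGE = "virtio-drivers-iso", DEV_PREFIX = "hd" ]
DISK = [ IMAGE = "win2012-os-disk",    DRIVER = "raw", DEV_PREFIX = "vd" ]
```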

After instantiating the template, Windows starts the installation. When Windows warns you that no disk is found you can load the driver from the second CDROM.


Then you have to browse the CDROM and select the Red Hat VirtIO SCSI controller (WLH/AMD64 folder). Now the virtual hard disk is detected and you can install the operating system.



OpenNebula 4 – Upgrade to 4.0.1 in CentOS 6.4

OpenNebula has released a maintenance release for OpenNebula 4 Eagle. If, like me, you are running version 4.0.0, these are the steps I’ve followed to upgrade (I’m using the SQLite configuration database):

  1. Download the tar.gz file from OpenNebula’s download page.
  2. Extract the rpm files: tar xvfz CentOS-6.4-opennebula-4.0.1-1.tar.gz
  3. CD to the directory: cd opennebula-4.0.1-1/
  4. Upgrade to the latest rpm versions: yum localinstall opennebula-common-4.0.1-1.x86_64.rpm opennebula-ruby-4.0.1-1.x86_64.rpm opennebula-4.0.1-1.x86_64.rpm opennebula-sunstone-4.0.1-1.x86_64.rpm opennebula-server-4.0.1-1.x86_64.rpm opennebula-node-kvm-4.0.1-1.x86_64.rpm
  5. Switch to oneadmin user: su oneadmin
  6. Upgrade the SQLite database as oneadmin: onedb upgrade -v --sqlite /var/lib/one/one.db
  7. Switch back to root user and restart OpenNebula services: service opennebula restart, service opennebula-sunstone restart
  8. Refresh your browser if you were working with Sunstone.
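If you want to double-check the result, onedb can also report the version of the database schema; if I remember the syntax correctly, it’s something like:

```
$ onedb version --sqlite /var/lib/one/one.db
```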

That’s all!

Installing Ozones Server in CentOS 6.4

OK, as time goes by… I want to learn about advanced topics in OpenNebula. I’m going to start working with OpenNebula Zones (ozones), which will allow me to create a Virtual Data Center. As I have only one machine in my lab (I accept hardware donations to increase my lab’s potential 🙂 ), I will have only one zone to play with, but that’s better than nothing.

If you are interested in OpenNebula Zones and Virtual Data Centers, please read the official documentation first.

Disclaimer: the following configuration steps will help you run the ozones-server in a development environment. If you want to use ozones-server in a production environment, please check first how to protect your Apache server conveniently (e.g. disable unneeded modules).

Ok. These are the steps I’ve followed:

// Download the OpenNebula rpm packages 

#yum localinstall opennebula-common-4.0.0-1.x86_64.rpm
#yum localinstall opennebula-ruby-4.0.0-1.x86_64.rpm 
#yum localinstall opennebula-ozones-4.0.0-1.x86_64.rpm

// Install the Apache package
#yum install httpd

// Configure the service so it's started at boot
#chkconfig httpd on

// Let's add an iptables rules so http traffic is allowed
#iptables -I INPUT -m tcp -p tcp --dport 80 -m state --state=NEW,ESTABLISHED,RELATED -j ACCEPT
#service iptables save

// Edit your /etc/httpd/conf/httpd.conf and change some default parameters like your 
// ServerName, ServerSignature Off... The rewrite and the http proxy modules are enabled 
// by default

// Let's create a configuration file, e.g. /etc/httpd/conf.d/ozones.conf, where the
// reverse proxy directives are configured. Add these lines to the file:

ProxyPass /ozones/ http://localhost:6121/
ProxyPassReverse /ozones/ http://localhost:6121/
ProxyRequests Off

// Start your Apache server
#service httpd start

// If you have SELinux enabled we must allow Apache to start network connections:
#setsebool -P httpd_can_network_connect 1

// Now let's prepare some things to start the ozones server
// Add a user:password line into a file e.g ozonesadmin:ozonepassword and set permissions for 
// oneadmin user.
#echo ozonesadmin:ozonepassword > /var/lib/one/.one/ozones_auth
#chown oneadmin:oneadmin /var/lib/one/.one/ozones_auth

#su oneadmin

// The first time you start the ozones-server you must set at least the OZONES_AUTH env variable
// so the database is created with the right credentials. I'm using the default sqlite database
// If you want to change the port and ip address for the server or the database server edit the
// /etc/one/ozones-server.conf file

$export OZONES_AUTH=/var/lib/one/.one/ozones_auth
$export OZONES_URL="http://localhost:6121"

// Let's start and check if the ozones server is listening

$ /usr/bin/ozones-server start
$ netstat -ntap | grep 6121
  tcp 0 0* LISTEN 20203/ruby

Great. If the Apache proxy module works fine and the ozones-server is running, we will be able to log into the oZones GUI.

I’ve configured a proxy directive so that http://myservername/ozones/ is sent to http://localhost:6121, where ozones-server is listening by default. If you use http://myservername/ozones (without the trailing slash) it won’t work (CSS, JavaScript… will be missing). Remember that your user and password are configured in the authentication file you’ve created (in my case it’s located in /var/lib/one/.one/ozones_auth).
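A quick way to check the whole chain from the server itself is curl (myservername is a placeholder for your own server name); an HTTP 200 response means Apache reached the ozones-server through the proxy:

```
$ curl -s -o /dev/null -w '%{http_code}\n' http://myservername/ozones/
```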

Finally some screenshots:

[Screenshots: oZones server login page and dashboard]

In a few days, I’ll play with the zones configuration.


CentOS 6.4 – Contextualization using OpenNebula’s init scripts

Today I’m offering you a simple contextualization example in OpenNebula using the C12G Labs scripts. Contextualization is explained in the official documentation, so please read it.

In my example I have installed a CentOS 6.4 virtual machine using a netinstall ISO image, then downloaded the current rpm files for OpenNebula from the OpenNebula download page. After uncompressing the tar.gz file, I installed the opennebula-context rpm package, which provides the contextualization scripts.

tar xvfz CentOS-6.4-opennebula-4.0.0-1.tar.gz 
cd opennebula-4.0.0-1/context/
yum localinstall opennebula-context-4.0.0-1.x86_64.rpm

The rpm package will create:

  • An init.d script called vmcontext
  • A directory /etc/one-context.d which contains the scripts that will configure the network interfaces, the dns servers and the public ssh key.

OK. I’ve prepared a template with two network interfaces using Sunstone’s wizard. In the Context section I’ve checked “Add SSH contextualization” and copied my public RSA key. This SSH contextualization will allow me to log in to the new virtual machine as root with my SSH key.

The “Add Network contextualization” is also checked and it’ll create the network scripts for the NICs using the last four octets of the NIC’s MAC address to set a network address.

And now, in the Custom variables section, I’ve added two variables that I deliberately forgot to configure when defining my virtual networks: a variable to set a default gateway for one of the NICs, and the DNS server for the virtual machine.


Once I finish the template, I instantiate it to start a new virtual machine. When the VM has booted I check the configuration for NICs, default gateway, DNS and SSH and, et voilà, the contextualization scripts have already configured everything automagically. How nice!


There are two lines in the netstat -rn output, probably because of the default DHCP configuration for the NICs; nothing related to the contextualization scripts.

I hope this helps you to understand why contextualization is a nice feature. Enjoy!

OpenNebula 4 with KVM and Openvswitch using only one server

As I’ve only one server, I’m forced to install OpenNebula and KVM virtualization in the same machine. If you want to know how I configured and installed openvswitch read my previous posts.

Let’s begin installing some packages:

yum install qemu-kvm qemu-kvm-tools libvirt virt-manager

Install the opennebula-node-kvm rpm package (read my previous post for more information), as it will configure qemu for you and a policy allowing the oneadmin user to use the virtualization API.

yum localinstall opennebula-node-kvm-4.0.1-1.x86_64.rpm

Start the libvirtd service and configure it to start at boot

#/etc/init.d/libvirtd start
Starting daemon libvirtd: [ OK ]
# chkconfig libvirtd on

Warning: if you’re using SELinux, run these commands so the authorized_keys file is accessible for passwordless login using SSH. I’ve also changed the context for the /var/lib/one/datastores directory to avoid a Permission denied error (/var/lib/one/datastores/0/0/disk.0: Permission denied) when trying to run a VM with KVM.

chcon -v --type=ssh_home_t /var/lib/one/.ssh/authorized_keys
chcon -R --type=virt_image_t /var/lib/one/datastores

Create the /var/tmp/one directory and change the ownership

# mkdir /var/tmp/one
# chown oneadmin:oneadmin /var/tmp/one

If you’re using server names, make sure there’s an entry in your DNS or /etc/hosts for the server name; e.g. I have an entry in /etc/hosts for my server haddock.macto.local
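For example, the /etc/hosts entry could look like this (the IP address is a placeholder for your server’s real one):

```  haddock.macto.local  haddock
```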

Now, as the oneadmin user, let’s create the host with a KVM hypervisor and openvswitch, and check that no errors are shown. Also open an SSH connection to check that no password is asked for; this will insert your host in the known_hosts file and prevent a “Host key verification failed” error when monitoring your host.

# su oneadmin
$ ssh oneadmin@haddock.macto.local
The authenticity of host 'haddock.macto.local(' can't be established.
RSA key fingerprint is ....
Are you sure you want to continue connecting (yes/no)? yes
$ exit

$ onehost create haddock.macto.local -i kvm -v kvm -n ovswitch
ID: 0
$ onehost list
0 haddock.macto.l - 0 0 / 200 (0%) 0K / 5.6G (0%) on

OK, the status is on, and my host looks good in the Sunstone GUI. Perfect.

If “err” is shown after executing the onehost list command, check /var/lib/one/oned.log for errors. I was getting the following error because I hadn’t installed opennebula-node-kvm after installing libvirt: “error: authentication failed: Authorization requires authentication but no agent is available”

If you’re running openvswitch, you can avoid the following errors by editing the sudoers file.

  • sudo: sorry you must have a tty to run sudo
  • sudo: Error deploying virtual machine: sudo: no tty present and no askpass program specified

Edit the sudoers file with visudo and comment the line “Defaults requiretty” , then add the following lines at the end of the file:

oneadmin ALL = NOPASSWD: /sbin/iptables
oneadmin ALL = NOPASSWD: /sbin/ebtables
oneadmin ALL = NOPASSWD: /usr/bin/ovs-vsctl
oneadmin ALL = NOPASSWD: /usr/bin/ovs-ofctl

I’ve also found sometimes this error:

WARNING **: Error connecting to bus: org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory

I decided to reboot the machine and the monitor status changed to on.

Tomorrow I’ll explain how to run a virtual machine in OpenNebula.

CentOS 6.4 – OpenNebula 4 Eagle installation

Info 13/05/2013: this post has been updated, as OpenNebula 4 was published last week. A big thanks to C12G Labs and the other contributors for such a great job.

Info 03/06/2013: this post has been updated as OpenNebula 4.0.1 has been recently published.

I’ve downloaded the CentOS-6.4-opennebula-4.0.1-1.tar.gz from OpenNebula’s download page. It’s really easy to install OpenNebula using the rpm packages provided by C12G Labs.

Warning: Don’t install the opennebula-context-4.0.1-1.x86_64.rpm package inside the context directory as it will reconfigure your network interfaces, that package should be used if you want to install contextualization scripts in RedHat or CentOS virtual machines.

Warning: be sure to use the EPEL repository to solve ruby dependencies. Read the step 2 of this old article.

tar xvfz CentOS-6.4-opennebula-4.0.1-1.tar.gz
cd opennebula-4.0.1-1

yum localinstall opennebula-common-4.0.1-1.x86_64.rpm
yum localinstall opennebula-ruby-4.0.1-1.x86_64.rpm
yum localinstall opennebula-4.0.1-1.x86_64.rpm
yum localinstall opennebula-sunstone-4.0.1-1.x86_64.rpm
yum localinstall opennebula-server-4.0.1-1.x86_64.rpm

Note: the VNC service (noVNC) is now installed with the opennebula-sunstone package; in previous versions you had to install it with a separate script.

Ok. Let’s start the opennebula and opennebula-sunstone services.

cd /usr/share/one
# service opennebula start
Starting OpenNebula daemon: [ OK ]

#service opennebula-sunstone start
Starting Sunstone Server daemon: VNC proxy started
sunstone-server started [ OK ]

If you want Sunstone to listen on an IP address other than the default, edit the :host: directive in /etc/one/sunstone-server.conf and add an iptables rule if your firewall is running (also remember to save that rule).

iptables -I INPUT -p tcp --dport 9869 -m state --state=NEW,ESTABLISHED,RELATED -j ACCEPT

# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

Try to open in a browser the Sunstone GUI: http://x.x.x.x:9869 (where x.x.x.x is the ip address configured in /etc/one/sunstone-server.conf) and remember that the oneadmin password is the random string inside the /var/lib/one/.one/one_auth file.

The new Sunstone interface looks awesome! In the next days I’ll try the new interface creating a new host.


Read this post if you want to configure an OpenNebula system with KVM and Openvswitch.


Openvswitch – Setting a bandwidth limit

If you read the manual page for ovs-vsctl command you’ll find the Configuration Cookbook section. Using the QoS example, I’ve tried to set a bandwidth limit to one of my virtual machine’s network interface (vnet2).

The QoS can be enforced using Linux HTB (Hierarchical Token Bucket) or Linux HFSC (Hierarchical Fair Service Curve), and I’ve called the configuration newqos, following the example. In that configuration I’ve set a maximum rate of 4 Mbps (4,000,000 bits per second) and one QoS queue called q0 with a maximum and minimum bandwidth rate of 4 Mbps. The newqos configuration is applied to the vnet2 openvswitch port, which is the eth0 network interface of my virtual machine:

ovs-vsctl -- set Port vnet2 qos=@newqos -- \
--id=@newqos create QoS type=linux-htb other-config:max-rate=4000000 queues=0=@q0 -- \
--id=@q0   create   Queue   other-config:min-rate=4000000 other-config:max-rate=4000000

Before setting the QoS I started to download a 2 GB file. You can see in the following rrdtool graph that after applying the QoS configuration, the rate is reduced to 4 Mbps, so Openvswitch is working just fine. After removing the QoS configuration, the bandwidth starts to rise again. Openvswitch is awesome!
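As a sanity check on that graph (assuming the 2 GB file is 2 GiB), a little shell arithmetic gives the time the capped download should take:

```shell
FILE_BITS=$((2 * 1024 * 1024 * 1024 * 8))   # 2 GiB expressed in bits
RATE=4000000                                # the 4 Mbps max-rate set in newqos
printf '%d seconds (~%d minutes)\n' $((FILE_BITS / RATE)) $((FILE_BITS / RATE / 60))
# prints: 4294 seconds (~71 minutes)
```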


If you want to remove the qos, do what the man page says:

ovs-vsctl clear Port vnet2 qos
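One caveat I’ve read about in the ovs-vsctl man page: clear only detaches the QoS from the port; the QoS and Queue records themselves remain in the Open vSwitch database. To purge every record in those tables (careful, all of them, not only newqos), the man page suggests something like:

```
ovs-vsctl -- --all destroy QoS -- --all destroy Queue
```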