From now on I’ll publish new posts about OpenNebula on the official OpenNebula blog.
Here is a link to my post about securing noVNC connections. I hope you find it useful, and I’d like to send a big thank you to the OpenNebula team.
OpenNebula 4.4 Retina is out, as you can read in the official blog. Today I’m going to upgrade my OpenNebula 4.2 installation following the official upgrade guide.
Warning: I’ve followed the guide and these are my notes. They are just notes that I want to share; I may not be doing things the proper way, so please stick to the official guide.
My installation uses the default SQLite database, and the only virtual machine I’ve created is not in a transient state, as the guide requires. My virtual machine is in the STOPPED state.
1. Stop the OpenNebula and Sunstone services:

# service opennebula-sunstone stop
# service opennebula stop

2. Back up the configuration directory:

# cp -r /etc/one/ /tmp/one
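A slightly safer variant of that backup is a timestamped copy, so successive upgrades don’t overwrite each other. This is just a sketch of the idea, not from the guide; it works on a demo directory created as a stand-in for /etc/one:

```shell
TMP=$(mktemp -d)
cd "$TMP"

# Stand-in for /etc/one, created only for this demonstration.
mkdir -p one
echo ':port: 2633' > one/oned.conf

# Copy the whole tree into a timestamped backup directory.
BACKUP="one-backup-$(date +%Y%m%d%H%M%S)"
cp -r one "$BACKUP"

ls "$BACKUP"
```

On the real system you would copy /etc/one instead of the demo directory.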
3. I’m using the official OpenNebula repository for CentOS so I update the packages using yum:
# yum update opennebula opennebula-common opennebula-node-kvm opennebula-ruby opennebula-server opennebula-sunstone
.....
Dependencies Resolved
================================================================================
 Package              Arch    Version  Repository  Size
================================================================================
Updating:
 opennebula           x86_64  4.4.0-1  opennebula   58 k
 opennebula-common    x86_64  4.4.0-1  opennebula  7.0 k
 opennebula-node-kvm  x86_64  4.4.0-1  opennebula  7.2 k
 opennebula-ruby      x86_64  4.4.0-1  opennebula   54 k
 opennebula-server    x86_64  4.4.0-1  opennebula  1.1 M
 opennebula-sunstone  x86_64  4.4.0-1  opennebula  1.1 M

Transaction Summary
================================================================================
Upgrade       6 Package(s)

Total download size: 2.4 M
Is this ok [y/N]: y
In my case I’ve also updated some packages on a remote host:
# yum update opennebula opennebula-common opennebula-node-kvm opennebula-ruby
The configuration files are backed up during the upgrade, since the guide recommends using the new configuration files. Note that you may have to adjust the new files afterwards.

warning: /etc/one/oned.conf saved as /etc/one/oned.conf.rpmsave
warning: /etc/one/sunstone-server.conf saved as /etc/one/sunstone-server.conf.rpmsave
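To spot which settings you need to carry over, you can diff each .rpmsave file against its new default. A minimal sketch of the idea (the files below are stand-ins created for the demonstration, not the real /etc/one files):

```shell
TMP=$(mktemp -d)
cd "$TMP"

# Stand-ins simulating the old (.rpmsave) and new config files.
printf ':host: 192.168.1.70\n:port: 9869\n' > oned.conf.rpmsave
printf ':host: 127.0.0.1\n:port: 9869\n'    > oned.conf

# Show only the lines that changed; diff exits 1 when the files
# differ, so '|| true' keeps the script going.
CHANGES=$(diff oned.conf.rpmsave oned.conf || true)
echo "$CHANGES"
```

Any setting that shows up in the diff (here the :host: line) is a candidate to port into the new file by hand.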
4. I’m using the default SQLite database. To upgrade it, I create a new directory, /var/lib/one/backups, where I’ll store the SQLite backup created by the migration scripts.

# su - oneadmin
$ onedb upgrade -v --sqlite /var/lib/one/one.db
Version read:
4.2.0 : OpenNebula 4.2.0 daemon bootstrap

Sqlite database backup stored in /var/lib/one/one.db.bck
Use 'onedb restore' or copy the file back to restore the DB.

> Running migrator /usr/lib/one/ruby/onedb/4.2.0_to_4.3.80.rb
> Done
> Running migrator /usr/lib/one/ruby/onedb/4.3.80_to_4.3.85.rb
> Done
> Running migrator /usr/lib/one/ruby/onedb/4.3.85_to_4.3.90.rb
> Done
> Running migrator /usr/lib/one/ruby/onedb/4.3.90_to_4.4.0.rb
> Done

Database migrated from 4.2.0 to 4.4.0 (OpenNebula 4.4.0) by onedb command.

$ mkdir /var/lib/one/backups
$ mv /var/lib/one/one.db.bck /var/lib/one/backups/
5. Now, as root, I start the opennebula and opennebula-sunstone services:

# service opennebula start
Starting OpenNebula daemon:                                [  OK  ]
# service opennebula-sunstone start
Starting Sunstone Server daemon:
VNC proxy started
sunstone-server started                                    [  OK  ]
6. The guide says that you must upgrade the drivers on the hosts using onehost sync. I then check that the hosts are OK:

$ onehost sync
* Adding deckard.artemit to upgrade
* Adding gaff.artemit to upgrade
[========================================] 2/2 gaff.artemit
All hosts updated successfully.
$ onehost list
ID NAME            CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM  STAT
 0 deckard.artemit -         0 0 / 200 (0%)  0K / 5.6G (0%) on
 1 gaff.artemit    -         0 0 / 200 (0%)  0K / 5.7G (0%) on
7. Remember that the configuration files have been replaced. In my case I’ve edited the oned.conf and sunstone-server.conf files so Sunstone listens on a different address and my LDAP server is used for authentication.
8. I have no cluster configured for now, so the new multi-system datastore doesn’t affect me. I just start my stopped VM and it runs again on the new OpenNebula 4.4.
I wish to thank the OpenNebula team for such great documentation.
In a previous post I installed an LDAP server with OpenLDAP on CentOS 6.4; please read it if you want to know the structure of my lab’s LDAP directory. I’m going to configure OpenNebula so it uses this LDAP server for Sunstone authentication.
I’m following the official documentation on this topic and offering my own examples and comments; please read http://opennebula.org/documentation:rel4.2:ldap if you have any doubts.
The first thing to do is installing the following gem:
# gem install net-ldap
If the gem is not installed, this is the error you’ll find in /var/log/one/oned.log:

Error `gem_original_require': no such file to load -- net/ldap (LoadError)
We’ll need to configure the LDAP connection parameters in /etc/one/auth/ldap_auth.conf. My LDAP server runs on the same host and only requires the following parameters. I’ll use the uid attribute for the user field, and the users must be members of the group onemanagers:

# Ldap authentication method
:auth_method: :simple

# Ldap server
:host: localhost
:port: 389

# base hierarchy where to search for users and groups
:base: 'dc=example,dc=com'

# group the users need to belong to. If not set any user will do
:group: 'cn=onemanagers,ou=Groups,dc=example,dc=com'

# field that holds the user name, if not set 'cn' will be used
:user_field: 'uid'

# field name for group membership, by default it is 'member'
:group_field: 'member'

# user field that is in the group group_field, if not set 'dn' will be used
:user_group_field: 'dn'
Now edit the /etc/one/oned.conf file and add default to the authn directive inside the AUTH_MAD section:

AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,default,server_cipher,server_x509"
]

If you forget to add “default”, this is the error you’ll find in /var/log/one/oned.log:

Error Auth Error: Authentication driver 'default' not available
If you want to authenticate users that exist in the LDAP directory but not in the OpenNebula database, make the LDAP authentication driver the default driver. I execute the following commands:

# cp -R /var/lib/one/remotes/auth/ldap /var/lib/one/remotes/auth/default
# chown -R oneadmin:oneadmin /var/lib/one/remotes/auth/default

(remember to run the chown if you are using the root account)
Finally let’s change how Sunstone authenticates users:
Restart your services (maybe it’s not needed but just in case… 😀 )
OK, the authentication with LDAP works. If I log in with a new user that is a member of the cn=onemanagers group, the user is added to OpenNebula automatically. Here’s an image showing how my user n40lab has been added as a new Sunstone user.
This is my first post in a long time. My apologies; I’ve been quite busy for a few months and have had no time to write new posts or answer your comments. Thanks for your patience and understanding to all of you who have sent me emails or comments.
Today I’m writing an easy post. Maybe it’s a bit late, as OpenNebula 4.4 is so close, but if you’re looking for a post about OpenNebula 4.2 on CentOS 6.4, it could help you.
OpenNebula provides official quickstart guides for CentOS and other platforms, so you may want to check them first. I keep writing these posts because they are my installation notes, and maybe they are useful to you.
I’m executing the following commands as root.
1. Install the EPEL repository
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
2. Add OpenNebula’s repository – [ Reference OpenNebula’s site ]
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT
3. Let’s check that EPEL and OpenNebula repositories are ready
# yum search opennebula
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.xtratelecom.es
 * epel: fedora.aau.at
 * extras: centos.mirror.xtratelecom.es
 * updates: centos.mirror.xtratelecom.es
=========================== N/S Matched: opennebula ============================
opennebula-common.x86_64 : Provides the OpenNebula user
opennebula-context.x86_64 : Configures a Virtual Machine for OpenNebula
opennebula-flow.x86_64 : Manage OpenNebula Services
opennebula-gate.x86_64 : Transfer information from Virtual Machines to OpenNebula
opennebula-java.x86_64 : Java interface to OpenNebula Cloud API
opennebula-node-kvm.x86_64 : Configures an OpenNebula node providing kvm
opennebula-ruby.x86_64 : Provides the OpenNebula Ruby libraries
opennebula-server.x86_64 : Provides the OpenNebula servers
opennebula.x86_64 : Cloud computing solution for Data Center Virtualization
opennebula-ozones.x86_64 : Tool for administering opennebula
opennebula-sunstone.x86_64 : Browser based UI and public cloud interfaces.

  Name and summary matches only, use "search all" for everything.
4. Install the packages you need for your OpenNebula installation architecture. In my case I’m running OpenNebula in a single machine so I’ll install opennebula-server and opennebula-sunstone
# yum install opennebula-server opennebula-sunstone
Warning: if this is the first time you use the EPEL repository, you’ll need to import its GPG key, so answer yes to the following question:

Is this ok [y/N]: y
5. If you are going to use KVM virtualization, install the package opennebula-node-kvm on the machine that’s going to act as the host offering virtualization resources. This package will install qemu-kvm, libvirt and all the CentOS packages needed for virtualization. I’m using a single machine, so it will act as front-end, host and datastore. Please read the official documentation to understand the basic components of OpenNebula.
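Before installing opennebula-node-kvm it’s worth checking that the host CPU actually exposes hardware virtualization. This is not from the quickstart guide, just a common sanity check:

```shell
# Count logical CPUs exposing Intel VT-x (vmx) or AMD-V (svm) flags.
# grep -c exits non-zero when the count is 0 or the file is missing,
# so the fallbacks keep the script going.
VIRT_FLAGS=$(grep -E -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
VIRT_FLAGS=${VIRT_FLAGS:-0}

if [ "$VIRT_FLAGS" -gt 0 ]; then
    echo "Hardware virtualization available ($VIRT_FLAGS logical CPUs)"
else
    echo "No vmx/svm flags found: KVM will not work on this machine"
fi
```

If the count is zero, KVM will fall back to emulation (or fail), so fix BIOS settings before going further.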
6. Let’s start the opennebula service
# service opennebula start
Starting OpenNebula daemon:                                [  OK  ]
7. The opennebula-sunstone service provides the graphical interface for OpenNebula. By default it listens on 127.0.0.1:9869, so if you want it to listen on a different address, edit the :host: directive in /etc/one/sunstone-server.conf.
For example, if you want it to listen on 192.168.1.70, change the :host: directive and save the file:

# Server Configuration
#
:host: 192.168.1.70
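If you are scripting the installation, the same change can be done non-interactively with sed. A sketch using a stand-in file created for the demonstration (on a real system you would point sed at /etc/one/sunstone-server.conf):

```shell
TMP=$(mktemp -d)
cd "$TMP"

# Stand-in for /etc/one/sunstone-server.conf, created for the demo.
printf '# Server Configuration\n:host: 127.0.0.1\n:port: 9869\n' > sunstone-server.conf

# Rewrite the :host: directive in place.
sed -i 's/^:host:.*/:host: 192.168.1.70/' sunstone-server.conf

grep '^:host:' sunstone-server.conf
```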
The service is started using the following command:
# service opennebula-sunstone start
Starting Sunstone Server daemon:
VNC proxy started
sunstone-server started                                    [  OK  ]
If you change the IP address where Sunstone listens, remember to add a firewall rule (and remember to save that rule):

# iptables -I INPUT -m tcp -p tcp --dport 9869 -m state --state=NEW -j ACCEPT
# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
OK, that’s only the installation part. If you want to run a VM you’ll need to configure a host; please read this old blog post.
Cheers! … I’ll be back!
I’m a CentOS guy but a long time ago I started my Linux career with Debian. I’ve been asked to try to install OpenNebula 4.0.1 in Debian Wheezy so here is a post about what I’ve done.
Remember: I only want to try to help people, but I’m no Debian expert so I’m not responsible for direct or indirect damage caused by the use of the information on this site.
In general the official OpenNebula packages work fine on Debian Wheezy, but there’s one package (opennebula_4.0.1-1_amd64.deb) that we’ll need to modify to avoid dependency problems. I’m going to execute all the commands as root on a clean minimal Debian installation (only OpenSSH and standard utilities installed).
The first thing we’re going to do is download and extract the OpenNebula source files from the official downloads page.

tar xvfz opennebula-4.0.1.tar.gz
cd opennebula-4.0.1
We’ll need to compile the opennebula source files using the instructions found in the README.md file but first we’ll need a few packages.
aptitude -y install g++ ruby ruby-sqlite3 openssl libxmlrpc-core-c3-dev libsqlite3-dev libxmlrpc-c++4-dev scons flex bison libxml2-dev libssl-dev rake rubygems ruby-dev libmysqld-dev ruby-xmlparser libxslt1-dev libcurl4-openssl-dev
Now we are going to compile opennebula with mysql support. After the compilation we’re going to install the opennebula files in a temp directory called one_build:
scons mysql=yes
mkdir ../one_build/
./install.sh -d ../one_build/
OK, we’re done. Now let’s download the OpenNebula 4.0.1 packages for Debian 6.0.7 from the opennebula.org download site, move them to a directory and uncompress the files.

tar xvfz Debian-6.0.7-opennebula-4.0.1-1.tar.gz
cd opennebula-4.0.1-1/
I’m a Debian newbie so I’ve decided to modify the existing deb package instead of building my own from scratch. I’ve used this useful forum post. We’ll use a temp directory called buildeb.
mkdir buildeb
dpkg-deb -x opennebula_4.0.1-1_amd64.deb buildeb/
dpkg-deb --control opennebula_4.0.1-1_amd64.deb
mv DEBIAN buildeb/
cd buildeb
Ok let’s modify the deb package.
Step 1, edit the DEBIAN/control file.
Change these dependencies:
Add this dependency after libxmlrpc-core-c3 (the comma is to separate dependencies :-D):
Step 2, we are going to substitute some binaries from the .deb with those that we’ve just compiled so the right libraries are used.
# cp ../../one_build/bin/tty_expect usr/bin/
# cp ../../one_build/bin/oned usr/bin/
# cp ../../one_build/bin/one usr/bin/
# cp ../../one_build/bin/mm_sched usr/bin/
# cp ../../one_build/bin/onedb usr/bin/
OK now we’re ready to build our opennebula debian package for wheezy:
# cd ..
# dpkg -b buildeb opennebula_4.0.1-1_amd64_wheezy.deb

OK. All packages are ready, but before installing them we’re going to install gdebi. It’ll help us install the local deb files and resolve their dependencies.
aptitude -y install gdebi
Come on, let’s install!
# gdebi opennebula-common_4.0.1-1_all.deb
# gdebi ruby-opennebula_4.0.1-1_all.deb
# gdebi opennebula-tools_4.0.1-1_all.deb
# gdebi opennebula_4.0.1-1_amd64_wheezy.deb
# gdebi opennebula-sunstone_4.0.1-1_all.deb
Now we’ll check if opennebula and sunstone are running:
# service opennebula status
[ ok ] one is running.
# netstat -ntap | grep 9869
tcp   0   0 127.0.0.1:9869   0.0.0.0:*   LISTEN   22034/ruby
And finally let’s switch to user oneadmin and run a few commands:
$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
OK, the installation seems fine, but I’ll check over the next few days whether I missed something important. If all is good I’ll post a URL here to download the deb package I’ve built so you can save time (or write me an email if you can’t wait; look for my contact information).
I appreciate your feedback; it helps me keep this blog useful for the community.
Today my colleague Geoff told me that he was trying to resize an OpenNebula image. As I had never done that before, I started to review the OpenNebula documentation and found this email in the OpenNebula mailing list (please subscribe, it’s really useful!). It seems that resizing is not yet supported (maybe I’m wrong and there’s another solution!), but thanks to the information in this issue I found virt-resize.
This is virt-resize description: a tool which can resize a virtual machine disk, making it larger or smaller overall, and resizing or deleting any partitions contained within. Looks promising! The virt-resize information page is full of examples so it has been easy to start using it.
OK, this is what I’ve tested. Please remember to proceed with caution: I’m not responsible for any damage caused by following these steps, and try to read the documentation first; I’ve just used that info :-D.
[root@haddock ~]# cd /var/lib/one/datastores/1/
[root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
Name       Type       MBR  Size  Parent
/dev/sda1  partition  83   500M  /dev/sda
/dev/sda2  partition  8e   20G   /dev/sda
/dev/sda   device     -    20G   -
truncate -s 30G newdisk
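truncate creates the new target as a sparse file, so it takes almost no real disk space until virt-resize fills it. A small sketch demonstrating that (using 1 GiB and a temp directory instead of the real 30G image):

```shell
TMP=$(mktemp -d)
cd "$TMP"

# Create a 1 GiB sparse target disk.
truncate -s 1G newdisk

# Apparent size is the full 1 GiB...
APPARENT=$(stat -c %s newdisk)
echo "apparent bytes: $APPARENT"

# ...but the real allocation (512-byte blocks) is close to zero.
BLOCKS=$(stat -c %b newdisk)
echo "allocated blocks: $BLOCKS"
```

That’s why creating the 30G target above is instantaneous and cheap until the resize actually writes data into it.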
virt-resize tmp newdisk --expand /dev/sda2 --LV-expand /dev/mapper/vg_moriarty-lv_root

(tmp is the original image file, moved aside beforehand, and newdisk the enlarged target created with truncate)

mv newdisk 1a8a07d0382566a89afd96a134eb04cf
chown oneadmin:oneadmin 1a8a07d0382566a89afd96a134eb04cf
[root@haddock 1]# virt-filesystems --long --parts --blkdevs -h -a 1a8a07d0382566a89afd96a134eb04cf
Name       Type       MBR  Size  Parent
/dev/sda1  partition  83   500M  /dev/sda
/dev/sda2  partition  8e   30G   /dev/sda
/dev/sda   device     -    30G   -

If you know a better way, or OpenNebula already has a way to modify the size, please let me know; I want this post to be useful for the community.
P.S: Thanks Geoff!
I’d like to evaluate Windows Server 2012, so I’ve decided to create a VM in my OpenNebula 4 lab. I’ve read on this page that I’d need signed VirtIO drivers for Windows in order for it to detect the virtual hard disk which will store the OS.
I’ve created a template with the following storage (DISKS):
After instantiating the template, Windows starts the installation. When Windows warns you that no disk is found you can load the driver from the second CDROM.
Then you have to browse the CD-ROM and select the Red Hat VirtIO SCSI controller (WLH/AMD64 folder). Now the virtual hard disk is detected and you can install the operating system.
OpenNebula has released a maintenance release for OpenNebula 4.0 Eagle. If, like me, you are running version 4.0.0, these are the steps I’ve followed to upgrade (I’m using the SQLite configuration database):
OK, as time goes by I want to learn about advanced OpenNebula topics. I’m going to start working with OpenNebula Zones (ozones), which will allow me to create a Virtual Data Center. As I have only one machine in my lab (I accept hardware donations to increase my lab’s potential 🙂), I will have only one zone to play with, but that’s better than nothing.
If you are interested in OpenNebula Zones and Virtual Data Centers please read:
Disclaimer: the following configuration steps will help you run the ozones-server in a development environment. If you want to use ozones-server in production, first check how to protect your Apache server conveniently (e.g. disable unneeded modules).
Ok. These are the steps I’ve followed:
// Download the OpenNebula rpm packages
# yum localinstall opennebula-common-4.0.0-1.x86_64.rpm
# yum localinstall opennebula-ruby-4.0.0-1.x86_64.rpm
# yum localinstall opennebula-ozones-4.0.0-1.x86_64.rpm

// Install the Apache package
# yum install httpd

// Configure the service so it's started at boot
# chkconfig httpd on

// Let's add an iptables rule so http traffic is allowed
# iptables -I INPUT -m tcp -p tcp --dport 80 -m state --state=NEW,ESTABLISHED,RELATED -j ACCEPT
# service iptables save

// Edit your /etc/httpd/conf/httpd.conf and change some default parameters like your
// ServerName, ServerSignature Off... The rewrite and the http proxy modules are enabled
// by default

// Let's create a configuration file, e.g. /etc/httpd/conf.d/ozones.conf, where the
// reverse proxy directives are configured. Add these lines to the file:
ProxyPass /ozones/ http://localhost:6121/
ProxyPassReverse /ozones/ http://localhost:6121/
ProxyRequests Off

// Start your Apache server
# service httpd start

// If you have SELinux enabled we must allow Apache to start network connections:
# setsebool -P httpd_can_network_connect 1

// Now let's prepare some things to start the ozones server
// Add a user:password line into a file, e.g. ozonesadmin:ozonepassword, and set
// permissions for the oneadmin user.
# echo ozonesadmin:ozonepassword > /var/lib/one/.one/ozones_auth
# chown oneadmin:oneadmin /var/lib/one/.one/ozones_auth

// OK!!! NOW USE THE ONEADMIN ACCOUNT
# su oneadmin

// The first time you start the ozones-server you must set at least the OZONES_AUTH env
// variable so the database is created with the right credentials. I'm using the default
// sqlite database. If you want to change the port and ip address for the server or the
// database server, edit the /etc/one/ozones-server.conf file
$ export OZONES_AUTH=/var/lib/one/.one/ozones_auth
$ export OZONES_URL="http://localhost:6121"

// Let's start and check if the ozones server is listening
$ /usr/bin/ozones-server start
$ netstat -ntap | grep 6121
tcp   0   0 127.0.0.1:6121   0.0.0.0:*   LISTEN   20203/ruby
Great. If the Apache proxy module works fine and the ozones-server is running, we will be able to log into the ozones GUI.
Finally some screenshots:
In a few days, I’ll play with the zones configuration.
Today I’m offering you a simple contextualization example in OpenNebula using the C12G Labs scripts. Contextualization is explained in the official documentation, so please read it.
In my example I have installed a CentOS 6.4 virtual machine using a netinstall ISO image, then downloaded the current rpm files from the OpenNebula download page. After uncompressing the tar.gz file, I installed the opennebula-context rpm package, which provides the contextualization scripts.

tar xvfz CentOS-6.4-opennebula-4.0.0-1.tar.gz
cd opennebula-4.0.0-1/context/
yum localinstall opennebula-context-4.0.0-1.x86_64.rpm
The rpm package will create:
OK. I’ve prepared a template with two network interfaces using Sunstone’s wizard. In the Context section I’ve checked “Add SSH contextualization” and copied my public RSA key. This SSH contextualization will allow me to log in to the new virtual machine as root with my SSH key.
“Add Network contextualization” is also checked; it will create the network scripts for the NICs, using the last four octets of each NIC’s MAC address to derive the IP address.
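The MAC-to-IP convention is easy to reproduce by hand: the last four octets of the MAC, read as decimal, become the four octets of the IPv4 address. A small sketch of that mapping (my own illustration, not the actual contextualization script):

```shell
# Derive an IPv4 address from the last four octets of a MAC address,
# e.g. 02:00:c0:a8:01:46 -> 192.168.1.70 (0xc0=192, 0xa8=168, 0x01=1, 0x46=70).
mac2ip() {
    old_ifs=$IFS
    IFS=:
    # Split the MAC into its six octets as positional parameters.
    set -- $1
    IFS=$old_ifs
    # printf converts the 0x-prefixed hex octets to decimal.
    printf '%d.%d.%d.%d\n' "0x$3" "0x$4" "0x$5" "0x$6"
}

mac2ip "02:00:c0:a8:01:46"
```

This is why the wizard only needs a MAC prefix and an IP range to contextualize each NIC consistently.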
And now, in the Custom variables section, I’ve added two variables that I deliberately forgot to configure when defining my virtual networks: a variable to set a default gateway for one of the NICs, and the DNS server for the virtual machine.
Once I finish the template I instantiate it to start a new virtual machine. When the VM has booted, I check the configuration of the NICs, default gateway, DNS and SSH, and et voilà: the contextualization scripts have already configured everything automagically. How nice!
There are two 169.254.0.0 lines in the netstat -rn output, probably because of the default DHCP configuration for the NICs; nothing related to the contextualization scripts.
I hope this helps you to understand why contextualization is a nice feature. Enjoy!