CentOS 7. Did you know…?

This post is just a reminder of some things that you may not have noticed when working with CentOS 7. I’ll be updating it from time to time.

  • Remember those times when you had to use nohup with a command so it could keep running in the background even after you closed the shell it was launched from? That’s no longer needed with CentOS 7: if you have a background job and you close that shell, the process keeps running! No more nohup needed (see the sketch after this list).
  • You can use yum to install local RPM files you’ve downloaded so dependencies are installed automatically. We used to use yum localinstall, but now you can use yum install right away.
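
A quick sketch of both tips (the sleep job and the RPM file name are just placeholders):

# start a long-running background job, then close the shell
sleep 600 &
# reconnect later and check that it is still alive
pgrep -a sleep

# install a local RPM and let yum resolve its dependencies
# (mypackage.rpm is a hypothetical file name)
yum install ./mypackage.rpm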

OPENVSWITCH LTS IN CENTOS 6

As some visitors have asked me about installing Open vSwitch on CentOS 6, I’m writing this follow-up to my first post about it almost three years ago. If you find a better way, please let me know so I can update the post and remove useless info from the Internet 😉

I’ve found this repository by Alexander Evseev, so you may try the openvswitch packages found there (there’s even a kmod package). Have a look: http://download.opensuse.org/repositories/home:/aevseev/CentOS6/x86_64/

In any case… I’ll show you how to generate your own RPM packages the old way (no Python API support, as it requires Python 2.7 while CentOS 6 ships Python 2.6):

Current LTS version: 2.5.0
Tested on: CentOS 6.8

Let’s start installing some packages:

yum -y install wget openssl-devel gcc make python-devel kernel-devel graphviz kernel-debug-devel autoconf automake rpm-build redhat-rpm-config libtool python-twisted-core python-zope-interface PyQt4 desktop-file-utils libcap-ng-devel groff checkpolicy selinux-policy-devel

Let’s add a new user and switch to that user:

adduser ovs; su - ovs

Let’s prepare the build environment and download the source code:

mkdir -p ~/rpmbuild/SOURCES
wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz
cp openvswitch-2.5.0.tar.gz ~/rpmbuild/SOURCES/
tar xfz openvswitch-2.5.0.tar.gz

Now go to the openvswitch directory:

cd openvswitch-2.5.0

Let’s modify some lines in the old rhel6 spec file provided by Nicira (copy and paste):

sed -i "s/Requires: logrotate, python >= 2.7/Requires: logrotate/" rhel/openvswitch.spec
sed -i "/$RPM_BUILD_ROOT\/usr\/bin\/ovs-test/d" rhel/openvswitch.spec
sed -i "/$RPM_BUILD_ROOT\/usr\/bin\/ovs-l3ping/d" rhel/openvswitch.spec
sed -i "/\/usr\/bin\/ovs-parse-backtrace/d" rhel/openvswitch.spec
sed -i "/\/usr\/bin\/ovs-pcap/d" rhel/openvswitch.spec
sed -i "/\/usr\/bin\/ovs-tcpundump/d" rhel/openvswitch.spec
sed -i "/\/usr\/bin\/ovs-vlan-test/d" rhel/openvswitch.spec
sed -i "/\/usr\/share\/man\/man8\/ovs-bugtool.8.gz/d" rhel/openvswitch.spec
sed -i "/\/usr\/share\/openvswitch\/bugtool-plugins/d" rhel/openvswitch.spec
sed -i "/\/usr\/share\/openvswitch\/scripts\/ovs-bugtool-*/d" rhel/openvswitch.spec
sed -i "/\/usr\/share\/openvswitch\/python/d" rhel/openvswitch.spec
sed -i "/\/usr\/share\/openvswitch\/scripts\/ovs-bugtool-*/d" rhel/openvswitch.spec
sed -i "/\/usr\/bin\/ovs-dpctl-top/d" rhel/openvswitch.spec
sed -i "/\/usr\/sbin\/ovs-bugtool/d" rhel/openvswitch.spec
echo "/usr/bin/ovs-testcontroller" >> rhel/openvswitch.spec

Finally, let’s build the RPM packages… and have a cup of coffee while the tests run! At least you can tell if it works… 😛

rpmbuild -bb rhel/openvswitch.spec

Once the build is finished, type exit.

exit

CentOS 6 already provides an openvswitch kernel module, so we’ve only compiled the binary tools.

[root@localhost ~]# modinfo openvswitch
filename: /lib/modules/2.6.32-642.3.1.el6.x86_64/kernel/net/openvswitch/openvswitch.ko
license: GPL
description: Open vSwitch switching datapath
srcversion: 00938868C288DBF055E30F3
depends: libcrc32c,vxlan
vermagic: 2.6.32-642.3.1.el6.x86_64 SMP mod_unload modversions

As root, we’ll install the RPM package.

 yum localinstall /home/ovs/rpmbuild/RPMS/x86_64/openvswitch-2.5.0-1.x86_64.rpm -y

Finally, start the openvswitch service and check that it’s running:

service openvswitch start
...output...
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db [ OK ]
Starting ovsdb-server [ OK ]
Configuring Open vSwitch system IDs [ OK ]
Inserting openvswitch module [ OK ]
Starting ovs-vswitchd [ OK ]
Enabling remote OVSDB managers [ OK ]

service openvswitch status
...output...
ovsdb-server is running with pid 3404
ovs-vswitchd is running with pid 3416

If you want the openvswitch service to start at boot time:

chkconfig openvswitch on

Let’s check that the command-line tools are ready:

ovs-vsctl -V
...output...
ovs-vsctl (Open vSwitch) 2.5.0
Compiled Aug 31 2016 19:54:41
DB Schema 7.12.1
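
If everything is in place, you can run a minimal smoke test (br0 is just an example bridge name; remove it afterwards):

ovs-vsctl add-br br0
ovs-vsctl show
ovs-vsctl del-br br0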

Done. I can’t be sure it will work for you, as I haven’t used Open vSwitch with CentOS 6 for a long time… so any feedback is welcome!

Cheers!

Installing latest RabbitMQ on CentOS 7

This post is a quick reminder for the future that may help you too.

If you want to install the latest RabbitMQ package for your CentOS 7 you can do it in only three steps:

sudo yum install epel-release -y
curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh | sudo bash
sudo yum install rabbitmq-server -y

Then, as always, you can enable the service and start it:

sudo systemctl enable rabbitmq-server
sudo systemctl start rabbitmq-server

Check that the service is running either with:

sudo systemctl is-active rabbitmq-server

or:

sudo systemctl status rabbitmq-server
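
You can also ask RabbitMQ itself for a status report:

sudo rabbitmqctl status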

If you’re serving remote clients, a firewalld rule may be useful:

firewall-cmd --add-port=5672/tcp --zone=public --permanent
firewall-cmd --reload
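
You can confirm the rule is active with:

firewall-cmd --zone=public --list-ports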

See ya!

Reference: https://www.rabbitmq.com/install-rpm.html


Installing CoreOS etcd server on CentOS 7

While I prepare a shell script, or test some of the Ansible roles available at Ansible Galaxy so the installation becomes automatic, here are the steps I followed to install the etcd server by hand on CentOS 7 as quickly as possible.

First of all we have to create some directories (/var/lib/etcd and /etc/etcd) and add the etcd user and group:

mkdir /var/lib/etcd
mkdir /etc/etcd
groupadd -r etcd
useradd -r -g etcd -d /var/lib/etcd -s /sbin/nologin -c "etcd user" etcd
chown -R etcd:etcd /var/lib/etcd

Now we have to add a systemd service definition for our etcd service:

cat << EOT > /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd service
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
ExecStart=/usr/bin/etcd
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOT
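
Since we’ve just dropped a new unit file in place, tell systemd to reload its configuration so the unit becomes visible:

systemctl daemon-reload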

Warning: The etcd service needs a configuration file. We install a really simple one that should be adapted to your needs, e.g. add URLs with your server’s IP address or DNS names so your server isn’t only useful for localhost, and secure the client requests. Read https://github.com/coreos/etcd for more info.

cat &lt;&lt; EOT &gt; /etc/etcd/etcd.conf
 # [member]
 ETCD_NAME=default
 ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
 #ETCD_SNAPSHOT_COUNTER="10000"
 #ETCD_HEARTBEAT_INTERVAL="100"
 #ETCD_ELECTION_TIMEOUT="1000"
 #ETCD_LISTEN_PEER_URLS="http://localhost:2380,http://localhost:7001"
 ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
 ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
 #ETCD_MAX_SNAPSHOTS="5"
 #ETCD_MAX_WALS="5"
 #ETCD_CORS=""
 #
 #[cluster]
 #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380,http://localhost:7001"
 # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
 #ETCD_INITIAL_CLUSTER="default=http://localhost:2380,default=http://localhost:7001"
 #ETCD_INITIAL_CLUSTER_STATE="new"
 #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
 #ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://localhost:4001"
 #ETCD_DISCOVERY=""
 #ETCD_DISCOVERY_SRV=""
 #ETCD_DISCOVERY_FALLBACK="proxy"
 #ETCD_DISCOVERY_PROXY=""
 #
 #[proxy]
 #ETCD_PROXY="off"
 #
 #[security]
 #ETCD_CA_FILE=""
 #ETCD_CERT_FILE=""
 #ETCD_KEY_FILE=""
 #ETCD_PEER_CA_FILE=""
 #ETCD_PEER_CERT_FILE=""
 #ETCD_PEER_KEY_FILE=""
 EOT

Time to download and install the etcd binaries for Linux x86_64. The following commands should work on any Linux distro: they download the latest stable version available, create a directory for each downloaded version and update the symbolic links accordingly. Finally, etcd is run with the version argument to check that the binary works fine.

ETCD_VERSION=`curl -s -L https://github.com/coreos/etcd/releases/latest | grep linux-amd64\.tar\.gz | grep href | cut -f 6 -d '/' | sort -u`
ETCD_DIR=/opt/etcd-$ETCD_VERSION
mkdir $ETCD_DIR
curl -L https://github.com/coreos/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-amd64.tar.gz | tar xz --strip-components=1 -C $ETCD_DIR
ln -sf $ETCD_DIR/etcd /usr/bin/etcd && ln -sf $ETCD_DIR/etcdctl /usr/bin/etcdctl
etcd --version

We can enable and start the etcd server with:

systemctl enable etcd; systemctl start etcd

Check the etcd service status:

systemctl status etcd

● etcd.service - etcd service
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-08-01 10:05:51 UTC; 2s ago
Main PID: 31051 (etcd)
CGroup: /system.slice/etcd.service
└─31051 /usr/bin/etcd

Aug 01 10:05:51 localhost.localdomain etcd[31051]: ready to serve client requests
Aug 01 10:05:51 localhost.localdomain etcd[31051]: serving insecure client requests on localhost:2379, this is strongly discouraged!
Aug 01 10:05:51 localhost.localdomain systemd[1]: Started etcd service.

As you may notice, there’s a warning about “serving insecure client requests on localhost:2379, this is strongly discouraged!”. Once again, please adapt the configuration to your needs and set it up safely.
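
Before moving on, a quick smoke test with etcdctl (using the v2 API, which was the etcdctl default at the time):

etcdctl set /message "hello"
etcdctl get /message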

I’ll try to update this post, so you may want to follow this blog.

Cheers!

Project Atomic – Installing a VM with Vagrant and libvirt, and getting more space for the /var/lib/docker directory

I’m playing with Project Atomic. I use Vagrant on my Fedora 23 desktop, as it helps me increase my productivity when working with VMs :D. As I prefer libvirt over VirtualBox as my Vagrant provider, I install the following packages:

sudo dnf install vagrant-libvirt virt-manager

By default the Atomic Host virtual machine has little space for new containers and images (about 2 GB), so if you don’t remove your containers often you’re not going to have much fun. In this post I install the atomic-host box and assign more space to the /var/lib/docker directory, which is where our images, containers and other Docker files are stored.

I create the Vagrantfile for the official atomic-host box:

vagrant init centos/atomic-host

Then I edit the Vagrantfile, adding a QCOW2 file that will act as a virtual disk (I’m using 30G). I use the vagrant-libvirt documentation as a reference. I add the following lines after config.vm.box = “centos/atomic-host”:

config.vm.provider :libvirt do |libvirt|
   libvirt.storage :file, :size => '30G'
end

I start the virtual machine:

vagrant up --provider libvirt

In the vagrant up logs I can see that a new 30 GB disk has been added to the virtual machine.

==> default: -- Disks: vdb(qcow2,30G)
==> default: -- Disk(vdb): /var/lib/libvirt/images/atomichost_default-vdb.qcow2

Now I open an SSH session:

vagrant ssh

I create a partition on the /dev/vdb disk and change its type to Linux LVM so I can easily add more storage in the future. Only the important parts are shown:

sudo fdisk /dev/vdb


Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): Press Enter
First sector (2048-62914559, default 2048): Press Enter
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-62914559, default 62914559): Press Enter
Using default value 62914559
Partition 1 of type Linux and of size 30 GiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w

The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
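
If you prefer a non-interactive version of the same dialogue (a sketch of the exact keystrokes above; double-check it before piping anything into fdisk on a disk you care about):

echo -e "n\np\n1\n\n\nt\n8e\nw" | sudo fdisk /dev/vdb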

Now I’m going to use Logical Volume Management. That way, if I need more space in the future, I can add a new virtual disk to the volume group and extend the logical volume. First I create the physical volume for LVM:

sudo pvcreate /dev/vdb1
Physical volume "/dev/vdb1" successfully created

I create a volume group and add the /dev/vdb1 partition to that volume group:

sudo vgcreate atomic_vg /dev/vdb1

I create a logical volume that takes all the space available in the volume group:

sudo lvcreate -l 100%FREE -n atomic_lv atomic_vg
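
Each LVM layer can be checked with the standard reporting commands:

sudo pvs; sudo vgs; sudo lvs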

I create a filesystem on the logical volume, which is where all the /var/lib/docker files will be stored. I’m using XFS as my filesystem type.

sudo mkfs.xfs /dev/mapper/atomic_vg-atomic_lv

I add an entry to /etc/fstab:

sudo sh -c "echo '/dev/mapper/atomic_vg-atomic_lv /var/lib/docker xfs defaults 0 0' >> /etc/fstab"

I stop the docker service so no new files are written to the existing /var/lib/docker directory:

sudo systemctl stop docker

I temporarily mount the logical volume under /media:

sudo mount /dev/mapper/atomic_vg-atomic_lv /media

I copy all the existing files from /var/lib/docker to the logical volume

sudo sh -c "cp -r /var/lib/docker/* /media/"
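
Note that cp -r does not preserve ownership or SELinux labels; if you run into permission problems later, an archive copy may work better (my own variant, not part of the original steps):

sudo sh -c "cp -a /var/lib/docker/. /media/"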

I unmount the logical volume:

sudo umount /media

I try to mount the new partition:

sudo mount -a

I check that the new /var/lib/docker is ready

sudo df -kh

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/atomic_vg-atomic_lv   30G   33M   30G   1% /var/lib/docker

There it is: 30 GB for my new images and containers! Finally, I start the Docker engine service again:

sudo systemctl start docker

Well, that was long, wasn’t it? But at least I have more space to play with now!

Note: In case you want to add more space using a new QCOW2 disk after you’ve already run vagrant up: according to this issue, if you change the Vagrantfile to add a new disk (e.g. libvirt.storage :file, :size => '30G') it won’t work after a vagrant reload; no new virtual disk will be added. Alternatively, you can halt the virtual machine, use virt-manager to add a new disk, and then follow the fdisk, pvcreate and mount steps above.

Service definition to run Cockpit on system startup for CentOS Atomic SIG

I’m working these days with Project Atomic. You should have a look at the awesome Quickstart guide.

I’ve chosen to use Vagrant with the CentOS Atomic SIG image, so playing with Project Atomic is really easy (change to virtualbox if you use that provider :D)

vagrant init centos/atomic-host; vagrant up --provider libvirt

One of the first things I’ve tested is Cockpit, the web-based server manager. It’s pretty cool and easy to install following the guide.

Once inside the Project Atomic host, Cockpit’s container is installed with the following commands:

vagrant ssh
sudo atomic run cockpit/ws

Remember, I use this blog so I don’t forget my notes. I’m just sharing the service definition needed to run Cockpit on system startup when working with the CentOS Atomic SIG image (and not Fedora’s version, which is explained in the source for this post). This file must be placed at /etc/systemd/system/cockpitws.service:

[Unit]
Description=Cockpit Web Interface
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=10
ExecStart=/usr/bin/docker run --rm --privileged --pid host -v /:/host --name %p cockpit/ws /container/atomic-run --local-ssh
ExecStop=-/usr/bin/docker stop -t 2 %p

[Install]
WantedBy=multi-user.target

Then just enable and start the service, and the Cockpit container will run and be ready to serve on port 9090 (user vagrant/vagrant or root/vagrant).

sudo systemctl daemon-reload
sudo systemctl enable cockpitws.service
sudo systemctl start cockpitws.service
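
To confirm that the container is up and answering (a quick check; -k is needed because Cockpit uses a self-signed certificate):

sudo docker ps
curl -k -I https://localhost:9090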

[Screenshot: the Cockpit login screen]

Cool stuff Project Atomic and Cockpit.


Installing NGINX on CentOS 7

This is a quick note on how to install the latest NGINX server on CentOS 7, using the packages provided by the NGINX team. I share this post as it may help any visitor.

The official info about the NGINX packages is on NGINX’s site.

As root you can add the repository file for the mainline version:

cat << EOT > /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/\$basearch/
gpgcheck=0
enabled=1
EOT

If you want to use the stable version instead, you’d execute:

cat << EOT > /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/\$basearch/
gpgcheck=0
enabled=1
EOT

Then just use yum:

yum install -y nginx

And manage the service as usual (start the service, enable it at boot time and check the status):

systemctl start nginx

systemctl enable nginx

systemctl status nginx
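
A quick way to check that NGINX is answering (assuming the default config listening on port 80):

curl -I http://localhost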

If you want to check the version you’ve just installed (e.g. I’m using the latest mainline version, July 2016):

# nginx -v
nginx version: nginx/1.11.2

And that’s all, just a note for my reference for the future, hope it helps you too 🙂