Friday, November 27, 2015

Use your distro's kernel in OBS

The Open Build Service has the nifty feature that you can tell it to use a specific kernel to boot the worker VMs that build your software. To use it, you don't need any special setup, just a package that contains a kernel and an initrd at these paths:

   /.build.kernel.kvm # used by KVM workers
   /.build.kernel.xen # used by Xen workers
   /.build.initrd.kvm
   /.build.initrd.xen
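If you want a real-world example: the kernel-obs-build package mentioned below ships exactly such files, and (assuming it is installed) something like this should list them:

   rpm -ql kernel-obs-build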

So you just need such a package and have to make sure it is installed in the VM, using the VMinstall: tag in the project config.
If the build service worker script detects such a kernel and initrd after preparing the VM, they will be used to boot the worker VM that finally builds your package or image. If they are *not* detected, the kernel the worker server itself is running (usually a SUSE kernel) will also be used for the VM.
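For illustration, enabling this boils down to editing the project config with "osc meta prjconf -e <project>" and adding a line like the following (using the package name from the openSUSE case below):

   VMinstall: kernel-obs-build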

In the openSUSE Build Service instance, all "recent" SUSE distributions are configured this way: they use the kernel-obs-build package, which gets created automatically when building the kernel RPMs.

I am currently using a Build Service instance for cross-distribution package and image builds. The challenges of building RHEL/CentOS 7 images with KIWI in OBS warrant at least one additional blog post, but one thing I noticed is that some of the KIWI tooling, when run with a CentOS 7 userland, apparently also needs a CentOS kernel; otherwise KIWI's parted calls, for example, will exit with code 1 (without issuing an error message, by the way).
So I built a kernel-obs-build from the CentOS 7 kernel and configured my OBS instance to use it, which brought me quite a few steps closer to building CentOS images with KIWI in OBS.
The code (or rather: the spec files) to "convert" the CentOS kernel into an OBS kernel is at https://github.com/seife/kernel-obs-build; a short README on how to use it is included.
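For reference, a rough sketch of how the package can end up in your own OBS instance (the API URL and project name are of course placeholders; the README has the details):

   git clone https://github.com/seife/kernel-obs-build
   osc -A https://api.example.org checkout home:me:centos
   cd home:me:centos
   osc mkpac kernel-obs-build
   cp ../kernel-obs-build/* kernel-obs-build/
   cd kernel-obs-build
   osc addremove
   osc commit -m "kernel-obs-build built from the CentOS 7 kernel"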

Note that right now this only works with KVM workers, as I was not able to get the worker code to boot the kernel correctly in a Xen VM, even though all the drivers are there. The reason is probably that the OBS worker scripts rely on some specifics of a Xen-only kernel (e.g. the device names of the block devices that are passed through to the VM from the config), which do not hold for a generic PV-capable kernel.
But I guess this will improve soon: now that openSUSE has dropped the kernel-xen package, they will face the same issues, and hopefully someone will fix them ;)

Sunday, July 12, 2015

Accessing my XFCE desktop with x11vnc

The following is probably old, boring stuff for many, but I did not know it and it was astonishingly hard to google for, so maybe it is news to others, too.

This week I needed to access the desktop of my machine at home from the office. SSH access and X forwarding were not really sufficient options.
I remembered that a long time ago, KDE already had a "share this desktop" function, which would export the current desktop via VNC and even send an invitation with the credentials via email. As far as I know, GNOME has a similar feature. However, I'm using neither KDE nor GNOME but XFCE, and I could not find such a function. Additionally, I was not at the machine, so setting something up interactively was not really an option.
Finally I came across x11vnc. The short description says it all:
x11vnc allows one to view remotely and interact with real X displays (i.e. a display corresponding to a physical monitor, keyboard, and mouse) with any VNC viewer. In this way it plays the role for Unix/X11 that WinVNC plays for Windows.
This does exactly what I needed. There is even a neat wrapper, "x11vnc_ssh", in the openSUSE package that takes care of the SSH tunneling and everything else, so all you need to do is:

  • log in to your target machine via ssh
  • call "x11vnc -storepasswd" (attention: this will store the password in ~/.vnc/passwd)
  • log out and from your "viewer machine" call "x11vnc_ssh username@targethost"
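Put together, a session looks roughly like this (user and host names are of course placeholders):

   # one-time setup, on the target machine:
   ssh user@targethost
   x11vnc -storepasswd       # writes the password to ~/.vnc/passwd
   exit
   # afterwards, from the viewer machine:
   x11vnc_ssh user@targethost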
Note that with my default 13.2 setup, x11vnc_ssh uses invalid options for vncviewer, so either update it from the X11:RemoteDesktop repository, or simply remove all options from the vncviewer invocation on line 75 of /usr/bin/x11vnc_ssh, leaving just
if vncviewer :$port $3; then break; fi
That's all you need to do to comfortably access your running desktop!

Now, as I wrote initially, this is not really "news", but I still did not know it before.

Sunday, March 15, 2015

FITRIM/discard with qemu/kvm for thin provisioning

My notebook computer is running with an SSD, and I usually create logical volumes for the KVM VMs I install on it for testing purposes. On my normal file systems, I regularly run "fstrim" manually to help the SSD firmware figure out which blocks can be reused. However, the LVs of the virtual machines usually stayed un-TRIMed. I had heard that KVM/QEMU now supports discard commands, but had not yet gotten around to testing it.
I finally got around to figuring out how it works:

First, you need to switch the VM to using virtio-scsi instead of virtio-blk:

Before:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/main/factory'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
After:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/main/factory'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'/>
Note the added SCSI controller. The only things you need to change are "target" and "address"; if your source is different, that's fine.
Now check that your VM still boots. If it does not, it is probably missing the virtio-scsi driver in the initrd. Reboot with the old configuration and build an initrd that includes all drivers, or at least the virtio-scsi driver. Another possible problem is the change from "/dev/vda1" to "/dev/sda1": check your fstab and use a UUID or a filesystem label for booting. Neither problem hit me on a stock Factory install (it uses UUIDs by default and had all drivers in the initrd), but a hand-built kernel (built with "make localmodconfig"...) failed to boot, so be prepared.
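If you do need to rebuild the initrd, a minimal sketch (assuming a dracut-based setup; the module names are what I would expect, adjust as needed) would be, inside the VM and still booted with the old configuration:

   dracut --force --add-drivers "virtio_scsi sd_mod"
   # or, to simply include all drivers:
   dracut --force --no-hostonly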

Now you are using virtio-scsi for your device, but fstrim will still give you an "operation not supported" message. You'll need another parameter in your VM's configuration:
<driver name='qemu' type='raw' discard='unmap'/>
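With libvirt, one way to add that attribute is simply (assuming the domain is called "factory", which is just my naming):

   virsh edit factory    # add discard='unmap' to the <driver> line of the disk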
Restart the VM, and...
factory-vm:~ # fstrim -v /
/: 8,7 GiB (9374568448 bytes) trimmed
factory-vm:~ # 
Now what about thin provisioning?
I converted the same VM from an LV to a plain raw file.
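One way to do the conversion, as a sketch with my paths, is qemu-img, which skips zero blocks and thus writes a sparse file (the VM has to be shut down for this):

   qemu-img convert -O raw /dev/main/factory /local/libvirt-images/factory.raw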
This is the file on the host; it is sparse:
susi:/local/libvirt-images # ls -lh factory.raw
-rw-r----- 1 qemu qemu 20G Mar 15 14:05 factory.raw
susi:/local/libvirt-images # du -sh factory.raw
12G     factory.raw
Now let's delete some stuff inside the VM and run fstrim:
factory-vm:~ # du -sh /home/seife/linux-2.6/
3.9G    /home/seife/linux-2.6/
factory-vm:~ # rm -rf /home/seife/linux-2.6/
factory-vm:~ # fstrim -v /
/: 12.7 GiB (13579157504 bytes) trimmed
Checking again on the host:
susi:/local/libvirt-images # ls -lh factory.raw
-rw-r----- 1 qemu qemu 20G Mar 15 14:08 factory.raw
susi:/local/libvirt-images # du -sh factory.raw
6.4G    factory.raw
So this is really neat, as you can now free up space on the host after cleaning up in the VM. Maybe I should reconsider my "put all VMs into logical volumes" strategy, as it wastes quite a bit of valuable SSD space in my case.