Monday, December 26, 2016

Silent night - or "how I accidentally disabled email delivery"

My private email domains are hosted on a Linux server where I have shell access (but not as root). It processes incoming mail with procmail, stores it locally and finally forwards everything to a professionally hosted email server with IMAP access and all that blinky stuff.
The setup is slightly convoluted (aka "historically grown") but works well for me.

But the last few days have been quiet on the email front. Not even the notorious spammers spamming my message-ids (how intelligent!) have apparently been trying to contact me. Now that's suspicious, so I decided to look into it.

A quick test mail from my gmail account did not seem to come through. Next, the old test via telnet to port 25... I had to look up the SMTP protocol, it's been a long time since I last had to resort to this. First try: greylisting... come back later. Second try:
250 Ok: queued as F117E148DE4
Checked the mails on the server: it did not get through.
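For reference, such a manual SMTP test session looks roughly like this (host names and addresses below are placeholders for illustration; only the "queued" reply is the one quoted above):

    $ telnet mail.example.org 25
    220 mail.example.org ESMTP Postfix
    HELO client.example.com
    250 mail.example.org
    MAIL FROM:<test@example.com>
    250 2.1.0 Ok
    RCPT TO:<seife@example.org>
    250 2.1.5 Ok
    DATA
    354 End data with <CR><LF>.<CR><LF>
    Subject: test

    test body
    .
    250 Ok: queued as F117E148DE4
    QUIT
    221 2.0.0 Bye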

Now a few more words on the setup: as I wrote, all mail is forwarded to that professionally hosted IMAP server, where I read it usually with Thunderbird or, if things get bad, with the web frontend.
But since all emails are also stored on the server with shell access, I fetch them from there from time to time via IMAP-over-SSH, using fetchmail and the mailsync tool.

BTW, the fetchmail setup for such a thing is:
poll myacc via shellservername.tld with proto imap:
    plugin "ssh -C %h bin/imapd" auth ssh;
    user seife there is seife here options keep stripcr
    folders Mail/inbox Mail/s3e-spam Mail/thirdfolder
    mda "/usr/bin/procmail -f %F -d %T"
So while trying to check mail, I'm regularly running:
fetchmail && mailsync myacc
(fetchmail runs first, since it passes the mails to procmail, which does the same folder sorting that was already done on the mail server and is much faster than mailsync; mailsync comes second to do the synchronization work: deleting mails on the server that have been deleted locally, etc.)
All looks normal, apart from no new mails arriving.
Until suddenly I noticed that mailsync was synchronizing a folder named "spamassassin.lock". WTF?

Investigating... On the server, there really is an (empty) mailbox named "Mail/spamassassin.lock".
The next place to look is .procmailrc, and there it is, a rule like:

:0fw: spamassassin.lock
* < 1048576
| $HOME/perl/bin/spamassassin
And since everything in procmail is apparently relative to $MAILDIR by default, the lockfile was placed there. Probably a mailsync run came along exactly while the lockfile existed and persisted it as a mailbox, and after that, no mail ever got past this point.

The solution was easy: remove the lockfile, make sure it does not get re-synchronized on the next mailsync run, and reconfigure procmail to use $HOME/spamassassin.lock instead. Now the silent times are over and spam is piling up again.
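The fixed rule simply names the lockfile with an absolute path, so it can never end up inside $MAILDIR:

    :0fw: $HOME/spamassassin.lock
    * < 1048576
    | $HOME/perl/bin/spamassassin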

Saturday, December 24, 2016

Fix for "moto g always booting into recovery"

Today I reinstalled and wiped my old moto g (falcon) phone.
After all was done, it no longer booted into anything but recovery -- no matter which recovery I flashed. It was still possible to boot into fastboot mode (Volume Down + Power), then select "normal system boot", but that's certainly not a good user experience on every power-on.
Additionally, the "charge battery while powered off" screen was no longer working: plugging in power would also boot into recovery.

Some googling finally led me to an xda-developers forum post which has the solution: there is a raw partition in the flash which apparently stores the default boot option for the boot loader; just wiping this partition restores the default boot order.

So when booted into recovery (adb must be enabled), just run

adb shell \
  dd if=/dev/zero of=/dev/block/platform/msm_sdcc.1/by-name/misc

from your computer (adb installed and USB cable connected, of course). The misc partition path shown here is the one usually given for the falcon; double-check it for your device against the forum post before zeroing anything.
This should fix booting (it did for me).

Thursday, July 28, 2016

When "# needsrootforbuild" in OBS does not work...

...always remember that you also need to change /usr/lib/obs/server/

# Allow to build as root, exceptions per package
# the keys are actually anchored regexes
our $norootexceptions = {
        "my-project/root-package" => 1,
        "dev-projects.*/other-package" => 1,
};
I had already forgotten that and wondered why it worked for "root-package" but not for "other-package" (which had not yet been added...)
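For completeness, the marker itself is just a comment line near the top of the package's spec file (the package name has to match an entry in the exception list above):

    # needsrootforbuild
    Name:           other-package
    ...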

Monday, July 11, 2016

"Ghost" keystrokes with libvirt/KVM, SPICE and Windows guests

After offline resizing the image and file system of a Windows guest VM running on KVM, like this:

dd if=/dev/zero of=wxp.img bs=1M seek=10240 count=0  # extend image file to 10 GiB (sparse)
fdisk -c=dos wxp.img      # resize the partition, set it active(!)
losetup -Pf wxp.img       # attach loop device, scan partitions
ntfsresize /dev/loop0p1   # grow the NTFS file system into the partition
losetup -d /dev/loop0     # detach the loop device

Windows (as expected) wanted to run a file system check on the next boot. And on the following boot. And... every time.
I investigated and found out that CHKDSK prompted to "skip this check by pressing any key", and apparently a key was pressed at every boot, even though I did not touch anything.

Long story short: apparently the SPICE drivers this VM is using create "ghost" devices and events during boot, which Windows interprets as key presses. The solution was pretty simple: shut down the VM, switch the configuration from "SPICE server" to "VNC server", boot, wait for CHKDSK to finish, shut down, switch back to "SPICE server".
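In the domain XML (editable with "virsh edit"), this switch amounts to temporarily replacing the graphics element, roughly like this (attributes beyond type depend on your setup):

    <!-- before (and again afterwards): -->
    <graphics type='spice' autoport='yes'/>
    <!-- temporarily, until CHKDSK has finished: -->
    <graphics type='vnc' autoport='yes'/>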

Tuesday, June 28, 2016

My KIWI/OBS talk from oSC'16

Last Friday, at openSUSE Conference 2016, I gave a talk together with Christian Schneemann about KIWI and OBS (the software is not able to manage "two speakers for one talk", which is why I am not listed in the schedule).

The slides from that talk are now available from the B1-Systems website.

Friday, November 27, 2015

Use your distro's kernel in OBS

The Open Build Service has the nifty feature that you can tell it to use a specific kernel to boot the worker VMs that build your software. To use that, you don't need any special setup, just a package which contains a kernel and an initrd:

   /.build.kernel.kvm # used by KVM workers
   /.build.kernel.xen # used by Xen workers

So you just need this package and make sure it is installed in the VM using the VMinstall: tag in the project config.
If the build service worker script detects such a kernel and initrd after preparing the VM, they will be used to boot the worker VM that finally builds your package or image. If they are *not* detected, the kernel the worker server itself is running (usually a SUSE kernel) will be used for the VM instead.
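The detection logic can be sketched like this (a minimal sketch; the function name and fallback path are made up, only /.build.kernel.kvm is real — the actual logic lives in the obs worker scripts):

```shell
#!/bin/sh
# Hypothetical helper: decide which kernel boots the build VM.
pick_kernel() {
    # $1 = root of the freshly prepared build VM
    if [ -e "$1/.build.kernel.kvm" ]; then
        # a package inside the VM shipped its own kernel: use it
        echo "$1/.build.kernel.kvm"
    else
        # otherwise fall back to the kernel the worker host runs
        echo "/boot/vmlinuz-$(uname -r)"
    fi
}
```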

In the openSUSE Buildservice instance, all "recent" SUSE distributions are configured for that: they use the kernel-obs-build package, which gets created automatically when building the kernel rpms.

I am currently using a buildservice instance for cross-distribution package and image builds. The challenges of trying to build RHEL/CentOS 7 images with KIWI in OBS warrant at least one additional blog post, but one thing I noticed was that some of the kiwi stuff, when done with a CentOS 7 userland, apparently also needs a CentOS kernel; otherwise kiwi's parted calls, for example, will exit with code 1 (without issuing an error message, btw).
So I have built a kernel-obs-build from the CentOS 7 kernel and configured my OBS instance to use it, which brought me quite some steps further to building CentOS images with KIWI in OBS.
The code (or rather: the spec files) to "convert" the CentOS kernel to an OBS kernel is available; a short README on how to use it is included.

Note that right now it only works with KVM workers, as I was not able to get the worker code to boot the kernel correctly in a Xen VM, even though the drivers are all there. The reason is probably that the obs worker scripts rely on some specifics of a Xen-specific kernel (e.g. the device names of the block devices passed through to the VM from the config), which do not hold for a generic PV-capable kernel.
But I guess this will improve soon, now that openSUSE has dropped the kernel-xen package, they will face the same issues and hopefully someone will fix them ;)

Sunday, July 12, 2015

Accessing my XFCE desktop with x11vnc

The following is probably old boring stuff for many, but I did not know it and it was astonishingly hard to google for it, so maybe it might be news for others, too.

This week I needed to access the desktop of my machine at home from the office. SSH access and X forwarding were not really sufficient options.
I remembered that a long time ago, KDE already had a "share this desktop" function, which would export the current desktop via VNC and even send an invitation with the credentials via email. As far as I know, GNOME has a similar feature. However, I'm using neither KDE nor GNOME but XFCE, and I could not find such a function. Additionally, I was not at the machine, so interactively setting something up was not really an option.
Finally I came across x11vnc. The short description says it all:
x11vnc allows one to view remotely and interact with real X displays (i.e. a display corresponding to a physical monitor, keyboard, and mouse) with any VNC viewer. In this way it plays the role for Unix/X11 that WinVNC plays for Windows.
This is exactly what I needed. There is even a neat wrapper "x11vnc_ssh" in the openSUSE package that does the tunneling via SSH and everything else, so all you need to do is:

  • log in to your target machine via ssh
  • call "x11vnc -storepasswd" (attention: this will store the password in ~/.vnc/passwd)
  • log out and from your "viewer machine" call "x11vnc_ssh username@targethost"
Note that with my default 13.2 setup, x11vnc_ssh uses invalid options for vncviewer, so either update it from the X11:RemoteDesktop repository, or just remove all options from the vncviewer invocation on line 75 of /usr/bin/x11vnc_ssh, leaving just
if vncviewer :$port $3; then break; fi
That's all you need to do to comfortably access your running desktop!
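Under the hood, the wrapper essentially does the tunneling by hand; a manual equivalent looks roughly like this (hostnames and the port are made up for illustration, and the exact x11vnc options may differ from what x11vnc_ssh uses):

    viewer$ ssh -f -L 5900:localhost:5900 username@targethost \
                "x11vnc -localhost -rfbport 5900 -display :0 -usepw"
    viewer$ vncviewer localhost:0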

Now, as I initially wrote, this is not really "news", but I still did not know it before.