Direct FIDO2/U2F Support in OpenSSH 8.2 on macOS

I'm a big fan of using hardware keys to secure important services, since they are even more secure than OTP tokens.

If you are currently using a Yubikey (or similar) to secure services, you will be happy to hear that starting today, you can use your hardware key directly with OpenSSH in FIDO2/U2F mode. The relevant PR was just merged earlier today and it works as expected:

  1. Be sure to update or install the Homebrew openssh package: $ brew upgrade openssh
  2. Insert your hardware key
  3. Generate a new SSH key using: $ ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
  4. Find a new public/private key pair in ~/.ssh. The private key file will have some content, but it is useless without the hardware key attached.

OpenSSH also supports ed25519-sk as a key type, but this was not supported on the Yubikey 5 I used.

For more details see the official OpenSSH release log.

Wireguard-Go Binary for Use on Low-End OpenVZ Linux VPS

I just went through installing Wireguard on a very small OpenVZ VPS with 128MB RAM. That's just enough to use it as a simple VPN.

I mostly followed this excellent guide, but couldn't compile Wireguard on the server due to a lack of RAM. 🤷‍

Rough steps:

  1. Install Debian 9 (Debian 10 didn't work due to firewall errors)
  2. Make sure tun/tap is enabled on the virtual server.
  3. Install wireguard-tools from the unstable repo
  4. Copy wireguard-go binary (linked below) to /usr/local/bin
  5. Generate private and public key for server and client
  6. Set systemd env var WG_I_PREFER_BUGGY_USERSPACE_TO_POLISHED_KMOD=1 in /lib/systemd/system/wg-quick@.service
  7. Add your server config to /etc/wireguard/wg0.conf
  8. Enable and start the service: systemctl enable wg-quick@wg0 && systemctl start wg-quick@wg0
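For reference, a minimal server-side wg0.conf might look like the sketch below. The addresses, port and key placeholders are my own illustration, not values from the guide:

```ini
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# One [Peer] section per client
PublicKey = <client-public-key>
AllowedIPs = 10.10.0.2/32
```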

Below is my pre-compiled wireguard-go binary, in case you just need something quick to drop on a low-end VPS.

This was built on Debian 10 and works just as well on Debian 9. Ubuntu could work too.

Reject Viruses from Logged-in Users in Rspamd

I'm a big fan of Rspamd for filtering your emails for spam. One thing I wanted to achieve is to reject viruses from logged-in users. This would mainly happen if a user machine is compromised. Rspamd makes this quite easy using the user settings module.

Using the snippet from that website, you can reject all emails with a score higher than, e.g., 15, while greylisting and adding a spam header are disabled for authenticated users.

authenticated {
    priority = high;
    authenticated = yes;
    apply {
        groups_disabled = ["rbl", "spf"];
        actions {
            reject = 15.0;
            greylist = null;
            "add header" = null;
        }
    }
}

I also recommend looking at the Mailcow Github repo. They make it easy to run your own mailserver and have very thoughtful config files for Rspamd, Postfix and others.

Recognize Emails with Spoofed Sender

Recently I noticed a rise in spam emails with spoofed From headers. This is unusual because almost all mail servers require users to log in before sending emails. Below is a typical example, which was already flagged as spam.

As you can see, the sender and recipient addresses are the same in Apple Mail. The content implies that the user's mailbox was hacked.

When checking the server logs, I quickly noticed that the email was actually sent from a different sender address than the one shown here. But why doesn't this address show up as the From address?

Turns out that it's possible to pass any From header in the email's DATA section, even though the SMTP MAIL FROM said something different. While this is surely suspicious, it's also common practice for legitimate email services, like Amazon SES or Sendgrid.

If you wish to know the actual sender, you need to look at the Return-Path field, also known as the envelope sender or bounce address. It will contain the actual sender address that was used for authentication. In my opinion it should also be displayed in the email client if it differs from the From header. Below is an example header.

Received: from ( [])
	by (Postfix) with ESMTPS id DADE6180385A
	for ; Sat, 23 Mar 2019 13:37:34 +0800 (+08)
Received: from [] (unknown [])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate)
	by (Postfix) with ESMTPSA id 33DBFF30D6DB
	for manu@***; Sat, 23 Mar 2019 14:37:13 +0900 (JST)
To: manu@***
X-aid: 0776676839
List-ID: 10f2hkdwzncc5z0xhusfi99g.iud3kqvly5b6il5czck95ezocwxr8kf5cdj
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120824
Date: Sat, 23 Mar 2019 06:37:16 +0100
Feedback-ID: 58286146:06631151.375932:us93:ve
Subject: ***** SPAM 46.3 ***** manu
From: manu@***

Now, what can you do to prevent this kind of spamming? Spamassassin already has a rule called HEADER_FROM_DIFFERENT_DOMAINS that triggers in those cases. Sadly, you can't give this rule a high score, since legitimate services send for different domains all the time.
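If you still want the rule to count for something, a small score is the usual compromise. A sketch for local.cf; the value is my own suggestion, not an official recommendation:

```
# local.cf — keep this well below your spam threshold
score HEADER_FROM_DIFFERENT_DOMAINS 1.5
```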

So the only option left is to educate your users about the Return-Path header and tell them to check it whenever an important email arrives.
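If you want to double-check a message yourself, the comparison is quick to do on the raw mail file. A sketch, using an inline fixture in place of a real .eml:

```shell
# Write a small spoofed-message fixture (in practice you'd inspect the raw .eml):
cat > /tmp/spoofed.eml <<'EOF'
Return-Path: <spammer@bulk-mailer.example>
From: manu@example.org
To: manu@example.org
Subject: Your account was hacked
EOF

# The From header and the envelope sender clearly disagree:
grep -i '^Return-Path:' /tmp/spoofed.eml
grep -i '^From:' /tmp/spoofed.eml
```

If the two lines show different domains, be suspicious.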

Hosting Service for BorgBackup Repos – Free 100GB for Beta Testers

I have blogged about how great Borg is for backing up servers, and your Macbook while on the go. There just wasn't a good hosting service to stash your backup repos that took full advantage of all Borg features. Issues I saw with existing services, like Hetzner's Storagebox:

  • Only a single user account. If one machine gets compromised, it can access all your backups.
  • No support for append-only mode. An attacker could remove old backups.
  • Quotas per-account, not per-repo. If a backup goes wrong on one machine, it will fill up your whole account and stop other backups.

When looking at other "backup" solutions, like S3, B2, Dropbox or Google Drive, you will find these issues:

  • No compression or deduplication. You pay for the full size.
  • A sync service is no real backup, because broken or cryptolocked files will be synced as well, and the good copies are lost.
  • Object storage services are great for many things, but there is no local file cache. So during each run the existing metadata is downloaded. This can be expensive when the provider charges you for API calls (S3).
  • No easy way to encrypt. With GDPR you are responsible for your data. An unencrypted cloud backup is a huge risk for a company.

To solve these problems, I built the first storage service dedicated to Borg repos. It addresses the issues above and allows admins to easily separate backups into different repos. Other features are:

  • Full encryption if you choose to use a key or password when setting up the repo. I will never see the files in your backup.
  • Compression and deduplication. Borg supports a range of compression algorithms. You can choose any one.
  • Economical. Only the compressed and deduplicated data counts against your total quota. So you get roughly 3x more mileage from each MB of storage.
  • Simple admin interface. Quickly add repos and SSH keys. Manage quotas and view current usage.
  • Monitoring. I want to be notified if backups stop working. Preferably before any data is lost. That's why you can set a monitoring interval and will get a notification if no backups are done during that time.
  • Configuration wizard. I always liked Github's copy+paste commands for connecting your local repo. So I added the same for Borg. The wizard lets you choose a repo and displays the relevant commands or a full Borgmatic file.

If you have experienced one or more of the above problems, I'd be happy to have you on board as a beta tester. Just leave your email and I'll send you a registration link early next week. The full service (100GB storage and 5 repos) will be free during beta testing, which will last until mid-2019 or so.

Local and remote backups for macOS and Linux using BorgBackup


  • Oct 2018: there is now a more detailed guide available for macOS.
  • Sept 2018: there is now a hosting solution for Borg repos. See this post

When I recently switched my Macbook, I got very frustrated with Time Machine. I had used it for occasional local backups of my home folder and was planning to move my data from the old to the new machine.

Unfortunately, the Migration Assistant failed to even find my Time Machine drive, so I ended up simply rsyncing everything from the Time Machine backup to a new user folder. After that was done, I added a new user in macOS and just ran a chmod over the whole folder.

After this experience, it's clear that you might as well do your backups with any tool that backs up files, and save yourself the trouble of Time Machine completely.

The best tool I found for the job is BorgBackup. It supports compression, deduplication, remote backups and many kinds of filters. Doing an incremental backup of 1M files takes about 5 minutes locally. Here are some rough steps to get you started:

  1. Install Borg via Homebrew (brew cask install borgbackup) or Apt (apt install borgbackup). If you plan on doing remote backups, Borg needs to be installed on the server as well.
  2. Initialize a new backup. For local backups:
    borg init --encryption=none /Volumes/external-disk/Backups/my-machine  (I'm not using encryption here because the drive is encrypted.)
    For remote backups it's about the same:
    borg init --encryption=none 
  3. Next create a file called ~/.borg-filter . This will list the files you do NOT want to back up. An example:

    The idea is to list folders you can easily recreate, so they are excluded from the backup.

  4. Last, you should prepare a backup command. To back up specific important folders to a remote server, I use something like:
    function borg-backup() {
      NOW=$(date +"%Y-%m-%d_%H-%M")
      borg create -v --stats -C zlib --list --filter=AM --exclude-from ~/.borg-filter $BORG_REPO::$NOW \
        ~/Desktop \
        ~/Documents \
        ~/Pictures
      borg prune -v --list $BORG_REPO --keep-daily=3 --keep-weekly=4 --keep-monthly=12
    }

    This will back up my Desktop, Documents and Pictures to a new time-stamped snapshot. The prune command will rotate and delete old backups. The variable $BORG_REPO holds the repo location chosen in the previous step.

    Also be sure to read the official documentation to tune all options to your needs.
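As an illustration for the exclude file in step 3, it's just one pattern per line. These entries are my own examples, not the ones from my actual filter:

```
# ~/.borg-filter — paths matching these patterns are skipped
*/.cache
*/node_modules
*/Library/Caches
*.pyc
```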

Ansible Playbook to set up Tensorflow and Keras on Ubuntu 16.04

Virtual machines with GPUs are expensive. The cheapest I found so far are from Paperspace. Nevertheless, it can be useful to quickly reproduce your setup without the disadvantages of snapshots or golden master images.

When you manage your hosts with Ansible, you can just run the below playbook against it to set it up from scratch. I use some additional roles to set up my home folder or firewall rules.

Before running it, you need to mirror the cuDNN installer package, since Nvidia doesn't allow direct downloads. That part is marked in line 11.
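The mirroring step boils down to something like the sketch below; the mirror URL and package filename are placeholders of my own, not the playbook's real values:

```yaml
# Fetch cuDNN from a self-hosted mirror, since Nvidia requires a login
- name: Download cuDNN package from private mirror
  get_url:
    url: "https://mirror.example.com/libcudnn6_6.0.21-1+cuda8.0_amd64.deb"
    dest: /tmp/libcudnn.deb

- name: Install cuDNN package
  apt:
    deb: /tmp/libcudnn.deb
```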

Update all macOS Apps with a Single Command

Updates are important. While not as great as on Linux, you can use different package managers on macOS to manage all installed software and easily keep applications updated. Here is the command I use. It requires homebrew, homebrew-cask-upgrade and mas.

function update-all {
    # update cli homebrew
    brew update
    brew upgrade
    brew prune
    brew cleanup

    # Homebrew cask (via homebrew-cask-upgrade)
    brew cu -a -y --cleanup
    brew cleanup --force -s && rm -rf $(brew --cache)

    # Node
    npm update -g

    # Apple App store
    mas upgrade
    softwareupdate --install --all
}


Find sites vulnerable to WordPress Content Injection Vulnerability

WordPress' update cycle is reaching the speed of Windows XP. Even Google is sending out warnings, urging site owners to update. In my case the warnings were not accurate, but there are still many vulnerable sites out there.

One could – for example – use Nerdydata to search the internet's source code for vulnerable WP versions. A simple search across their "Popular sites" dataset reveals close to 300 matches.

Regex used: ver=4.7(\.1)?' 

Using the same trick, you could also identify vulnerable WP installs you are managing. Here is a gist with a short Python script.
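To test a single site, the same regex can also be applied with plain grep. A sketch, using an inline HTML snippet in place of `curl -s` against the live page:

```shell
# Page source as served by a vulnerable install (fixture standing in for
# `curl -s https://example.com/`):
html="<link rel='stylesheet' href='/wp-content/style.css?ver=4.7.1' />"

# Apply the regex from above; a match means WP 4.7 or 4.7.1
echo "$html" | grep -Eo "ver=4\.7(\.1)?'"
```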

Optimize Spamassassin Detection Rate

Email is a terrible way to communicate and should be avoided where possible. Unfortunately it is also the lowest common denominator on the web and will continue to be for the near future.

In the early days of the internet it was easy to run your own mailserver. Due to the absurd quantity of spam this task got increasingly harder and many tech-savvy people gave up and switched to Gmail or other services. This is a pity because a decentralized email infrastructure is harder to surveil, subpoena or shut down. I encourage everyone to run their own mail service if possible.

In this guide I will summarize the steps needed to get an effective spamassassin (SA) setup.

Cheaply retrieve data from Amazon AWS Glacier

When it launched, Amazon Glacier was applauded for providing a super-cheap long-term storage solution. While there are no surprises when uploading and storing files, retrieving them can get expensive. The pricing reflects the fact that Amazon needs to retrieve your files from tape, which is expensive and takes a long time. Several users reported high charges after retrieving their backups. In Amazon's defence, they published a very detailed FAQ on this topic.

The key to getting your files back on the cheap is time. You can retrieve 5% of your total stored data for free each month, but that allowance is calculated per hour, rather than per month or day. Example: if you keep 500GB in Glacier, 500GB * 5% / 30 / 24 ≈ 36MB/hour.
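The allowance math as a quick shell calculation (500GB is the example figure from above; substitute your own vault size):

```shell
# Free retrieval allowance: 5% of stored data per month, spread across each hour
total_gb=500
allowance=$(awk -v gb="$total_gb" 'BEGIN { printf "%.1f", gb * 0.05 * 1024 / 30 / 24 }')
echo "$allowance MB/hour"   # roughly 36MB/hour for 500GB
```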

That's great to know, but how can you keep retrieving 36MB for days or months without doing it manually? If you're on OSX or Linux you can use mt-aws-glacier. It's a Perl script to track and retrieve your Glacier files. Using mtglacier for slow retrieval isn't straightforward. There is an issue to improve it, but for now let's work with what's available.

The tool has two commands to restore files. One initiates the retrieval job on Amazon's side. After the job completes (which takes about 4 hours), the files can be downloaded with the second. With that in mind, we need to tell mtglacier to only request a few files at a time and then sleep for an hour. There is no way to limit the file size to be requested, but the number of files can be limited. As long as you have many small files, this works great.

while true; do
  mtglacier restore --config vault.cfg --max-number-of-files 3
  mtglacier restore-completed --config vault.cfg
  sleep 3600
done

If your hourly allowance is 36MB and your average file size is 10MB, 3 files per hour is about right, with some room for error.

Hope this helps.