Recognize Emails with Spoofed Sender

Recently I noticed a rise in spam emails with spoofed From headers. This is unusual because almost all mail servers require users to log in before sending emails. Below is a typical example, which was already flagged as spam.

As you can see, the sender and recipient addresses are the same in Apple Mail. The content implies that the user’s mailbox was hacked.

When checking the server logs, I quickly noticed that the email was actually sent from a different sender address than the one shown here: katsu@kobatake.e-arc.jp, to be precise. But why doesn’t this address show up as the From address?

It turns out that it’s possible to pass any From header in the message itself, even though the SMTP MAIL FROM said something different. While this is surely suspicious, it’s common practice for email services like Amazon SES or Sendgrid.
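
You can reproduce the effect with an SMTP testing tool like swaks. Below is a minimal sketch; the mail server and recipient are placeholders, and only the envelope sender is taken from the log above.

# --from sets the SMTP MAIL FROM (this is what ends up in Return-Path),
# --header overrides the From header that the mail client displays.
swaks --server mail.example.com \
      --from katsu@kobatake.e-arc.jp \
      --to manu@example.com \
      --header "From: manu@example.com"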

If you want to know the actual sender, you need to look at the Return-Path field, also known as the envelope sender or bounce address. It contains the sender address that was actually used for authentication. In my opinion it should also be displayed in the email client whenever it differs from the From header. Below is an example header; the relevant fields are Return-Path, From and X-Sender.

Return-Path: katsu@kobatake.e-arc.jp
Received: from kobatake.e-arc.jp (kobatake.e-arc.jp [122.1.203.242])
	by mail.snapdragon.cc (Postfix) with ESMTPS id DADE6180385A
	for ; Sat, 23 Mar 2019 13:37:34 +0800 (+08)
Received: from [70.24.89.189.micron.com.br] (unknown [189.89.24.70])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate)
	by kobatake.e-arc.jp (Postfix) with ESMTPSA id 33DBFF30D6DB
	for manu@***; Sat, 23 Mar 2019 14:37:13 +0900 (JST)
To: manu@***
X-aid: 0776676839
List-ID: 10f2hkdwzncc5z0xhusfi99g.iud3kqvly5b6il5czck95ezocwxr8kf5cdj
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120824
 Thunderbird/15.0
X-CSA-Complaints: complaints@kobatake.e-arc.jp
List-Subscribe: subscribe@kobatake.e-arc.jp
Date: Sat, 23 Mar 2019 06:37:16 +0100
Feedback-ID: 58286146:06631151.375932:us93:ve
Subject: ***** SPAM 46.3 ***** manu
Abuse-Reports-To: abuse@kobatake.e-arc.jp
Message-ID: 
From: manu@***
X-Sender: katsu@kobatake.e-arc.jp

Now, what can you do to prevent this kind of spam? SpamAssassin already has a rule called HEADER_FROM_DIFFERENT_DOMAINS that triggers in these cases. Sadly, you can’t give this rule a high score, since legitimate services send for different domains all the time.
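
For reference, the local.cf syntax for adjusting the rule’s score looks like this. The value is only an illustration, since anything high will punish legitimate cross-domain senders like SES or Sendgrid.

# in local.cf (path varies by distro, often /etc/spamassassin/local.cf)
score HEADER_FROM_DIFFERENT_DOMAINS 1.5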

So the only option left is to educate your users about the Return-Path header and tell them to look at it when it’s an important email.

Hosting Service for BorgBackup Repos – Free 100GB for Beta Testers

I have blogged about how great Borg is for backing up servers and your MacBook while on the go. There just wasn’t a good hosting service to stash your backup repos that took full advantage of all Borg features. These are the issues I saw with existing services like Hetzner’s Storage Box and rsync.net:

  • Only a single user. If one machine gets compromised, an attacker can access all your backups.
  • No support for append-only mode. An attacker could remove old backups.
  • Quotas are per account, not per repo. If a backup goes wrong on one machine, it will fill up your whole account and stop other backups.

When looking at other “backup” solutions, like S3, B2, Dropbox or Google Drive, you will find these issues:

  • No compression or deduplication. You pay for the full size.
  • A sync service is no real backup because broken or cryptolocked files will be synced as well and the good copies lost.
  • Object storage services are great for many things, but there is no local file cache. So during each run the existing metadata is downloaded. This can be expensive when the provider charges you for API calls (S3).
  • No easy way to encrypt. With GDPR you are responsible for your data. An unencrypted cloud backup is a huge risk for a company.

To solve these problems I built BorgBase.com, the first storage service dedicated to Borg repos. It solves the issues above and allows admins to easily separate backups into different repos. Other features:

  • Full encryption if you choose to use a key or password when setting up the repo. I will never see the files in your backup.
  • Compression and deduplication. Borg supports a range of compression algorithms. You can choose any one.
  • Economical. Only the compressed and deduplicated data counts against your total quota. So you get roughly 3x more mileage from each MB of storage.
  • Simple admin interface. Quickly add repos and SSH keys. Manage quotas and view current usage.
  • Monitoring. I want to be notified if backups stop working. Preferably before any data is lost. That’s why you can set a monitoring interval and will get a notification if no backups are done during that time.
  • Configuration wizard. I always liked GitHub’s copy-and-paste commands to connect your local repo, so I added the same for Borg. The wizard lets you choose a repo and displays the relevant commands or a full Borgmatic file (see the sketch after this list).
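
The commands produced by the wizard look roughly like the sketch below. The repo URL here is purely illustrative; the real one comes from the admin interface.

# initialize an encrypted repo, then create a first archive
borg init --encryption=repokey-blake2 ssh://user@repo.example.com/./repo
borg create ssh://user@repo.example.com/./repo::first-backup ~/Documents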

If you have experienced one or more of the above problems, I’d be happy to have you on board as a beta tester. Just leave your email on BorgBase.com and I’ll send you a registration link early next week. The full service (100GB of storage and 5 repos) will be free during beta testing, which will last until mid-2019 or so.

Local and remote backups for macOS and Linux using BorgBackup

Updates:

  • Oct 2018: there is now a more detailed guide available for macOS.
  • Sept 2018: there is now a hosting solution for Borg repos. See this post.

When I recently switched my MacBook, I got very frustrated with Time Machine. I had used it for occasional local backups of my home folder and was planning to move my data from the old to the new machine.

Unfortunately, the Migration Assistant failed to even find my Time Machine drive, so I ended up simply rsyncing everything from the Time Machine backup to a new user folder. After that was done, I added a new user in macOS and just ran a chmod over the whole folder.

After this experience it’s clear that you might as well do your backups with any tool that backs up files, and save yourself the trouble of Time Machine completely.

The best tool I found for the job is BorgBackup. It supports compression, deduplication, remote backups and many kinds of filters. An incremental backup of one million files takes about 5 minutes locally. Here are some rough steps to get you started:

  1. Install Borg via Homebrew (brew cask install borgbackup) or apt (apt install borgbackup). If you plan on doing remote backups, Borg needs to be installed on the server as well.
  2. Initialize a new backup. For local backups:
    borg init --encryption=none /Volumes/external-disk/Backups/my-machine
    (I’m not using encryption here because the drive is encrypted.) For remote backups it’s about the same:
    borg init --encryption=none my-backup-host.net:/backups/my-machine
  3. Next, create a file called ~/.borg-filter. It lists the files you do NOT want to back up. An example:
    *.ab
    */.DS_Store
    */.tox
    /Users/manu/.cocoapods
    /Users/manu/.Trash
    /Users/manu/.pyenv/versions
    /Users/manu/.gem
    /Users/manu/.npm
    /Users/manu/.cpanm

    This list covers some folders you can easily recreate, so they don’t need to be backed up.

  4. Last, prepare a backup command. To back up specific important folders to a remote server, I use something like:
    function borg-backup() {
      NOW=$(date +"%Y-%m-%d_%H-%M")
      borg create -v --stats -C zlib --list --filter=AM --exclude-from ~/.borg-filter $BORG_REPO::$NOW \
        ~/Desktop \
        ~/Documents \
        ~/Pictures \
        ~/Library/Fonts
    
      borg prune -v --list $BORG_REPO --keep-daily=3 --keep-weekly=4 --keep-monthly=12
    }

    This will back up my Desktop, Documents, Pictures and Fonts to a new time-stamped snapshot. The prune command rotates and deletes old backups. The variable $BORG_REPO contains the repo path chosen in the previous step.

    Also be sure to read the official documentation to tune all options to your needs.
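
    To tie it together, export the repo path and call the function. The path below is just the local example from step 2; a remote repo works the same way.

    # repo path taken from the local example above
    export BORG_REPO=/Volumes/external-disk/Backups/my-machine
    borg-backup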

Ansible Playbook to set up Tensorflow and Keras on Ubuntu 16.04

Virtual machines with GPUs are expensive. The cheapest I found so far are from Paperspace. Nevertheless, it can be useful to quickly reproduce your setup without the disadvantages of snapshots or golden master images.

When you manage your hosts with Ansible, you can just run the playbook below against them to set everything up from scratch. I use some additional roles to set up my home folder and firewall rules.

Before running it, you need to mirror the cuDNN installer package, since Nvidia doesn’t allow direct downloads. That part is marked in line 11.
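
Running the playbook against a GPU host would then look roughly like this; the inventory and playbook file names are placeholders for whatever you use:

# hypothetical file names, adjust to your inventory and playbook
ansible-playbook -i hosts tensorflow-keras.yml --limit gpu-box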

Update all macOS Apps with a Single Command

Updates are important. While not as convenient as on Linux, you can use different package managers to manage all installed software and easily keep applications updated. Here is the command I use. It requires Homebrew, homebrew-cask-upgrade and mas.

function update-all {
    # update cli homebrew
    brew update
    brew upgrade
    brew prune
    brew cleanup

    # Homebrew cask (via https://github.com/buo/homebrew-cask-upgrade)
    brew cu -a -y --cleanup
    brew cleanup --force -s && rm -rf $(brew --cache)

    # Node
    npm update -g

    # Apple App store
    mas upgrade
    softwareupdate --install --all
}


Find sites vulnerable to WordPress Content Injection Vulnerability

WordPress’ update cycle is reaching the speed of Windows XP. Even Google is sending out warnings urging site owners to update. In my case the warnings weren’t accurate, but there are still many vulnerable sites out there.

One could – for example – use Nerdydata to search the internet’s source code for vulnerable WP versions. A simple search across their “Popular sites” dataset reveals close to 300 matches.

Regex used: ver=4.7(\.1)?'

Using the same trick, you can also identify vulnerable WP installs you are managing. Here is a gist with a short Python script.
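
For a quick manual check of a single site, a rough shell equivalent could look like this. The URL is a placeholder and the grep pattern mirrors the regex above; the gist is the more complete option.

# flag a site whose pages still reference WordPress 4.7 or 4.7.1 assets
curl -s https://example.com/ | grep -E "ver=4\.7(\.1)?'" && echo "possibly vulnerable"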

Optimize Spamassassin Detection Rate

Email is a terrible way to communicate and should be avoided where possible. Unfortunately it is also the lowest common denominator on the web and will continue to be for the near future.

In the early days of the internet it was easy to run your own mailserver. Due to the absurd quantity of spam this task got increasingly harder and many tech-savvy people gave up and switched to Gmail or other services. This is a pity because a decentralized email infrastructure is harder to surveil, subpoena or shut down. I encourage everyone to run their own mail service if possible.

In this guide I will summarize the steps needed to get an effective SpamAssassin (SA) setup.

Cheaply retrieve data from Amazon AWS Glacier

When it launched, Amazon Glacier was applauded for providing a super-cheap long-term storage solution. While there are no surprises when uploading and storing files, retrieving them can get expensive. The pricing reflects the fact that Amazon needs to retrieve your files from tape, which is expensive and takes a long time. Several users reported high charges after retrieving their backups. In its defence, Amazon published a very detailed FAQ on the topic.

The key to getting your files back cheaply is time. You can retrieve 5% of your total storage for free each month, but that allowance is calculated per hour rather than per month or day. Example: you keep 500GB in Glacier. 500GB × 5% / 30 days / 24 hours ≈ 35MB per hour.
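
As a quick sanity check, you can compute your own hourly allowance in the shell; adjust TOTAL_GB to the amount you store.

# prints "35 MB per hour" for 500GB of stored data
TOTAL_GB=500
echo "$(( TOTAL_GB * 1024 * 5 / 100 / 30 / 24 )) MB per hour"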

That’s good to know, but how can you keep retrieving ~35MB per hour for days or months without doing it manually? If you’re on OSX or Linux you can use mt-aws-glacier, a Perl script to track and retrieve your Glacier files. Using mtglacier for slow retrieval isn’t straightforward. There is an issue to improve it, but for now let’s work with what’s available.

The tool has two commands to restore files: restore initiates the retrieval job on Amazon’s side, and restore-completed downloads the files once the job is done (that takes about 4 hours). With that in mind, we need to tell mtglacier to only request a few files and then sleep for an hour. There is no way to limit the size of the requested files, but their number can be limited. As long as you have many small files, this works great.

while true
do
  mtglacier restore --config vault.cfg --max-number-of-files 3  # request the next 3 files
  mtglacier restore-completed --config vault.cfg                # download finished ones
  sleep 3600                                                    # wait an hour
done

If your hourly allowance is 35MB and your average file size is 10MB, 3 files per hour is about right, with some margin for error.

Hope this helps.

Yahoo: Email not accepted for policy reasons

Yahoo failed as an internet company for a reason. Try sending an email with a link to a bank website, e.g. CIMB (popular across Asia):

http://www.cimb-bizchannel.com.my/index.php?ch=srvpack

Your email will be rejected by Yahoo. Just awesome…

554 Message not allowed - [PH01] Email not accepted for policy reasons

Workaround: Use a shortlink to hide your URL. E.g. http://goo.gl/tPb19A. Now your phishing emails will arrive safely. 😉

Download Uber ride history to Python Pandas

With Uber rides this cheap and self-driving cars around the corner, I doubt that future generations will have their own cars, except for extreme use cases like commuting from the countryside.

Personally, I spent EUR 91 on Uber this year (2 months) and it got me 260 km. That’s 0.35 EUR/km.

There is an API to download your rides, but getting receipts/prices didn’t work for me, so I had to scrape them from the website directly. Hope Uber allows getting prices from their API at some point.

This is the code I used. The Uber-logins aren’t automated yet.

New Release of invoice2data

Thanks to some awesome contributors, there is a new release of invoice2data. This Python package allows you to extract structured data from PDF invoices. Major enhancements:

  • powerful YAML-based template format for new invoice issuers
  • improved date parsing thanks to dateparser
  • improved PDF conversion thanks to a new feature in xpdf
  • better testing and CI
  • option to add multiple keywords and regexes to each field
  • option to define currency and date format (day or month first?)

All details and downloads are on GitHub.
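
To try the new release, installation and a first run look roughly like this; the PDF file name is a placeholder, and the README covers template options in detail.

# install or upgrade the package, then extract data from a sample invoice
pip install --upgrade invoice2data
invoice2data my-invoice.pdf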

Unit testing for Jupyter (iPython) notebooks

At Quantego, we do most of the high-level work that supports energy analysts in Jupyter Notebooks. This allows us to pull several Java and Python packages together into a highly productive work environment.

Sample notebooks are hosted on GitHub and distributed with our Docker images. Of course we prefer our sample notebooks to work when people run them. They also uncover potential problems, since they run at a very high level and thus use almost all available features.

If you have a similar setup – for example in a data analytics-driven environment – the following could work for you as well:

  1. Make sure your notebooks run correctly with “Run All”. When testing you may try different things and run cells out of order; for automated testing to work, they should all run in sequence.
  2. Test locally with
    jupyter nbconvert --to=html --ExecutePreprocessor.enabled=True my-notebook.ipynb

    This will convert your notebooks to HTML. We’re not interested in the output, only in potential errors during conversion. This only works with Jupyter and IPython >= 4; previous versions simply ignore errors.

  3. Next you could just run the same command in an isolated Docker container or in a CI step
    docker run my_container /bin/sh -c \
      "/usr/local/bin/jupyter nbconvert \
          --to=html --ExecutePreprocessor.enabled=True \
          --ExecutePreprocessor.timeout=3600 \
          samples/my-sample.ipynb"

    A full working example for CircleCI can be found in our sample-repo.

Shell Function to Remove all Metadata from PDF

A handy function to remove all metadata from a PDF file. When done, it shows the remaining metadata for inspection. Requires pdftk, exiftool, qpdf and pdfinfo (from Poppler) to be installed.

Combines commands from here and here. Good job, guys.

clean_pdf() {
 # Blank out all Info dictionary values via pdftk
 pdftk $1 dump_data | \
  sed -e 's/\(InfoValue:\)\s.*/\1\ /g' | \
  pdftk $1 update_info - output clean-$1

 # Remove remaining metadata (XMP etc.) with exiftool, then list what it still sees
 exiftool -all:all= clean-$1
 exiftool -all:all clean-$1
 exiftool -extractEmbedded -all:all clean-$1
 # Rewrite the file so the removed metadata objects are actually dropped
 qpdf --linearize clean-$1 clean2-$1

 # Show whatever metadata is left
 pdftk clean2-$1 dump_data
 exiftool clean2-$1
 pdfinfo -meta clean2-$1
}

After adding this snippet to ~/.profile or pasting it into a shell, you can just run:

clean_pdf my-unclean.pdf

Incremental FTP backups

If you happen to have only FTP access to a server or account (cPanel) you’re looking after, LFTP is an efficient tool for keeping incremental backups. The script below makes hard links of the previous backup and updates it, copying and storing only changed files.

#!/usr/bin/env bash
username='xxx'
password='xxx'
host='ftp.host'
localBackupDir='/backups/host'
remoteDir='/public_html/'

cd $localBackupDir

# Rotate the last backups, dropping the oldest
rm -rf backup.3
mv backup.2 backup.3
mv backup.1 backup.2
mv backup.0 backup.1

# Hard-link the newest backup so unchanged files take no extra space
cp -al backup.1 backup.0 #-al or -r

# Mirror only files that are newer on the server into backup.0
lftp -e "set ssl:verify-certificate no; \
         mirror --only-newer --parallel=4 $remoteDir $localBackupDir/backup.0;\
         exit"\
     -u $username,$password $host
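
To run this unattended, you could save it as a script and add a cron entry along these lines; the script path and schedule are placeholders.

# run the FTP backup every night at 3:00 (hypothetical script path)
0 3 * * * /usr/local/bin/ftp-backup.sh >> /var/log/ftp-backup.log 2>&1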