Shell Function to Remove all Metadata from PDF

A handy function to remove all metadata from a PDF file. When done, it prints the remaining metadata for inspection. Requires pdftk, exiftool, qpdf and pdfinfo to be installed.

Combines commands from here and here. Good job, guys.

clean_pdf() {
  # Blank out all InfoValue fields in the metadata dump, then write clean-<file>
  pdftk "$1" dump_data | \
    sed -e 's/\(InfoValue:\)\s.*/\1\ /g' | \
    pdftk "$1" update_info - output "clean-$1"
  # Strip XMP/EXIF metadata, including embedded documents
  exiftool -all:all= "clean-$1"
  exiftool -all:all "clean-$1"
  exiftool -extractEmbedded -all:all "clean-$1"
  # Linearizing rewrites the file, dropping the orphaned metadata objects
  qpdf --linearize "clean-$1" "clean2-$1"
  # Show what's left for inspection
  pdftk "clean2-$1" dump_data
  exiftool "clean2-$1"
  pdfinfo -meta "clean2-$1"
}

After adding this snippet to your ~/.profile or pasting it into the shell, you can simply run

clean_pdf my-unclean.pdf

Incremental FTP backups

If you happen to only have FTP access to a server or account (cPanel) you're looking after, LFTP is an efficient tool to keep incremental backups. The script below hard-links the previous backup and then updates it, copying and storing only the files that changed.

#!/usr/bin/env bash
# Expects $localBackupDir, $remoteDir, $username, $password and $host to be set.
cd "$localBackupDir" || exit 1
# Rotate the last four backups
rm -rf backup.3
mv backup.2 backup.3
mv backup.1 backup.2
mv backup.0 backup.1
cp -al backup.1 backup.0  # -al makes hard links; use -r for a full copy instead
lftp -e "set ssl:verify-certificate no; \
         mirror --only-newer --parallel=4 $remoteDir $localBackupDir/backup.0; \
         quit" \
     -u "$username,$password" "$host"

Extend Pandas DataFrame with custom functions and attributes

We love working with Pandas DataFrames. We use them to store and analyze results from simulation runs. On top of our data matrix and a multi-level index, we also need to accommodate custom plotting functions and attributes from the previous simulation run.

Subclassing pandas.DataFrame for this task was a no-brainer. The new version 0.16.1 (to be released in the coming days) includes some fixes that make working with subclasses of complex data frames (DF) easier. Here is an example of what can be done. First, define two new classes for pandas.Series (a single-column DF) and pandas.DataFrame. You can define new functions or attributes as needed.

import pandas


class CustomSeries(pandas.Series):
    @property
    def _constructor(self):
        return CustomSeries

    def custom_series_function(self):
        return 'OK'


class CustomDataFrame(pandas.DataFrame):
    """My custom dataframe"""
    def __init__(self, *args, **kw):
        super(CustomDataFrame, self).__init__(*args, **kw)

    @property
    def _constructor(self):
        return CustomDataFrame

    _constructor_sliced = CustomSeries

    def custom_frame_function(self):
        return 'OK'

Notice _constructor and _constructor_sliced: they make sure you get the correct class back when slicing the DF. Note that _constructor is a property returning the class, not the class itself.

Via self  you have convenient access to all Pandas functions and can even roll your own.
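A quick self-check (a minimal sketch, assuming pandas ≥ 0.16.1) that slicing really hands back the custom classes:

```python
import pandas

class CustomSeries(pandas.Series):
    @property
    def _constructor(self):
        return CustomSeries

class CustomDataFrame(pandas.DataFrame):
    @property
    def _constructor(self):
        return CustomDataFrame

    _constructor_sliced = CustomSeries

df = CustomDataFrame({'a': [1, 2], 'b': [3, 4]})

print(type(df[['a']]).__name__)  # CustomDataFrame: 2D slices use _constructor
print(type(df['a']).__name__)    # CustomSeries: 1D slices use _constructor_sliced
```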

Regex to find phone numbers in every format

I couldn't find a truly universal regular expression (regex) to match phone numbers, no matter from which country and in which format. They all seemed to be limited in some way. Even named entity extraction APIs require you to set a country to find phone numbers.

In the end I rolled my own regex. It simply looks for a certain number of digits, plus the characters generally used to make phone numbers human-readable. If you are looking to match longer or shorter numbers, you can just change the quantifiers. Some examples it will match:

540 297 1860
0090 530 229 12 04
+66 (0) 28340463
058 218 0600
03- 7722 5012
+62 – 21 – 5694 2002
+34 918 380 082
+90 532 643 34 34
+7 495 228 3513
+ 7 702 270 38 13 + 7 777

And here is the regex:

(?!.*[a-zA-Z\,:])(?=(\D*\d){7,14})([\+\d\(]{1,2}.{6,23}\d)

To use it in Python:

import re

rex = r'(?!.*[a-zA-Z\,:])(?=(\D*\d){7,14})([\+\d\(]{1,2}.{6,23}\d)'
# re.findall() would return the group tuples here, so take the full match instead:
numbers = [m.group() for m in re.finditer(rex, str_with_phone_numbers)]


Let me know if this is useful for you or if you see room for improvement. Currently the biggest issue I see is that the matching ranges for digits and total characters are unrelated: because of the many filler characters, higher values are needed, and those can lead to false negatives. Best test it yourself.
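For a quick self-check, here is the regex run against a few of the sample numbers from the list above:

```python
import re

rex = r'(?!.*[a-zA-Z\,:])(?=(\D*\d){7,14})([\+\d\(]{1,2}.{6,23}\d)'

text = """540 297 1860
+34 918 380 082
+90 532 643 34 34"""

# re.findall() would return the group tuples, so take the full match instead.
numbers = [m.group() for m in re.finditer(rex, text)]
print(numbers)
# ['540 297 1860', '+34 918 380 082', '+90 532 643 34 34']
```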

Python clipboard access

I was using Python and Jinja2 to generate some tables with 100+ rows for WordPress. This package saved me the extra step of opening a file and copying and pasting from there.

There are many other ways to integrate it into semi-automated workflows.

Check it out here.

Extract structured data from PDF invoices

Most invoices exist in electronic format. They are generated from structured data and need to be entered as structured data. It's a shame that we still need humans to manually extract data points, like amount, date or issuer, from them.

Over the last days, I tried a few online invoicing solutions, like Shoeboxed, but none of them does a good job of automatically recognizing new invoices. Some do it manually and charge accordingly.

Currently I don't see a way to automatically get the data. PDFs are simply not made for this. The best we can do is to add templates for a specific invoice format and use them to extract the data. I have created a proof-of-concept library, which is open source on GitHub.

If you have any thoughts on what to improve, or would like to extend this for use in a production accounting setup, let me know.

Scalable Docker Monitoring with Fluentd, Elasticsearch and Kibana 4


Docker is a great set of technologies. Once you are comfortable using it, you are presented with a set of challenges you didn't have before. To name a few:

  • log consolidation: How to retrieve log files from dozens of containers?
  • monitoring: How much RAM and CPU is each container using?

There are a few articles on this topic out there. None of the solutions really convinced me, but they all had some nice features, which I chose to combine here.

Linksnappy Command Line Downloader (Python)

Simple Python script to download files via Linksnappy.

#! /usr/bin/env python

import requests
import json
import sys

USERNAME = 'my username'
PASSWORD = 'my pass'

params = {'link': sys.argv[1],
          'type': '',
          'username': USERNAME,
          'password': PASSWORD}

# The API endpoint URL is missing from the original post.
resp = requests.post('',
                     data={'genLinks': json.dumps(params)})

url = json.loads(resp.text)['links'][0]['generated']

local_filename = url.split('/')[-1]

# Stream the download to disk in 1 KiB chunks
r = requests.get(url, stream=True)
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)

print(local_filename)

SSLv3 no longer supported

I had SSLv3 disabled for HTTP for quite some time. In light of recent events, it is now also disabled for IMAP and SMTP. If you run into any trouble, let us know or update your clients.

Online iPython Notebook Viewer

We recently started using the slide function of iPython notebooks. Basically, it allows you to partition your notebook into different slides, slide fragments and subslides. Those can be exported to reveal.js presentations.

There is already a great viewer for notebooks online. To save some steps in exporting, converting and adding reveal.js, I took the idea and added a slide viewer. Anyone can use it to link to their slides on GitHub, Gist or any other place. We even support Basic Auth. Check it out at:


Access Docker container attributes in Ansible

Ansible is a great automation solution. I mainly use it to provision servers and launch Docker instances on them. Sometimes I need container attributes, like the PID or port, to configure Nginx or monitoring tools.

While the Ansible documentation gives you some hints, I didn't find it 100% obvious how to solve this. Basically, all your newly created containers end up in a list called docker_containers. It has the same structure as the output of docker inspect.

For the PID (the list entries follow docker inspect's structure):

{{ docker_containers[0]['State']['Pid'] }}

For the host port (here, the host port mapped to container port 80):

{{ docker_containers[0]['NetworkSettings']['Ports']['80/tcp'][0]['HostPort'] }}

So you could add a PID file for a container like this:

 - copy: dest="/var/run/{{ image_name }}.pid"
         content="{{ docker_containers[0]['State']['Pid'] }}"

Also read the full docs here.
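Putting it together, a playbook sketch (the nginx image, the 8080:80 port mapping and the image_name variable are placeholder assumptions; the docker module is the pre-2.0 Ansible module that sets the docker_containers fact):

```yaml
- hosts: docker_hosts
  vars:
    image_name: nginx
  tasks:
    # The docker module fills the docker_containers fact with
    # docker-inspect-style data for the containers it touched.
    - docker:
        image: nginx
        name: "{{ image_name }}"
        ports: "8080:80"
        state: started

    # Persist the container's PID for monitoring tools.
    - copy:
        dest: "/var/run/{{ image_name }}.pid"
        content: "{{ docker_containers[0]['State']['Pid'] }}"
```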