Hosting Service for BorgBackup Repos – Free 100GB for Beta Testers

I have blogged about how great Borg is for backing up servers and your MacBook while on the go. There just wasn’t a good hosting service to stash your backup repos that took full advantage of all Borg features. These are the issues I saw with existing services, like Hetzner’s Storage Box and rsync.net:

  • Only a single user per account. If one machine gets compromised, it can access all your backups.
  • No support for append-only mode. An attacker could remove old backups.
  • Quotas are per account, not per repo. If a backup goes wrong on one machine, it will fill up your whole account and stop other backups.

When looking at other “backup” solutions, like S3, B2, Dropbox or Google Drive, you will find these issues:

  • No compression or deduplication. You pay for the full size.
  • A sync service is not a real backup, because broken or cryptolocked files will be synced as well and the good copies lost.
  • Object storage services are great for many things, but there is no local file cache. So during each run the existing metadata is downloaded. This can be expensive when the provider charges you for API calls (S3).
  • No easy way to encrypt. Under the GDPR you are responsible for your data. An unencrypted cloud backup is a huge risk for a company.

To solve these problems I built BorgBase.com, the first storage service dedicated to Borg repos. It addresses all of the issues above and lets admins easily separate backups into different repos. Other features:

  • Full encryption if you choose to use a key or password when setting up the repo. I will never see the files in your backup.
  • Compression and deduplication. Borg supports a range of compression algorithms. You can choose any one.
  • Economical. Only the compressed and deduplicated data counts against your total quota. So you get roughly 3x more mileage from each MB of storage.
  • Simple admin interface. Quickly add repos and SSH keys. Manage quotas and view current usage.
  • Monitoring. I want to be notified if backups stop working. Preferably before any data is lost. That’s why you can set a monitoring interval and will get a notification if no backups are done during that time.
  • Configuration wizard. I always liked GitHub’s copy-and-paste commands for connecting your local repo, so I added the same for Borg. The wizard lets you choose a repo and displays the relevant commands or a full Borgmatic file.

If you have experienced one or more of the above problems, I’d be happy to have you on board as a beta tester. Just leave your email on BorgBase.com and I’ll send you a registration link early next week. The full service (100 GB of storage and 5 repos) will be free during beta testing, which will last until mid-2019 or so.

Local and remote backups for macOS and Linux using BorgBackup

Updates:

  • Oct 2018: There is now a more detailed guide available for macOS.
  • Sept 2018: There is now a hosting solution for Borg repos. See this post.

When I recently switched my MacBook, I got very frustrated with Time Machine. I had used it for occasional local backups of my home folder and was planning to move my data from the old to the new machine.

Unfortunately, the Migration Assistant failed to even find my Time Machine drive, and I ended up simply rsyncing everything from the Time Machine backup to a new user folder. After that was done, I added a new user in macOS and just ran a chmod over the whole folder to fix permissions.

After this experience it’s clear that you might as well do your backups with any tool that backs up files and save yourself the trouble of Time Machine completely.

The best tool I found for the job is BorgBackup. It supports compression, deduplication, remote backups and many kinds of filters. An incremental backup of 1 million files takes about 5 minutes locally. Here are some rough steps to get you started:

  1. Install Borg via Homebrew ( brew cask install borgbackup ) or Apt ( apt install borgbackup ). If you plan on doing remote backups, Borg needs to be installed on the server as well.
  2. Initialize a new backup. For local backups:
    borg init --encryption=none /Volumes/external-disk/Backups/my-machine  (I’m not using encryption here because the drive is encrypted.)

    For remote backups it’s about the same:
    borg init --encryption=none my-backup-host.net:/backups/my-machine 
  3. Next create a file called  ~/.borg-filter . This lists the files and folders you do NOT want to back up (an example is shown after this list).

    The example patterns exclude some folders you can easily recreate.
  4. Last, prepare a backup command. To back up specific important folders to a remote server, I use something like the commands shown after this list.

    This backs up my Desktop, Documents, Pictures and Fonts to a new time-stamped snapshot. The last command rotates and deletes old backups. The variable  $BORG_BACKUP  holds the repo location chosen in the previous step.

    Also be sure to read the official documentation to tune all options to your needs.
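
Roughly what these two pieces can look like. The exclude patterns, paths and pruning policy below are my own examples, not the original post’s, so adjust them to your setup:

    # ~/.borg-filter: one exclude pattern per line (example content)
    */.Trash
    */Library/Caches
    */node_modules
    *.pyc

    # Backup commands, assuming $BORG_BACKUP holds the repo location from step 2
    export BORG_BACKUP=my-backup-host.net:/backups/my-machine
    borg create --compression lz4 --exclude-from ~/.borg-filter \
        "$BORG_BACKUP::$(hostname)-$(date +%Y-%m-%d)" \
        ~/Desktop ~/Documents ~/Pictures ~/Library/Fonts
    # Rotate snapshots: keep 7 daily, 4 weekly and 6 monthly archives
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$BORG_BACKUP"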

Ansible Playbook to set up Tensorflow and Keras on Ubuntu 16.04

Virtual machines with GPUs are expensive. The cheapest I found so far are from Paperspace. Nevertheless, it can be useful to quickly reproduce your setup without the disadvantages of snapshots or golden-master images.

When you manage your hosts with Ansible, you can just run the playbook below against them to set everything up from scratch. I use some additional roles to set up my home folder and firewall rules.

Before running it, you need to mirror the cuDNN installer package, since Nvidia doesn’t allow direct downloads. That part is marked in line 11.
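
As a rough sketch of what such a playbook can look like (this is not the original playbook; package versions, paths and the cuDNN mirror URL are placeholders, and the CUDA driver setup is omitted):

    - hosts: gpu
      become: true
      tasks:
        - name: Install build tools and Python
          apt:
            name: [build-essential, python3-dev, python3-pip]
            state: present
            update_cache: true

        - name: Fetch cuDNN from your own mirror (Nvidia requires a login for direct downloads)
          get_url:
            url: https://mirror.example.com/cudnn/libcudnn6_6.0.21-1+cuda8.0_amd64.deb
            dest: /tmp/libcudnn.deb

        - name: Install the cuDNN package
          apt:
            deb: /tmp/libcudnn.deb

        - name: Install TensorFlow (GPU build) and Keras
          pip:
            name: [tensorflow-gpu, keras]
            executable: pip3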

Update all macOS Apps with a Single Command

Updates are important. While not as convenient as on Linux, you can use different package managers to manage all installed software and easily keep applications updated. Here is the command I use. It requires homebrew, homebrew-cask-upgrade and mas.

 
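Something along these lines, assuming homebrew-cask-upgrade’s  brew cu  subcommand and mas’s  mas upgrade  (check their READMEs for the current flags):

    brew update && brew upgrade && brew cu --all --yes --cleanup && mas upgrade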

Find sites vulnerable to WordPress Content Injection Vulnerability

WordPress’ update cycle is reaching the speed of Windows XP. Even Google is sending out warnings, urging site owners to update. In my case the warnings were not accurate, but there are still many vulnerable sites out there.

One could – for example – use Nerdydata to search the internet’s source code for vulnerable WP versions. A simple search across their “Popular sites” dataset reveals close to 300 matches.

Regex used:  ver=4.7(\.1)?' 

Using the same trick, you could also identify vulnerable WP installs you are managing. Here is a gist with a short Python script.
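
A minimal sketch along the same lines (not the linked gist; the site list is a placeholder) fetches each site and greps its source for the vulnerable version string:

    import re
    import requests

    # the post’s regex, relaxed to also match double-quoted attributes
    VULNERABLE = re.compile(r"ver=4\.7(\.1)?['\"]")

    sites = ["https://example.com", "https://blog.example.org"]  # placeholder list

    for url in sites:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException as exc:
            print(url, "request failed:", exc)
            continue
        print(url, "VULNERABLE" if VULNERABLE.search(html) else "ok")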

Optimize Spamassassin Detection Rate

Email is a terrible way to communicate and should be avoided where possible. Unfortunately it is also the lowest common denominator on the web and will continue to be for the near future.

In the early days of the internet it was easy to run your own mail server. Due to the absurd quantity of spam, this task has become increasingly hard, and many tech-savvy people gave up and switched to Gmail or other services. This is a pity, because a decentralized email infrastructure is harder to surveil, subpoena or shut down. I encourage everyone to run their own mail service if possible.

In this guide I will summarize the steps needed to get an effective SpamAssassin (SA) setup.

Cheaply retrieve data from Amazon AWS Glacier

When it launched, Amazon Glacier was applauded for providing a super-cheap long-term storage solution. While there are no surprises when uploading and storing files, retrieving them can get expensive. The pricing reflects the fact that Amazon needs to retrieve your files from tape, which is slow and expensive. Several users reported high charges after retrieving their backups. In its defence, Amazon published a very detailed FAQ on this topic.

The key to getting your files back on the cheap is time. You can retrieve 5% of your total storage for free each month, but that allowance is calculated per hour, rather than per month or day. Example: you keep 500 GB in Glacier. 500 GB * 5% / 30 / 24 ≈ 36 MB/hour.

That’s great to know, but how can you keep retrieving 36 MB per hour for days or months without doing it manually? If you’re on OS X or Linux you can use mt-aws-glacier. It’s a Perl script to track and retrieve your Glacier files. Using mtglacier for slow retrieval isn’t straightforward. There is an issue to improve it, but for now let’s work with what’s available.

The tool has two commands to restore files. One initiates the job on Amazon’s side. After the job completes (takes about 4h), the files can be downloaded. With that in mind, we need to tell mtglacier to only request a few files and then sleep for another hour. There is no way to limit the file size to be requested, but the number of files can be limited. As long as you have many small files, this works great.

If your hourly allowance is 35MB and your average file size is 10MB, requesting 3 files per hour is about right, leaving some margin for error.
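
As a sketch, this can be wrapped in a simple shell loop. The vault name, paths and journal file below are placeholders, and the option names should be checked against the mt-aws-glacier docs for your version:

    while true; do
        # ask Glacier to stage at most 3 more files (jobs take about 4 hours)
        mtglacier restore --config glacier.cfg --vault my-backup \
            --dir /restore --journal journal.log --max-number-of-files 3
        # download whatever has finished staging in the meantime
        mtglacier restore-completed --config glacier.cfg --vault my-backup \
            --dir /restore --journal journal.log
        sleep 3600
    done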

Hope this helps.

Yahoo: Email not accepted for policy reasons

Yahoo failed as an internet company for a reason. Try sending an email with a link to a bank website, e.g. CIMB (popular across Asia):

http://www.cimb-bizchannel.com.my/index.php?ch=srvpack

Your email will be rejected by Yahoo. Just awesome…

Workaround: Use a shortlink to hide your URL. E.g. http://goo.gl/tPb19A. Now your phishing emails will arrive safely. 😉

Download Uber ride history to Python Pandas

With Uber rides this cheap and self-driving cars around the corner, I doubt that future generations will have their own cars, except for extreme use cases like commuting from the countryside.

Personally, I spent EUR 91 on Uber this year (2 months) and it got me 260 km. That’s about 0.35 EUR/km.

There is an API to download your rides, but getting receipts/prices didn’t work for me, so I had to scrape them from the website directly. I hope Uber allows getting prices from their API at some point.

This is the code I used; the Uber login isn’t automated yet.
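
The scraping itself depends on Uber’s site layout, so here is only a rough sketch of the API half, pulling the ride history into a pandas DataFrame. The endpoint path, field names and token handling are my assumptions, so check Uber’s current API documentation:

    import pandas as pd
    import requests

    TOKEN = "oauth-token-with-history-scope"  # placeholder

    rides, offset = [], 0
    while True:
        resp = requests.get(
            "https://api.uber.com/v1.2/history",          # assumed endpoint
            headers={"Authorization": "Bearer " + TOKEN},
            params={"limit": 50, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        rides.extend(page["history"])
        offset += len(page["history"])
        if not page["history"] or offset >= page["count"]:
            break

    df = pd.DataFrame(rides)
    if not df.empty:
        df["start_time"] = pd.to_datetime(df["start_time"], unit="s")
        print(df[["start_time", "distance", "status"]].head())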

New Release of invoice2data

Thanks to some awesome contributors, there is a new release of invoice2data. This Python package allows you to get structured data from PDF invoices. Major enhancements:

  • Powerful YAML-based template format for new invoice issuers (see the sketch after this list).
  • Improved date parsing thanks to dateparser.
  • Improved PDF conversion thanks to a new feature in xpdf.
  • Better testing and CI.
  • Option to add multiple keywords and regexes per field.
  • Option to define currency and date format (day or month first).
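
As a rough sketch of what a template can look like (the issuer, regexes and formats here are made up; check the templates shipped with the package for the exact syntax):

    issuer: Example Telecom Ltd.
    keywords:
      - Example Telecom
      - www.example-telecom.com
    fields:
      amount: Total due:\s+(\d+\.\d\d)
      date: Invoice date:\s+(\d{1,2}\.\d{1,2}\.\d{4})
      invoice_number: Invoice no\.\s+(\w+)
    options:
      currency: EUR
      date_formats:
        - '%d.%m.%Y'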

All details and downloads are on GitHub.

Unit testing for Jupyter (IPython) notebooks

At Quantego, we do most high-level work that supports energy analysts in Jupyter Notebooks. This allows us to pull several Java and Python packages together for a highly productive work environment.

Sample notebooks are hosted on GitHub and distributed with our Docker images. Of course we prefer our sample notebooks to work when people run them. They also uncover potential problems, since they run at a very high level and thus exercise almost all available features.

If you have a similar setup – for example in a data analytics-driven environment – the following could work for you as well:

  1. Make sure your notebooks run correctly with “Run All”. When testing you may try different things and run cells out of order; for automatic testing to work, they should all run in sequence.
  2. Test locally with the nbconvert command shown after this list.

    This will convert your notebooks to HTML. We’re not interested in the output, only in potential errors during conversion. This only works with Jupyter and IPython >= 4. Previous versions simply ignore errors.
  3. Next you could just run the same command in an isolated Docker container or in a CI step (the second command in the sketch after this list).

    A full working example for CircleCI can be found in our sample-repo.
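
The two commands look roughly like this; the image name and timeout value are placeholders:

    # run all notebooks locally; an error in any cell aborts the conversion
    jupyter nbconvert --to html --execute --ExecutePreprocessor.timeout=300 notebooks/*.ipynb

    # the same inside a throwaway Docker container
    docker run --rm -v "$PWD/notebooks:/notebooks" your-image \
        jupyter nbconvert --to html --execute /notebooks/*.ipynb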

Shell Function to Remove all Metadata from PDF

A handy function to remove all metadata from a PDF file. When done, it shows the remaining metadata for inspection. It needs pdftk and exiftool installed.

Combines commands from here and here. Good job, guys.
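
The function is roughly as follows; the name is my own, and the pdftk pass simply rewrites the file so the deleted metadata is really gone:

    # Remove metadata from a PDF, then show what is left. Needs exiftool and pdftk.
    pdf_strip_meta() {
        local in="$1"
        local tmp="${in%.pdf}-stripped.pdf"
        exiftool -all:all= -overwrite_original "$in"   # clear XMP and Info metadata
        pdftk "$in" output "$tmp" compress             # rewrite to drop orphaned objects
        mv "$tmp" "$in"
        exiftool "$in"                                 # inspect remaining metadata
    }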

After adding this snippet to ~/.profile or pasting it into the shell, you can just run:
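
    pdf_strip_meta some-document.pdf   # the filename is a placeholder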

Incremental FTP backups

If you happen to have only FTP access to a server or account (cPanel) you’re looking after, LFTP is an efficient tool for keeping incremental backups. The script below makes hard links of the previous backup and updates it, copying and storing only changed files.
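
A sketch of how this can be wired together, assuming lftp’s  mirror  command and its  xfer:use-temp-file  setting (host, credentials and paths are placeholders):

    #!/bin/sh
    # Incremental FTP backup: hard-link the previous snapshot, then mirror changes into it.
    BACKUP_ROOT=/backups/example.com
    NEW=$BACKUP_ROOT/$(date +%F)
    LATEST=$BACKUP_ROOT/latest

    # start the new snapshot as hard links to the previous one, if there is one
    PREV=$(readlink -f "$LATEST" 2>/dev/null)
    if [ -d "$PREV" ]; then cp -al "$PREV" "$NEW"; else mkdir -p "$NEW"; fi

    # changed files are downloaded to a temporary name first, so the hard-linked
    # copies in older snapshots are not overwritten in place
    lftp -u backupuser,PASSWORD ftp.example.com -e "
        set xfer:use-temp-file yes;
        mirror --only-newer --delete /public_html $NEW;
        quit"

    # point "latest" at the snapshot we just took
    ln -sfn "$NEW" "$LATEST"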

Extend Pandas DataFrame with custom functions and attributes

At Quantego.com we love working with pandas DataFrames. We use them to store and analyze results from simulation runs. On top of our data matrix and a multi-level index, we also need to accommodate custom plotting functions and attributes from the previous simulation run.

Subclassing  pandas.DataFrame  for this task was a no-brainer. The new version 0.16.1 (to be released in the next few days) includes some fixes that make working with subclasses of complex data frames (DF) easier. Here is an example of what can be done. First define two new classes for  pandas.Series  (a single-column DF) and  pandas.DataFrame . You can define new functions or attributes as needed.
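
Something along these lines; the class names, the  run_id  attribute and the plotting helper are my own examples:

    import pandas as pd

    class ResultSeries(pd.Series):
        # slicing a column out of a ResultFrame returns this class
        @property
        def _constructor(self):
            return ResultSeries

        @property
        def _constructor_expanddim(self):
            return ResultFrame

    class ResultFrame(pd.DataFrame):
        # attributes listed here survive slicing and most operations
        _metadata = ['run_id']

        @property
        def _constructor(self):
            return ResultFrame

        @property
        def _constructor_sliced(self):
            return ResultSeries

        def plot_total(self, **kwargs):
            # custom helper: via self we can call any regular DataFrame method
            return self.sum(axis=1).plot(**kwargs)

    df = ResultFrame({'solar': [1, 2], 'wind': [3, 4]})
    df.run_id = 42
    print(type(df['solar']))   # <class '__main__.ResultSeries'>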

Notice  _constructor  and  _constructor_sliced . They make sure you get the correct class back when slicing the DF.

Via  self  you have convenient access to all pandas functions and can even roll your own.