Fully Unroot Custom Android ROMs to run Banking Apps

Certain banking apps and e-wallets refuse to start on custom ROMs, and unrooting alone doesn't seem to help much. The issue appears to come from "insecure" settings in the ROM's properties file. To confirm this, you can use the RootBeer app; I suspect many apps use its library to check for signs of root access.

If your ROM ships with "insecure" property settings, you can edit them to fully unroot it. Where the properties file lives differs between ROMs: for some it sits inside the boot image, so you need to unpack and repack the image; for others, like LineageOS, the boot image only references a file on the /system partition, which you can edit directly.

These settings in default.prop can prevent banking apps from working:

ro.secure=0  # should be 1
ro.debuggable=1  # should be 0
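
You can check what your device currently reports with getprop (the live values are seeded from this file at boot):

$ adb shell getprop ro.secure
$ adb shell getprop ro.debuggable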

If you are using LineageOS (or some related ROM) you can sometimes edit those values directly on /system:

$ mount -o rw,remount /system
$ nano /system/etc/prop.default
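
After saving, remount the partition read-only again and reboot for the change to take effect:

$ mount -o ro,remount /system
$ reboot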

So much for the simple case. If the properties file sits inside the boot image, follow these steps to unpack and update it using magiskboot, a command-line tool that ships with Magisk.

1. Using the ADB tool on your computer, become root

$ adb root

2. Download Magisk, find magiskboot inside the archive and copy it to the phone

$ adb push Magisk-v19.3/arm/magiskboot /data/local/tmp

3. Shell into phone and find boot partition

$ adb shell
$ ls -l /dev/block/platform/soc/*/by-name/

4. Make magiskboot executable and dump the boot partition

cd /data/local/tmp
chmod 555 magiskboot
dd if=/dev/block/mmcblk0p21 of=boot.img

5. Unpack boot partition to current dir

mkdir repack; cd repack
../magiskboot unpack ../boot.img

6. Dump default.prop, make necessary edits and re-add to ramdisk

../magiskboot cpio ramdisk.cpio "extract default.prop default.prop"
nano default.prop  # make required edits and save.
../magiskboot cpio ramdisk.cpio "add 750 default.prop default.prop"

7. Repack boot image and write to partition

../magiskboot repack ../boot.img ../new-boot.img
dd if=new-boot.img of=/dev/block/mmcblk0p21
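
After rebooting, you can confirm that the new values are active (still inside adb shell, or via adb shell from your computer):

getprop ro.secure      # should now return 1
getprop ro.debuggable  # should now return 0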


Using a Yubikey to Secure SSH on macOS (Minimalist Version)

SSH is critical in most people's devops process, be it remote server logins or Git commits. After reading one too many stories about companies getting hacked that way, I decided to use Yubikeys to store my private SSH keys.

You can use either the PIV or the OpenPGP module for this purpose. I chose the former because it's better integrated and seems more reliable. There are a number of guides available online, but they all required some tinkering and small adjustments for macOS, so here is my own complete guide.

Install Dependencies

Start by installing two required packages from Homebrew

$ brew install yubico-piv-tool opensc

Next you need to copy the OpenSC PKCS11 driver to a new location, so SSH-Agent can pick it up. By default Homebrew will symlink it, which does not work.

$ rm /usr/local/lib/opensc-pkcs11.so
$ cp $(brew list opensc | grep lib/opensc-pkcs11.so) /usr/local/lib/opensc-pkcs11.so

As a last setup step, add the following line to ~/.ssh/config, so SSH will pick it up when authenticating to a remote server. You can add it at the top to apply it globally, or below a Host example.com block to apply it only to that host.

PKCS11Provider /usr/local/lib/opensc-pkcs11.so
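
For example, to limit it to a single host (example.com is just a placeholder):

Host example.com
    PKCS11Provider /usr/local/lib/opensc-pkcs11.so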

Generate Private Keys and Store on Yubikey

You could generate the private key directly on the Yubikey, so it never leaves the device. This is great for security, but it also means you can't make a backup or copy the key to a second Yubikey. For that reason we will securely generate the private SSH key on a RAM disk and then copy it to two Yubikeys.

Start by creating a RAM disk and going into the mount point

$ diskutil erasevolume HFS+ RAMDisk `hdiutil attach -nomount ram://2048`
$ cd /Volumes/RAMDisk

Next generate a new private RSA key (only this specific format and length is supported), as well as a public key and certificate in the correct formats:

$ ssh-keygen -m PEM -t rsa -b 2048 -o -a 100 -C yubikey -f yubikey
$ ssh-keygen -e -f ./yubikey.pub -m PKCS8 > yubikey.pub.pkcs8
$ yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a -S "/CN=SSH key/" --valid-days=3650 -i yubikey.pub.pkcs8 -o cert.pem

You should now see four files on your RAM disk. The commands below copy the private key to a Yubikey and also add the self-signed certificate. The certificate import is mostly there to comply with the PIV standard and isn't really needed for the SSH login itself. You can repeat these steps for every additional Yubikey you want to seed with this particular private SSH key.

You can customize the touch and PIN policies to your liking. The command below requires a touch whenever the key is used.

$ yubico-piv-tool -s 9a --pin-policy=once --touch-policy=always -a import-key -i yubikey
$ yubico-piv-tool -a verify -a import-certificate -s 9a -i cert.pem

Using the Yubikey for SSH Logins

Now you are ready to log in to a remote server using the private SSH key stored on the Yubikey. To test the new setup, add the public key to ~/.ssh/authorized_keys or any other place appropriate for the service you are using. You can view the public key with either of these commands, even after removing the RAM disk.

$ cat ./yubikey.pub  # public key saved on RAM disk
$ ssh-keygen -D /usr/local/lib/opensc-pkcs11.so  # dump directly from Yubikey

After adding the public key to a test server, log in like this:

$ ssh -v -I /usr/local/lib/opensc-pkcs11.so user@example.com

If it works, you will see lines like these, and the Yubikey will start flashing to signal that it's waiting for a touch.

debug1: Offering public key: /usr/local/lib/opensc-pkcs11.so RSA SHA256:aeq9rAsbxxxxxxxFWG4 token agent
debug1: Server accepts key: /usr/local/lib/opensc-pkcs11.so RSA SHA256:aeq9rAsbxxxxxxFWG4 token agent

For convenience, you can add the hardware key to ssh-agent to avoid entering the PIN all the time. The first command loads the key, the second unloads it. The agent entry even survives prolonged hibernation; if someone removes the key or restarts the machine, the PIN is required again.

$ ssh-add -s /usr/local/lib/opensc-pkcs11.so  # add key
$ ssh-add -e /usr/local/lib/opensc-pkcs11.so  # remove key
$ ssh-add -L  # list available keys with public key

Now you should be ready to use the new, secure SSH key in production. Be sure to keep a backup on a second Yubikey in a safe place and unmount the RAM disk after validating that everything works.

Here are some usage ideas. You can use the key anywhere SSH is used:

  • SSH login to important production servers
  • Secure SSH proxy to a bastion inside a private network
  • Secure backups with BorgBase.com. You could set all server keys to append-only and use the Yubikey-backed key with full access for pruning.
  • Login to a Git code repo. Be sure to use SSH, not HTTPS.


Recognize Emails with Spoofed Sender

Recently I noticed a rise in spam emails with spoofed From headers. This is unusual because almost all mail servers require users to log in before sending email. Below is a typical example, which was already flagged as spam.

As you can see, the sender and recipient addresses appear identical in Apple Mail. The content implies that the user's own mailbox was hacked.

When checking the server logs, I quickly noticed that the email was actually sent from a different sender address than the one shown here: katsu@kobatake.e-arc.jp, to be precise. But why doesn't this address show up as the From address?

It turns out that it's possible to pass an arbitrary From header in the message itself, even though the SMTP MAIL FROM command said something different. While this is certainly suspicious, it's common practice for email services like Amazon SES or SendGrid.

If you want to know the actual sender, you need to look at the Return-Path field, also known as the envelope sender or bounce address. It contains the sender address that was actually used for authentication. In my opinion it should also be displayed in the email client whenever it differs from the From header. Below is the example's header with the relevant fields (Return-Path, From and X-Sender).

Return-Path: katsu@kobatake.e-arc.jp
Received: from kobatake.e-arc.jp (kobatake.e-arc.jp [122.1.203.242])
	by mail.snapdragon.cc (Postfix) with ESMTPS id DADE6180385A
	for ; Sat, 23 Mar 2019 13:37:34 +0800 (+08)
Received: from [70.24.89.189.micron.com.br] (unknown [189.89.24.70])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate)
	by kobatake.e-arc.jp (Postfix) with ESMTPSA id 33DBFF30D6DB
	for manu@***; Sat, 23 Mar 2019 14:37:13 +0900 (JST)
To: manu@***
X-aid: 0776676839
List-ID: 10f2hkdwzncc5z0xhusfi99g.iud3kqvly5b6il5czck95ezocwxr8kf5cdj
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120824
 Thunderbird/15.0
X-CSA-Complaints: complaints@kobatake.e-arc.jp
List-Subscribe: subscribe@kobatake.e-arc.jp
Date: Sat, 23 Mar 2019 06:37:16 +0100
Feedback-ID: 58286146:06631151.375932:us93:ve
Subject: ***** SPAM 46.3 ***** manu
Abuse-Reports-To: abuse@kobatake.e-arc.jp
Message-ID: 
From: manu@***
X-Sender: katsu@kobatake.e-arc.jp
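
A quick way to spot the mismatch in a saved message is to compare these fields directly. A rough sketch, assuming you exported the raw message as message.eml:

$ grep -iE '^(Return-Path|From|X-Sender):' message.eml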

Now, what can you do to prevent this kind of spamming? SpamAssassin already has a rule called HEADER_FROM_DIFFERENT_DOMAINS that triggers in these cases. Sadly, you can't give this rule a high score, since legitimate services send on behalf of other domains all the time.

So the only option left is to educate your users about the Return-Path header and tell them to check it whenever an email is important.

Hosting Service for BorgBackup Repos – Free 100GB for Beta Testers

I have blogged about how great Borg is for backing up servers and your MacBook while on the go. There just wasn't a good hosting service for stashing your backup repos that took full advantage of all of Borg's features. Issues I saw with existing services, like Hetzner's Storagebox and rsync.net:

  • Only a single user per account. If one machine gets compromised, the attacker can access all your backups.
  • No support for append-only mode. An attacker could remove old backups.
  • Quotas are per account, not per repo. If a backup goes wrong on one machine, it can fill up your whole account and stop the other backups.

When looking at other "backup" solutions, like S3, B2, Dropbox or Google Drive, you will find those issues:

  • No compression or deduplication. You pay for the full size.
  • A sync service is no real backup because broken or cryptolocked files will be synced as well and the good copies lost.
  • Object storage services are great for many things, but there is no local file cache. So during each run the existing metadata is downloaded. This can be expensive when the provider charges you for API calls (S3).
  • No easy way to encrypt. With GDPR you are responsible for your data. An unencrypted cloud backup is a huge risk for a company.

To solve these problems I built BorgBase.com, the first storage service dedicated to Borg repos. It addresses the issues above and lets admins easily separate backups into different repos. Other features include:

  • Full encryption if you choose to use a key or password when setting up the repo. I will never see the files in your backup.
  • Compression and deduplication. Borg supports a range of compression algorithms. You can choose any one.
  • Economical. Only the compressed and deduplicated data counts against your total quota. So you get roughly 3x more mileage from each MB of storage.
  • Simple admin interface. Quickly add repos and SSH keys. Manage quotas and view current usage.
  • Monitoring. I want to be notified if backups stop working. Preferably before any data is lost. That's why you can set a monitoring interval and will get a notification if no backups are done during that time.
  • Configuration wizard. I always liked GitHub's copy-and-paste commands for connecting your local repo, so I added the same for Borg. The wizard lets you choose a repo and displays the relevant commands or a full Borgmatic file.

If you have experienced one or more of the above problems, I'd be happy to have you on board as a beta tester. Just leave your email on BorgBase.com and I'll send you a registration link early next week. The full service (100GB of storage and 5 repos) will be free during beta testing, which will last until mid-2019 or so.

Local and remote backups for macOS and Linux using BorgBackup

Updates:

  • Oct 2018: there is now a more detailed guide available for macOS.
  • Sept 2018: there is now a hosting solution for Borg repos. See this post

When I recently switched my MacBook, I got very frustrated with Time Machine. I had used it for occasional local backups of my home folder and was planning to use it to move my data from the old to the new machine.

Unfortunately, Migration Assistant failed to even find my Time Machine drive, and I ended up simply rsyncing everything from the Time Machine backup to a new user folder. After that was done, I added a new user in macOS and just ran a chmod over the whole folder.

After this experience it's clear that you might as well do your backups with any tool that backs up files and save yourself the trouble of Time Machine entirely.

The best tool I found for the job is BorgBackup. It supports compression, deduplication, remote backups and many kinds of filters. Doing an incremental backup of 1 million files takes about 5 minutes locally. Here are some rough steps to get you started:

  1. Install Borg via Homebrew (brew cask install borgbackup) or Apt (apt install borgbackup). If you plan on doing remote backups, Borg needs to be installed on the server as well.
  2. Initialize a new backup. For local backups:
    borg init --encryption=none /Volumes/external-disk/Backups/my-machine

    (I'm not using encryption here because the drive is encrypted.) For remote backups it's about the same:

    borg init --encryption=none my-backup-host.net:/backups/my-machine
  3. Next create a file called ~/.borg-filter. It lists the files and folders you do NOT want to back up. An example:
    *.ab
    */.DS_Store
    */.tox
    /Users/manu/.cocoapods
    /Users/manu/.Trash
    /Users/manu/.pyenv/versions
    /Users/manu/.gem
    /Users/manu/.npm
    /Users/manu/.cpanm

    This excludes some folders you can easily recreate.

  4. Lastly, prepare a backup command. To back up specific important folders to a remote server, I use something like:
    function borg-backup() {
      NOW=$(date +"%Y-%m-%d_%H-%M")
      borg create -v --stats -C zlib --list --filter=AM --exclude-from ~/.borg-filter $BORG_REPO::$NOW \
        ~/Desktop \
        ~/Documents \
        ~/Pictures \
        ~/Library/Fonts
    
      borg prune -v --list $BORG_REPO --keep-daily=3 --keep-weekly=4 --keep-monthly=12
    }

    This backs up my Desktop, Documents, Pictures and Fonts to a new time-stamped snapshot. The borg prune command rotates and deletes old backups. The variable $BORG_REPO holds the repo location chosen in step 2; a short sketch for checking and restoring snapshots follows after this list.

    Also be sure to read the official documentation to tune all options to your needs.
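
To verify that snapshots are being created and to get files back, the matching commands look roughly like this (the repo location and archive name are examples):

export BORG_REPO=my-backup-host.net:/backups/my-machine
borg list                                              # show all snapshots in the repo
borg extract ::2018-10-01_10-00 Users/manu/Documents   # restore one folder into the current directory

Borg stores paths without the leading slash, so pass them to extract the same way.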

Ansible Playbook to set up Tensorflow and Keras on Ubuntu 16.04

Virtual machines with GPUs are expensive. The cheapest I found so far are from Paperspace. Nevertheless, it can be useful to be able to quickly reproduce your setup without the disadvantages of snapshots or golden-master images.

If you manage your hosts with Ansible, you can just run the playbook below against a fresh machine to set it up from scratch. I use some additional roles to set up my home folder and firewall rules.

Before running it, you need to mirror the cuDNN installer package, since Nvidia doesn't allow direct downloads. That part is marked in line 11.
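
Mirroring just means copying the installer you downloaded from the Nvidia developer site to a server you control and pointing line 11 of the playbook at that URL. Roughly, with a placeholder host and path:

$ scp cudnn-*.tgz user@files.example.com:/var/www/downloads/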

Update all macOS Apps with a Single Command

Updates are important. While the situation is not as good as on Linux, you can use different package managers to manage all installed software and easily keep applications updated. Here is the command I use. It requires Homebrew, homebrew-cask-upgrade and mas.

function update-all {
    # update cli homebrew
    brew update
    brew upgrade
    brew prune
    brew cleanup

    # Homebrew cask (via https://github.com/buo/homebrew-cask-upgrade)
    brew cu -a -y --cleanup
    brew cleanup --force -s && rm -rf $(brew --cache)

    # Node
    npm update -g

    # Apple App store
    mas upgrade
    softwareupdate --install --all
}


Find sites vulnerable to WordPress Content Injection Vulnerability

WordPress' update cycle is reaching the speed of Windows XP's. Even Google is sending out warnings, urging site owners to update. In my case the warnings were not accurate, but there are still many vulnerable sites out there.

One could – for example – use Nerdydata to search the internet's source code for vulnerable WP versions. A simple search across their "Popular sites" dataset reveals close to 300 matches.

Regex used: ver=4.7(\.1)?' 

Using the same trick, you can also identify vulnerable WP installs you are managing yourself. Here is a gist with a short Python script; a minimal shell version is sketched below.
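
Not the gist itself, but a quick shell version of the same check could look like this (the site list is a placeholder; a missing match does not prove a site is safe):

for site in example.com example.org; do
  # unpatched installs typically expose ver=4.7 or ver=4.7.1 in their asset URLs
  match=$(curl -s "https://$site/" | grep -oE 'ver=4\.7(\.1)?' | head -n 1)
  echo "$site: ${match:-no vulnerable version string found}"
done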

Optimize Spamassassin Detection Rate

Email is a terrible way to communicate and should be avoided where possible. Unfortunately it is also the lowest common denominator on the web and will continue to be for the near future.

In the early days of the internet it was easy to run your own mailserver. Due to the absurd quantity of spam this task got increasingly harder and many tech-savvy people gave up and switched to Gmail or other services. This is a pity because a decentralized email infrastructure is harder to surveil, subpoena or shut down. I encourage everyone to run their own mail service if possible.

In this guide I will summarize the steps needed to get an effective SpamAssassin (SA) setup.

Cheaply retrieve data from Amazon AWS Glacier

When it launched, Amazon Glacier was applauded for providing a super-cheap long-term storage solution. While there are no surprises when uploading and storing files, retrieving them can get expensive. The pricing reflects the fact that Amazon needs to retrieve your files from tape, which is expensive and takes a long time. Several users reported high charges after retrieving their backups. In Amazon's defense, they published a very detailed FAQ on this topic.

The key to getting your files back on the cheap is time. You can retrieve 5% of your total storage for free each month, but that allowance is calculated per hour rather than per month or day. Example: you keep 500GB in Glacier. 500GB * 5% / 30 days / 24 hours ≈ 35MB per hour.

That's great to know, but how can you keep retrieving 35MB every hour for days or months without doing it manually? If you're on OS X or Linux you can use mt-aws-glacier, a Perl script to track and retrieve your Glacier files. Using mtglacier for slow retrieval isn't straightforward. There is an open issue to improve it, but for now let's work with what's available.

The tool has two commands to restore files: one initiates the retrieval job on Amazon's side; after the job completes (which takes about four hours), the second downloads the files. With that in mind, we tell mtglacier to request only a few files and then sleep for an hour. There is no way to limit the total size requested, but the number of files can be limited, so as long as you have many small files this works great.

# request a few files each hour, then download any jobs that have completed
while true
do
  mtglacier restore --config vault.cfg --max-number-of-files 3  # start retrieval jobs for up to 3 files
  mtglacier restore-completed --config vault.cfg                # download files whose jobs are done
  sleep 3600                                                    # wait one hour
done

If your hourly allowance is 35MB and your average file size is 10MB, three files per hour is about right and leaves some margin for error.

Hope this helps.

Yahoo: Email not accepted for policy reasons

Yahoo failed as an internet company for a reason. Try sending an email with a link to a bank website, e.g. CIMB (popular across Asia):

http://www.cimb-bizchannel.com.my/index.php?ch=srvpack

Your email will be rejected by Yahoo. Just awesome...

554 Message not allowed - [PH01] Email not accepted for policy reasons

Workaround: Use a shortlink to hide your URL. E.g. http://goo.gl/tPb19A. Now your phishing emails will arrive safely. 😉

Download Uber ride history to Python Pandas

With Uber rides this cheap and self-driving cars around the corner, I doubt future generations will own their own cars, except for extreme use cases like commuting from the countryside.

Personally, I spent EUR 91 on Uber this year (2 months) and it got me 260 km. That's 0.35 EUR/km.

There is an API to download your rides, but getting receipts/prices didn't work for me, so I had to scrape them from the website directly. Hope Uber allows getting prices from their API at some point.

This is the code I used. The Uber-logins aren't automated yet.

New Release of invoice2data

Thanks to some awesome contributors, there is a new release for invoice2data. This Python package allows you to get structured data from PDF invoices. Major enhancements:

  • powerful YAML-based template format for new invoice issuers
  • improved date parsing thanks to dateparser
  • improved PDF conversion thanks to a new feature in xpdf
  • better testing and CI
  • option to add multiple keywords and regexes to each field
  • option to define currency and date format (day or month first?)

All details and downloads are on GitHub.
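
If you haven't tried it yet, basic usage is roughly this (assuming pip and the bundled command-line entry point):

$ pip install invoice2data
$ invoice2data my-invoice.pdf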

Unit testing for Jupyter (IPython) notebooks

At Quantego, we do most of the high-level work that supports energy analysts in Jupyter notebooks. This allows us to pull several Java and Python packages together into a highly productive work environment.

Sample notebooks are hosted on GitHub and distributed with our Docker images. Of course we prefer our sample notebooks to work when people run them. They also uncover potential problems, since they run at a very high level and thus exercise almost all available features.

If you have a similar setup – for example in a data analytics-driven environment – the following could work for you as well:

  1. Make sure your notebooks run correctly with "Run All". While experimenting you may try different things and run cells out of order, but for automated testing to work, all cells should run in sequence.
  2. Test locally with
    jupyter nbconvert --to=html --ExecutePreprocessor.enabled=True my-notebook.ipynb

    This will convert your notebook to HTML. We're not interested in the output, only in potential errors during execution. This only works with Jupyter and IPython >= 4; previous versions simply ignore errors.

  3. Next, you can run the same command in an isolated Docker container or as a CI step (a loop over all sample notebooks is sketched at the end of this post):
    docker run my_container /bin/sh -c \
      "/usr/local/bin/jupyter nbconvert \
          --to=html --ExecutePreprocessor.enabled=True \
          --ExecutePreprocessor.timeout=3600 \
          samples/my-sample.ipynb"

    A full working example for CircleCI can be found in our sample-repo.
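
If you have several sample notebooks, the same check can be wrapped in a small loop that fails the build on the first broken notebook (a sketch, not our exact CI setup):

set -e
for nb in samples/*.ipynb; do
  jupyter nbconvert --to=html \
    --ExecutePreprocessor.enabled=True \
    --ExecutePreprocessor.timeout=3600 \
    "$nb"
done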