Tuesday, September 13, 2011

Benchmarking ZFS, XFS, ext4 and btrfs with PostgreSQL 9.0 and 9.1

A while ago I posted the results of benchmarking PostgreSQL against various filesystems, with various mount options.
Today I've run ZFS up against the PostgreSQL benchmark, along with btrfs, ext4 and xfs.

I ran the tests on Ubuntu 11.04 with Pg 9.0, and on the Ubuntu 11.10 beta with Pg 9.1. In the following results, the first combo is called "natty" and the second "oneiric".

The latter combination showed a considerable performance improvement overall, although I didn't investigate whether this was due to kernel improvements, Postgres improvements, or virtio improvements. (I was running these tests in a virtual machine, albeit one backed by striped RAID disks with caching disabled in KVM.)

The results are measured in transactions-per-second, with higher numbers being better.

ext4 (data=writeback,relatime):
    natty: 248
  oneiric: 297

ext4 (data=writeback,relatime,nobarrier):
    natty: didn't test
  oneiric: 1409

xfs (relatime):
    natty: didn't test
  oneiric: 171

btrfs (relatime):
    natty: 61.5
  oneiric: 91

btrfs (relatime,nodatacow):
    natty: didn't test
  oneiric: 128

zfs (defaults):
    natty: 171
  oneiric: 996
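
For anyone wanting to reproduce the ZFS runs: "defaults" just means a freshly created pool with nothing tuned, along these lines (the pool, device and mountpoint names below are only examples, not what I actually used):

# create a pool on the VM's data disk and hand it to Postgres
zpool create tank /dev/vdb
zfs create -o mountpoint=/var/lib/postgresql tank/pgdata
chown postgres:postgres /var/lib/postgresql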


Conclusion:
Last time I ran these tests, xfs and ext4 pulled very similar results, and both were miles ahead of btrfs. This time around, ext4 has managed a significantly faster result than xfs. However, we have a new contender: ZFS performed extremely well on the latest Ubuntu setup, achieving triple the performance of regular ext4!
I am suspicious that ZFS may not be using any "barrier" code though, since if ext4 has barriers disabled, it surpasses even ZFS's high score. Or maybe it's because ZFS (according to Wikipedia) enables write caching on the drives and then uses a write scheme that is supposedly safe against failures?
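
If you want to check whether ZFS is actually honouring synchronous writes (roughly the equivalent of barriers), recent ZFS versions expose a "sync" property; the pool and device names below are just examples:

# sync=standard means fsync()/O_SYNC are honoured; sync=disabled would
# effectively fake them (a bit like nobarrier, only more so)
zfs get sync tank
# and check whether the drive's own write cache is turned on
sudo hdparm -W /dev/sda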

Oddly, ZFS performed wildly differently on Ubuntu 11.04 vs the 11.10 beta. I can't explain this. Any ideas?

Cheers,
Toby

Calculate billionth birth-seconds

A while ago Ingy döt Net introduced me to the concept of the billionth birth-second.
This is a celebration of when your lifespan reaches one billion seconds since birth.
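
The calculation itself is trivial; with GNU date you can do it in a couple of lines (the birth date here is obviously just an example):

# epoch seconds at birth, plus one billion, back to a human-readable date
birth=$(date -d '1983-04-12 08:30:00' +%s)
date -d "@$((birth + 1000000000))"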

I have created a website to help people easily calculate when this occurs, which is here:
Calculate your billionth birth-second

If you're interested, the source code is available here:
https://github.com/TJC/calculate-birthsecond

Wednesday, July 6, 2011

Why client-side verification is bad..

Words with bastards

As a programmer, you should really know that you can't trust anything you put in the hands of someone else. If you're writing a web, mobile or desktop application that talks to your server, you can't trust that someone won't subvert the local software. As such, you can't rely on it to do authentication or parameter verification. (As noted in the comments below, it's not a sin to perform checks on the client side -- just don't *trust* that they were performed.)

A common mistake is the web site that relies on JavaScript to ensure a form is filled in correctly, but then doesn't re-validate the submission on the server in case the JavaScript was bypassed. The problem extends to mobile and desktop applications that communicate with a remote server, too.
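
Bypassing client-side checks doesn't even require modifying the app; anything that can speak the same protocol to the server will do. For example (the URL and parameters here are made up):

# submit a form (or a game move) directly, skipping whatever the
# javascript or app would have validated
curl -X POST -d 'word=QXZJW' https://example.com/api/play_move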

I recently looked into Zynga's "Words with friends" mobile application, and to my surprise discovered that it did client-side verification of the words you are playing.

It wasn't very hard to circumvent this, resulting in a version which accepts moves like the above one..

Thursday, April 21, 2011

Effects of filesystems and mount options upon PostgreSQL performance

I have done some testing of different filesystems and mount options and how they affect PostgreSQL performance.

The performance was measured with pgbench, using the following script:


createdb bench
# initialise the benchmark database at scale factor 10
/usr/lib/postgresql/8.4/bin/pgbench -i -s 10 bench
sleep 5
# run 10 concurrent clients for 60 seconds
/usr/lib/postgresql/8.4/bin/pgbench -c 10 -T 60 bench
dropdb bench


The tests were performed on a 3GHz quad-core Intel i7 CPU with a triplet of 7200rpm SATA drives in a RAID-0 configuration.

These are the results -- note that the numbers varied slightly between runs even with the same options: by 2-3 tps on the ext4 tests, which is roughly 1%, and probably a similar percentage on the others.

Score is in average transactions-per-second. Higher values are better.


ext4:
235 data=ordered,strictatime
231 data=ordered,relatime
235 data=ordered,noatime
231 data=writeback,strictatime
235 data=writeback,relatime
238 data=writeback,noatime
235 data=writeback,noatime,commit=999
2392 data=writeback,noatime,barrier=0

xfs:
231 relatime,noquota
227 noatime,noquota
2439 noatime,noquota,nobarrier

btrfs:
62 defaults
67 nodatacow
69 nodatacow,noatime


My conclusions are:
1) btrfs sucks for database use, even with its recommended-for-databases flags set.

2) ext4 performs best with the recommended-for-db-use flags set (ie. data=writeback,noatime).
However the gains aren't massive over the defaults (which are data=ordered,relatime).

3) Disabling barriers gives a MAHOOSIVE performance increase, although it's noted that you should only do this if you have drives or RAID cards with battery-backed write caches. (I don't, but your production systems usually do. Obviously, confirm this, and test it by pulling the plug, before actually changing this option in production!)
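
For reference, applying the faster ext4 combination is just a matter of mount options (the device and mountpoint below are only examples, and leave barrier=0 off unless you really do have that battery-backed cache):

# the data= journalling mode can't be changed on a live remount, so umount first
umount /var/lib/postgresql
mount -o data=writeback,noatime,barrier=0 /dev/md0 /var/lib/postgresql
# or permanently, via /etc/fstab:
# /dev/md0  /var/lib/postgresql  ext4  data=writeback,noatime,barrier=0  0  2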

Tuesday, March 8, 2011

Profiling Perl with SystemTap on Linux

Perl gained some support for Sun's DTrace profiling system in version 5.10, and 5.12 added support for SystemTap, albeit only via systemtap's dtrace compatibility layer.

I tried using the Linux port of dtrace, but it just hung my machine, so I wanted to try SystemTap instead.

I struggled to find *any* documentation on how to do this with Perl though, so here are my notes from eventually getting it all working.
I'm still struggling to get custom marks in my own Perl apps to work, though; they seem too closely tied to dtrace's architecture.

These notes apply to Ubuntu Maverick 10.10 64bit.


1)
Grab kernel files from this PPA:
https://launchpad.net/~speijnik/+archive/utrace-kernel
(You'll need both the linux-image and linux-headers)

2)
sudo aptitude install systemtap systemtap-sdt-dev systemtap-doc

2b)
Note, I think I needed to run this too..
sudo make -C /usr/share/systemtap/runtime/uprobes

3)
Add your user to the stapdev and stapusr groups:
sudo usermod -a -G stapdev,stapusr tobyc

4)
Install a custom perl with -Dusedtrace enabled
eg. perlbrew install perl-5.12.3 -D usedtrace -D usethreads
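
You can then confirm the probes were actually compiled in:

perlbrew use perl-5.12.3
perl -V:usedtrace   # should report usedtrace='define';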

5)
Create perl.stp containing:

# fires on entry to every Perl sub: arg1 = sub name, arg2 = file, arg3 = line
probe process("perl").mark("sub__entry") {
    printf("%s: pid %d, uid %d\n", execname(), pid(), uid())
    printf("-> %s (%s:%d)\n", user_string($arg1), user_string($arg2), $arg3)
}

# fires when the sub returns
probe process("perl").mark("sub__return") {
    printf("<- %s\n", user_string($arg1))
}
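
Before running it, it's worth double-checking that systemtap can actually see the markers in the dtrace-enabled perl from step 4:

stap -L 'process("perl").mark("*")'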


6)
Run this command in one window:
stap perl.stp
and then in another window, run a Perl app.. (Make sure it runs using the custom Perl you built in step 4 though!)
You should see info being dumped about subroutines being entered and left..
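
Alternatively, stap can launch the program for you and exit when it finishes (the script name here is just an example):

stap perl.stp -c 'perl myscript.pl'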