Python PostgreSQL Driver Authors Hate You
July 26th, 2009

DBAPI2 is all well and good. To a point. But if you have the usual website scaling problem, namely the one where you have a master database that worked fine when you were tiny and—dear god—not so well right now, the idea of interchangeable database libraries is basically a crock.

Before I am inundated with hate mail let me dial back my rhetoric a little bit. The existence of an API that works with altogether different databases is a wonderful thing and without it, things like Django or SQLAlchemy would not be possible. So rest assured I am not a complete maniac. I am not even really here to talk about dbapi2. I am just saying that 1) no two libraries are the same, 2) given sufficient scale this will matter to you, 3) the devil is in the details, and 4) the devil likes screwing things with white hot pokers.

Database client drivers intended for the same database can do drastically different things. By Python standards, the Postgres driver situation is completely schizo. There are a lot of them: the wiki lists five dedicated Postgres drivers, as opposed to just one for MySQL. People might choose different drivers for licensing reasons, for religious reasons, randomly (because they never did any analysis like I am about to do), or for completely inscrutable reasons because they are just plain out of their minds. You really would not believe how much blood I have seen spilled over Postgres client drivers.

Here, let me show you what I am talking about. Examine the following Python program, which runs an identical operation on a pyPgSQL connection and a psycopg2 connection.

    #! /usr/bin/env python
    from __future__ import with_statement
    from contextlib import closing
    from pyPgSQL import PgSQL as pypgsql
    import psycopg2

    test_dsn = 'host= port=5432 user=dan dbname=postgres'

    def test_select(c):
        with closing(c.cursor()) as cr:
            cr.execute('select 1')
            print cr.fetchall()

    def test():
        with closing(pypgsql.connect(test_dsn)) as c:
            test_select(c)
        with closing(psycopg2.connect(test_dsn)) as c:
            test_select(c)

    if __name__ == '__main__':
        test()


Here’s what happens when pyPgSQL runs that select.

select version()
DECLARE "PgSQL_0062AF80" CURSOR FOR select 1
SELECT typname, -1 , typelem FROM pg_type WHERE oid = 23
CLOSE "PgSQL_0062AF80"

First note that pyPgSQL issues a SELECT VERSION() command for every new connection. Why’s it do that? Well since I’ve already dug through the source I can tell you that it does this to see if it has to do something wacky for PostgreSQL 7.1 and below. There’s no way to disable this without patching the library.

This is not an enormous problem if the connection is pooled and reused, but it immediately becomes one if you want to use an out-of-process pool like PgBouncer. Every pyPgSQL connection that you make to pgbouncer will run this query, and in that scenario you are probably making zillions.

We also get a transaction, even though I don't remember having asked for one; DECLARE CURSOR only works inside a transaction block, so pyPgSQL has to open one on our behalf. Since we never commit, it's rolled back when the connection closes. As default behavior this bites; more on this in a bit.

Finally, get a load of this:

SELECT typname, -1 , typelem FROM pg_type WHERE oid = 23;

That is pyPgSQL asking postgres for the name of the type associated with OID 23. I can tell you what it is without looking: it's an int4. The OIDs of built-in types are hardcoded (see catalog/pg_type.h in the postgres source), so this query is worse than pointless.
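Here is a sketch of what baking that table into the driver could look like, with a handful of entries taken straight from catalog/pg_type.h. The helper function and its database-fallback hook are my own invention, not pyPgSQL API:

```python
# A few built-in type OIDs, verbatim from catalog/pg_type.h in the
# Postgres source.  These values are fixed across releases, so no
# query is ever needed to resolve them.
BUILTIN_TYPE_NAMES = {
    16: 'bool',
    20: 'int8',
    21: 'int2',
    23: 'int4',
    25: 'text',
    700: 'float4',
    701: 'float8',
}

def type_name(oid, lookup_in_db=None):
    """Resolve a type OID, hitting the database only for OIDs we
    don't already know about (i.e. user-defined types)."""
    try:
        return BUILTIN_TYPE_NAMES[oid]
    except KeyError:
        if lookup_in_db is not None:
            return lookup_in_db(oid)  # hypothetical fallback hook
        raise

# type_name(23) resolves to 'int4' with no round-trip at all.
```

Only genuinely user-defined types would ever cost you a round-trip to pg_type, and those you could cache after the first lookup.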

Well that was a shitshow. Now, what does psycopg2 do?

SHOW client_encoding
SHOW default_transaction_isolation
select 1

This is marginally more acceptable. Which, to my point, means it's radically different. Right off the bat it is obvious that psycopg2 does not use server-side cursors: it just runs the query bare and hauls the whole result set over to the client. Apparently you can argue either side of this, but regardless it's a significant difference between the two libraries.
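To be fair, psycopg2 can be asked for a server-side cursor explicitly: pass a name to cursor() and it issues the same sort of DECLARE that pyPgSQL emits for everything. Here is a sketch of a lazy-fetching helper built on that; the helper, its name, and the batch size are mine, and only cursor(name=...) is psycopg2's API:

```python
def fetch_lazily(conn, query, batch=1000):
    # A *named* cursor makes psycopg2 run the query server-side via
    # DECLARE "lazy_fetch" CURSOR FOR ..., pulling rows over in
    # batches instead of hauling the whole result set at once.
    cr = conn.cursor(name='lazy_fetch')
    cr.arraysize = batch          # rows per fetchmany() round-trip
    cr.execute(query)
    try:
        while True:
            rows = cr.fetchmany()
            if not rows:
                break
            for row in rows:
                yield row
    finally:
        cr.close()
```

So the difference between the two drivers is really about defaults: pyPgSQL declares a cursor whether you want one or not, psycopg2 only when you ask.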

It would be better if there were a way to tell psycopg2 what the client encoding and default transaction isolation level are, rather than have it query them on every connection. Again, this is fine unless you want to use an external connection pool. Note that READ COMMITTED is the default anyway, which makes the round-trip unnecessary, but it happens regardless. (And since Postgres only really supports two isolation levels, with READ UNCOMMITTED silently treated as READ COMMITTED, the answer would be likewise pointless if the server setting were READ UNCOMMITTED.)

As with pyPgSQL, the transaction-as-default-behavior thing is thoroughly brutal. As far as I can tell, this is not something the DBAPI2 PEP demands. There are many reasons why I think this is a bad idea, but they all boil down to the fact that transactions are not free. If you’re executing a stored procedure, or just reading data, this boilerplate is superfluous. And if you want to get the most out of your database, you will have to turn this off. The syntax for disabling it is, of course, totally different and underdocumented in both libraries.
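For reference, psycopg2's off switch lives in psycopg2.extensions: connections have a set_isolation_level() method, and ISOLATION_LEVEL_AUTOCOMMIT stops the driver from wrapping every statement in an implicit BEGIN. A minimal sketch; the wrapper function is my own, and in real code you should import the constant from psycopg2.extensions rather than hardcoding its value as I do here to keep the sketch self-contained:

```python
# psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT; hardcoded here
# only so this sketch stands alone without the library installed.
ISOLATION_LEVEL_AUTOCOMMIT = 0

def make_autocommit(conn):
    # In autocommit mode every statement takes effect immediately:
    # no implicit BEGIN before your SELECT, and no ROLLBACK when the
    # connection is closed without a commit.
    conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
    return conn
```

After this, BEGIN and COMMIT become your responsibility, which is exactly the point: you issue them only around the statements that actually need them.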

Note that PDO does nothing like this. PDO expects you to know what a transaction is. It gives you a prepared statement that you might not want, but that’s a different problem.

If you’ve read this far, maybe you’d like to know what my advice is. Well, first of all, do not take the choice of driver lightly, and understand what you’re getting into. You almost definitely want to use psycopg2, because on balance it is the least evil option [1]. If it comes to it, you can always patch out the unnecessary SHOW statements (although honestly, I’m not sure this will ever become an issue like pyPgSQL’s stupid selects). However, you should take care to manage your use of transactions from the start of your project. That is the kind of thing that will be really painful to change after you have hundreds of queries implicitly relying on the default behavior.

[1] Also note that the pyPgSQL source mixes tabs and spaces. FML.