PDO vs. MySQLi

Suppose this: I’m going to be developing something that I know will always use a MySQL database. I have read everything I can find on both PDO and MySQLi, but I can’t find any benchmarks or arguments for/against either of them (basically, everything written on these two extensions covers only one of them, and never any comparison whatsoever).

It seems as if MySQLi supports a lot more features than PDO: master/slave setups, multi-queries, SSL connections and unbuffered results (it seems as if PDO should support this too, but I couldn’t find any definite info about it, and no constants, options or methods confirming it), to mention a few.

So, considering an all-MySQL application – which extension would you choose, and why? (I know all about MDB2, AdoDB and Pear_DB – my question has nothing to do with any of these.)

I think I have read somewhere that all PDO_Mysql queries are unbuffered.

Though if I am wrong someone please correct me

It’s the default setting – you can change that. The reason is that not all databases support buffered queries, so for portability it’s disabled.
There were some issues with buffered queries on certain platforms in the past, but they have been sorted out now.
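For reference, buffering in pdo_mysql is controlled by a driver-specific attribute. This fragment assumes an existing MySQL connection in `$pdo`, so it is shown as configuration only, not as a runnable example:

```php
// pdo_mysql-specific attribute; needs a live MySQL connection,
// so this is just a configuration fragment.
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // unbuffered
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);  // buffered
```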

All PDO queries are unbuffered, which is incidentally why there’s no method for retrieving the number of results from a select query. I don’t know much about the mysqli extension, but I’m not surprised that it would support more features, as PDO is designed to be portable across databases.

Maybe I should have put my question this way: is there any reason to use PDO over MySQLi if you only work with a MySQL database? (MySQLi seems faster, has more features, has been in PHP longer (more stable?), will get the new MySQLnd driver, etc.) I really wish there were more information in the manual (or at least somewhere) about this stuff. If PHP offers three (soon to be 3½ with MySQLnd) ways to talk to MySQL, you’d think there would be some kind of comparison between them somewhere.

The main reason to use PDO is that using prepared statements and parameter binding ensure that your queries are verifiably secure. You don’t have to use string escape functions or any other such rubbish, so in a way you have built-in security without having to think much about it.

Another advantage of PDO is that you can reuse prepared statements. Bind some parameters, execute the query, bind some different parameters, execute it again, rinse and repeat. That saves a lot of effort if your application uses the same queries with different parameters over and over again.
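To make that concrete, here is a minimal sketch of preparing once and executing repeatedly. An in-memory SQLite database is used purely so the snippet is self-contained and runnable; with MySQL only the DSN and credentials would differ, and the table and column names here are invented for the example.

```php
<?php
// Prepare one statement, then execute it with different parameters.
// SQLite in-memory stands in for MySQL to keep this self-contained.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

// Prepare once...
$insert = $pdo->prepare('INSERT INTO users (name) VALUES (:name)');

// ...execute many times with different bound parameters.
foreach (['alice', 'bob', 'carol'] as $name) {
    $insert->execute([':name' => $name]);
}

$select = $pdo->prepare('SELECT name FROM users WHERE name = :name');
$select->execute([':name' => 'bob']);
$found = $select->fetchColumn(); // "bob"

$count = $pdo->query('SELECT COUNT(*) FROM users')->fetchColumn();
```

The parameters never touch the SQL string, so no manual escaping is involved.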

What I like best is that you can use PDO in try/catch blocks, which makes your code a heck of a lot cleaner. No more “if (mysql_whatever_fails()) {cry_about_an_error();}” fifty zillion times throughout a simple query process.
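A minimal sketch of that try/catch style, again against in-memory SQLite so it is self-contained (the table names are made up):

```php
<?php
// With PDO::ERRMODE_EXCEPTION, every failed call throws a PDOException,
// so one try/catch replaces per-call error checks.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$caught = null;
try {
    $pdo->exec('CREATE TABLE t (id INTEGER)');
    $pdo->exec('INSERT INTO t (id) VALUES (1)');
    // A typo'd table name fails here and jumps straight to the catch:
    $pdo->query('SELECT * FROM no_such_table');
} catch (PDOException $e) {
    $caught = $e->getMessage();
}
```

None of the successful calls need individual checks; the first failure jumps to the catch block.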

MySQLi also has prepared statements: http://se.php.net/manual/en/function.mysqli-stmt-bind-param.php

Well, I would wrap pdo or mysqli in my own class anyway, so that doesn’t really matter.

That’s pretty much what I do too. In the absence of any hard figures to support whether one is faster than the other, it really comes down to an aesthetic preference and how much you enjoy typing. I prefer object-oriented code, and I always hated having to write “or die(mysql_error())” at the end of every line of code. It’s just a hunch, but I’d guess that catching all of the errors in a single catch block has to be more efficient, mainly because there’s less code to parse.

> Bind some parameters, execute the query, bind some different parameters, execute it again, rinse and repeat. That saves a lot of effort if your application uses the same queries with different parameters over and over again.

What about stored procedures and views with parameters?

> … ensure that your queries are verifiably secure.

Nothing is foolproof and 100 percent secure, that’s for certain. I am not saying that you shouldn’t put some trust in PDO, but what I am saying is that you shouldn’t leave anything to chance.

Are you so sure of yourself that there isn’t an unknown bug in PDO just waiting to be found? A bug is a bug, and yes, it can be patched, but not everyone is going to be aware of that patch, nor is everyone going to care.

There are still people out there, even today, running WP on versions that need patches to secure the application against attacks.

> … so in a way you have built-in security without having to think much about it.

That is just another false economy you’ve described. Never make the -beep- assumption that something is secure. And yes, you do need to think about it; it should be the cornerstone of not only how you design your application but how you deploy it, and doubly so for third-party libraries over and above your own code.

Tell me you don’t actually develop for today’s internet, please?

> but I’d guess that catching all of the errors in a single catch block has to be more
> efficient, mainly because there’s less code to parse.

What are you talking about? It’s got nothing to do with the amount of script required to be parsed; it has everything to do with basic, common-sense good practices and the need to maintain your application in a more structured manner.

Does the quote button not work for you, or is the “old school” style of quoting part of your ubergeek shtick? :rolleyes:

I’ll disregard your previous post in its entirety on account of the boorish (but utterly predictable, based on past experience with present company) ad hominem attack therein and reply only to this one. I will charitably refrain from suggesting that your social skills might improve if you left the basement once in a while and tried socializing with people in the real world. My advice to you is: never flame or troll me again unless you want to be severely, mercilessly, and brutally pwned.

Back on topic. The thread’s author mentioned benchmarks in his initial post, from which one might reasonably infer that speed and performance are among his concerns. That is the matter I addressed. Supposing that an extension that lends itself well to a procedural style of coding (and yes, I do realize that mysqli provides for OOP-style usage, although, in my opinion, it leaves much to be desired) is faster and more efficient than an extension designed for OOP coding, it is my contention that the overhead of parsing the extraneous (and, I believe, unnecessary) procedural-style error handling code might offset or even completely negate such benefits. That is all I was saying, really.

But since you brought up the need to maintain one’s application in a structured manner, am I to infer that you think handling individual errors as one goes along exhibits better structure than trying a functional unit of code and then catching any exceptions if the unit fails? Or would you say that one approach is neither better nor worse than the other, and that it is solely a matter of preference?

You’ll have to excuse me, but what does my enjoyment of typing have to do with this? Or are you suggesting that using a procedural database interface requires more typing than an OO one?

Parsing speed is such a tiny part of total execution time that I hardly see it as a variable to take into account when selecting my database interface. I also have a rather strong hunch that OO code is a fair bit slower than procedural code in PHP (even though it got a performance boost with PHP 5, AFAIK), considering that much of the object model was tacked on later (PHP 5, etc.).

Yeah, I was wondering this too – I know MySQLi supports this; how about PDO? Anyone got a clue?

As I stated earlier, OO code is generally slower than the same procedural code in PHP, and OO code also tends (in the end, not in short examples) to be bigger in number of characters, but as I said, parsing speed is such a tiny part of the entire execution time that it doesn’t really matter.

I don’t get this obsession with try/catch. Since I will wrap either MySQLi or PDO in an object of my own anyway, all errors reported by MySQLi will of course be thrown as exceptions of my choosing, and all PDO exceptions will be caught within my wrapper class, re-formatted and re-thrown.
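A rough sketch of that wrapping approach might look like the following. The `Db` and `DatabaseError` names are invented for illustration, not from any library, and SQLite in-memory stands in for MySQL so the snippet is runnable:

```php
<?php
// Thin wrapper that translates driver errors into an application-level
// exception of our own choosing.
class DatabaseError extends RuntimeException {}

class Db
{
    private PDO $pdo;

    public function __construct(string $dsn)
    {
        $this->pdo = new PDO($dsn);
        $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }

    /** Run a query; re-throw driver errors as our own exception type. */
    public function query(string $sql, array $params = []): array
    {
        try {
            $stmt = $this->pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        } catch (PDOException $e) {
            throw new DatabaseError('query failed: ' . $e->getMessage(), 0, $e);
        }
    }
}

$db = new Db('sqlite::memory:');
$db->query('CREATE TABLE t (n INTEGER)');
$db->query('INSERT INTO t (n) VALUES (?)', [7]);
$rows = $db->query('SELECT n FROM t');
```

Calling code then only ever deals with `DatabaseError`, regardless of which extension sits underneath.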


Just implemented a small to medium accounting system in MySQL - about 50 tables and 160 stored procs and a couple of dozen views.

We actually switched from PDO to MySQLi. The error messages in PDO are broken for MySQL. You get very misleading reporting if newer MySQL versions throw an error PDO doesn’t understand. Given how often MySQL throws errors, this is a real problem.

Also, PDO is not that portable. You have so many constants to set up for the different DB engines that you end up wrapping the DB connection/transaction anyway.

One big advantage though, is that you will have to learn less when you switch DB.

That’s what I thought. My experiences with trying to do serious DB work with MySQL have been so appalling that even at this late stage we are considering switching. Yep, rewriting 160 stored procs is preferable to this damn thing. We actually cancelled our Gold support contract with them, we were so frustrated.

Problems include:

  1. I can crash the MySQL client in six lines of code, unless you start it with --no-auto-rehash.
  2. On a 400-test suite I get random errors about 25% of the time without transactions. That is, you get strange timing behaviour on roughly 1 in 1,600 queries. I had to wrap everything in a transaction.
  3. Silent transaction dropping, which is especially a problem in migration scripts :eek:.
  4. Cannot recursively call procs.
  5. Optimiser very confused by views. Rather unfortunate, as that is our main way of refactoring repeated queries.
  6. Pathologically slow on common queries, especially in() and non-correlated subqueries (the optimiser thinks they are correlated).
  7. Painfully slow cursors (we dropped them all anyway, so this is moot).
  8. Broken OR clause! I kid you not :(. Had to rewrite OR queries with unions. This actually sped them up. Huh?
  9. Procs break replication every which way. Text-based replication just doesn’t work. Fixed in 5.1, but they just downgraded that to “alpha” :(.
  10. No signal statement, which makes it a bit difficult to do any error handling at all. We had to emulate it by doubly writing to a unique field. That was a problem for replication. Just as well it wasn’t working anyway…
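To illustrate point 8, here is the OR-to-UNION transformation in isolation. SQLite is used only to make the snippet self-contained, and the schema is invented; the claim about the speed difference is the poster’s, about MySQL.

```php
<?php
// The two queries below return the same rows; the second is the
// UNION rewrite of the first.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE orders (id INTEGER, status TEXT, priority INTEGER)');
$pdo->exec("INSERT INTO orders VALUES (1,'open',1),(2,'closed',5),(3,'open',5)");

// Original form:
$withOr = $pdo->query(
    "SELECT id FROM orders WHERE status = 'open' OR priority = 5 ORDER BY id"
)->fetchAll(PDO::FETCH_COLUMN);

// Rewritten form (UNION de-duplicates, just as OR does):
$withUnion = $pdo->query(
    "SELECT id FROM orders WHERE status = 'open'
     UNION
     SELECT id FROM orders WHERE priority = 5
     ORDER BY id"
)->fetchAll(PDO::FETCH_COLUMN);
```

Each branch of the UNION has a single, simple predicate, which is apparently what the optimiser in question coped with better.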

In short, use PDO…with Postgres.

yours, Marcus

Thanks for your reply Marcus.

  • Did you notice any difference in execution/query speed between PDO and MySQLi when you made the switch to MySQLi?
  • I’ve been looking around for a comparison between MySQL and PostgreSQL, both in terms of features, speed and ACID-ity (is that even a word?) – do you know of any?

Interesting. We’ve observed most of those issues as well, although we don’t use stored procedures, so I don’t know about that part. I realised the problem with UNION vs. OR last week; that’s really… well… strange?

How serious are you about switching? I mean, MySQL is supposedly a lot faster than Postgres.

MySQL is mostly faster on Selects, when using MyISAM. InnoDB is slower, and for writing a lot of data, Postgres can be faster. I’ve also heard that Postgres scales better over multiple processors.

Having used both, I’d say that the performance differences were pretty negligible, although the client I had that used Postgres had the resources to properly set it up and tune it. Integrity- and feature-wise, by all reports I’ve seen, Postgres is definitely the better choice. The main issue is ease of use; a lot of that power comes at a price. Sequences are definitely more flexible than MySQL’s simple auto-increment, but much more of a hassle. Ease-of-use issues are also compounded by the available admin tools; phpPgAdmin is pretty poor compared to phpMyAdmin, although I imagine a proper desktop tool would make things easier.

> Broken OR clause! Had to rewrite OR queries with unions.

Ummm, could you post this SQL? Thanks.


No, but we made no serious attempt to measure it. Most of the time taken in the test suite is from a few larger queries which swamp everything 80/20 style. The connector performance might have been swamped by this also.

I have found some nice ones comparing MySQL 5 with Postgres on the Postgres and Bizgres sites. Sorry, I cannot track them down right now.

From word of mouth (InnoDB only): performance-wise, MySQL connects fast and is fast at simple queries. Postgres is better at full table scans and complex queries, and is heavily favoured by the OLAP community – Greenplum uses it, for example. As for general use, the worst complaint I’ve ever heard against Postgres is that it can be slow on joins of more than five tables. As you can emulate materialised views using triggers (although I’ve not done this), there are probably ways of factoring out even this problem. I’ve heard far more complaints against MySQL.

yours, Marcus


We keep thinking we are through the worst of it, and then it spites us again. We are close to a deadline right now, so we will probably roll with it as is and hope MySQL 5.1 solves the replication issue. I’ve assigned someone to investigate a script to port the procs (Talend can hopefully port the data and schema).

I would never use MySQL again for a fully DB-driven app, though.

yours, Marcus