Please check this mysql function

I’m working on a function to make sure mysql queries are retried if there’s a problem. Anybody see any problems or ways to improve this?

<?php
$hostname = "localhost";
$database = "website";
$username = "root";
$password = "password";
$conn = mysql_connect($hostname, $username, $password);
mysql_select_db($database, $conn);

define('DBHOSTNAME',$hostname);
define('DBUSERNAME',$username);
define('DBPASSWORD',$password);
define('DBDATABASE',$database);

function dbquery($query_string) {
	// $conn lives in the global scope, so pull it in here
	global $conn;
	$retries = 0;
	$max_retries = 2;
	
	do {
		if(!$conn) {
			// we lost the db connection. reconnect and keep the new link.
			$conn = mysql_connect(DBHOSTNAME, DBUSERNAME, DBPASSWORD);
			mysql_select_db(DBDATABASE, $conn);
		}
		
		// run the query
		$sql = mysql_query($query_string, $conn);
		
		// check the return value to see if we had a problem
		if(mysql_errno($conn) != 0) {
			// send an email or print to a log so we know something happened
		}
		
		// add a small delay between failed attempts to avoid looping really fast,
		// but keep the first retry really fast.
		if(!$sql && $retries > 0) {
			sleep(1);
		}
		// if the query failed, retry while incrementing $retries
	} while(!$sql && $retries++ < $max_retries);
	
	if(!$sql) {
		// we have tried $max_retries times and couldn't do it.
		// print a message to the user, print to log and email administrator...
		echo 'sorry';
	} else {
		// successful query
		echo 'success';
	}
	
	// hand the result (or FALSE) back to the caller
	return $sql;
}

// let's test it
$query = "INSERT IGNORE users (usr_uname) VALUES ('reggie')";
dbquery($query);
?>

Why would you assume a query that fails will work the next time you try it?
If an insert fails, it is usually because an index constraint is not met; most of the time (if you have set up the tables correctly) this will be because the primary key is already in the database (duplicate key).
The only way this could resolve itself is if the existing record that the new record clashes with were removed from the table.
It would be a real coincidence if that tuple were deleted during one of your retries.

Moreover, an “insert ignore” will never fail on a duplicate key, precisely because of the “ignore” keyword; the error is downgraded to a warning.
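
To illustrate the difference (just a sketch, assuming the same users table with a unique index on usr_uname as in the test query above):

<?php
// plain INSERT: a duplicate key makes mysql_query() return FALSE
// and mysql_errno() report 1062 (ER_DUP_ENTRY)
$ok = mysql_query("INSERT INTO users (usr_uname) VALUES ('reggie')", $conn);
if (!$ok && mysql_errno($conn) == 1062) {
	echo 'duplicate key - no amount of retrying will make this succeed';
}

// INSERT IGNORE: the same duplicate is silently skipped, mysql_query()
// returns TRUE and the error is downgraded to a warning, so a retry
// loop keyed on the return value will never trigger
$ok = mysql_query("INSERT IGNORE INTO users (usr_uname) VALUES ('reggie')", $conn);
var_dump($ok); // bool(true) even though no row was inserted
?>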

Oh, and losing the connection to the database while a script is running is, well, I won’t go as far as to say impossible, but at least very unlikely. If the connection were lost, it would most likely be because MySQL crashed, and in that case all hope of bringing the script to a successful end is lost anyway.

Well, a bit unlikely, but not impossible.

And in this case (and I know of others), it’s because the wait_timeout and interactive_timeout values are set very low on the MySQL server to avoid a lot of hanging connections. That’s why the reconnect is there.

Although I agree that you should probably only test for the two error codes CR_SERVER_GONE_ERROR and CR_SERVER_LOST, and hand the error code back to the application for everything else (for instance when a duplicate-key error occurs, etc.).
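
Inside the retry loop that could look something like this (a rough sketch only; 2006 and 2013 are the client error codes for CR_SERVER_GONE_ERROR and CR_SERVER_LOST):

<?php
// run the query and decide whether the error is worth a retry
$sql = mysql_query($query_string, $conn);
$errno = mysql_errno($conn);

if (!$sql && ($errno == 2006 || $errno == 2013)) {
	// "MySQL server has gone away" / "Lost connection during query":
	// reconnect and let the loop try the query again
	$conn = mysql_connect(DBHOSTNAME, DBUSERNAME, DBPASSWORD);
	mysql_select_db(DBDATABASE, $conn);
} elseif (!$sql) {
	// any other error (duplicate key, syntax error, ...) goes straight
	// back to the application instead of being retried
	return $errno;
}
?>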

This article, posted today, might be somewhat related to the topic…

It’s a very good point!

Even though this routine doesn’t suffer from it, since it only checks whether the $conn variable is null before it executes the query, and that check is handled in no time within PHP. And if the actual connection is dead, it performs the same steps as the example on mysqlperformanceblog.

(I must say that I’m actually a bit surprised by the mysqlperformanceblog entry, that checking the connection beforehand is such a “common” practice, but then again we have also seen plenty of first-select-and-check-then-insert race-condition code.)

I’m not sure whether PHP would set $conn to null once the connection to the MySQL server is lost. For that matter, I’m also not sure whether PHP is even aware that the connection to the MySQL server is lost.
In other words, it might be that $conn is not null but the connection is lost anyway, i.e. $conn is still a link resource, but the link resource is invalid.
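
One way around that would be to ask the server instead of trusting the $conn variable; something like this (untested sketch; mysql_ping() checks whether the connection is still alive and, depending on the client library settings, may even reconnect on its own):

<?php
// $conn can still look like a valid link resource even after the server
// has dropped the connection, so ping the server instead of testing $conn
if (!$conn || !mysql_ping($conn)) {
	// no link, or the link is dead - reconnect explicitly
	$conn = mysql_connect(DBHOSTNAME, DBUSERNAME, DBPASSWORD);
	mysql_select_db(DBDATABASE, $conn);
}
$sql = mysql_query($query_string, $conn);
?>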

I haven’t tested this, but it’s something to think about :slight_smile: