Is this locking foolproof against race conditions?

I have created a mechanism that allows only one instance of a PHP script to run at a time - actually the part after “// do your stuff here…”. I wonder if it’s 100% guaranteed that a race condition cannot occur and result in two scripts firing simultaneously:

try {

	$lockFile = "script.lock";
	if (!$fp = @fopen($lockFile, 'x')) {
		// another instance already running
		if (@filemtime($lockFile) < time() - 15*60) {
			// after 15 minutes we consider it a deadlock, so let's release it
			@unlink($lockFile);
			throw new Exception("Deadlock found and released. Current script stopped.");
		}
		throw new Exception("Another instance running. Script stopped.");
	}

	// do your stuff here that should be done only by 1 script at a time
	// ...

	fclose($fp);
	unlink($lockFile);

} catch (Exception $ex) {
	$message = $ex->getMessage();
	// log or display $message ...
}

According to the documentation, an attempt to create a file in ‘x’ mode should fail when the file already exists, so in theory this should work. But can I be sure that the implementation of fopen($file, ‘x’) is atomic? If in reality some C code first runs a simple check on whether the file exists and then creates it if it doesn’t, then it is possible that the file gets created by two separate threads, the latter overwriting the former. Or is this handled by the file system, which guarantees atomicity?
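For what it’s worth, the failure behaviour is easy to observe even in a single process. This small sketch (the file name is arbitrary) only demonstrates the ‘x’-mode semantics, not atomicity itself; on POSIX the atomicity comes from the underlying open(2) call being made with O_CREAT | O_EXCL, which the kernel guarantees is atomic:

```php
<?php
// Demonstration of fopen 'x' mode semantics (file name is arbitrary):
// the second attempt to create the same file must fail.
$lock = sys_get_temp_dir() . '/fopen-x-demo.lock';
@unlink($lock);                 // start clean

$a = @fopen($lock, 'x');        // creates the file, succeeds
$b = @fopen($lock, 'x');        // file already exists, fails

var_dump($a !== false);         // bool(true)
var_dump($b === false);         // bool(true)

fclose($a);
unlink($lock);
```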

I’m mostly interested in how this works on Unix systems. I don’t think I am capable of testing this code, since it would be extremely difficult for me to start two instances at almost exactly the same time without specialized software; besides, I don’t work on Unix apart from using Unix hosting. So I can only rely on the expertise of others.

Forgive me for being dense, but I thought that PHP scripts are all executed sequentially, one at a time, and only become concurrent under very special circumstances, such as when using the streams wrapper.

Perhaps you misunderstood my case, because I don’t think you would lack such basic knowledge about PHP. Of course, when a PHP script starts it gets executed sequentially until it ends, but you can have many requests to the same script at the same time, and then it gets executed simultaneously in multiple threads. The code above is to prevent multiple simultaneous requests to the same script.

The logic seems OK.
I’d use the file_exists function instead of trying to open the lock, though. And I don’t like hiding errors with @.

First I used file_exists, but later I changed it to fopen with ‘x’ because file_exists would not be reliable. I’d have to do something like this:

if (!file_exists($lockFile)) {
	// create lock file
	file_put_contents($lockFile, '');
	// do stuff...
} else {
	// another instance running, stop script
	// ...
}

And now it’s possible that two threads simultaneously decide that the file does not exist and both create the new file. Or do you have a better idea? fopen() in ‘x’ mode with error suppression was the best I could think of.

Is there a reason why you don’t use flock…or is there something else going on?

I am not sure why you’re trying to reinvent the wheel.

Would it not be easier using the available functionality to handle the locking?

One thing to keep in mind: the locking method you use needs to be used consistently across the system. If you have multiple languages accessing the same file(s), you need to be certain they all obey the same locking rules.

Edit: Guess I should stop opening topics and reading them in between work without refreshing them before replying. :slight_smile:

My first experience with flock() was quite problematic, and besides, I wanted a solution that would work cross-platform (at least to some reasonable degree on Windows). And it’s not really complicated compared to using flock(). Anyway, I’ve done some more testing and flock() seems to work well. And the documentation must be wrong or outdated, because LOCK_NB works on Windows too - at least on XP with PHP 5.2.4. I have tested this code:

$lockFile = "test.lock";

$fp = fopen($lockFile, 'w+');
echo "fopen<br/>"; ob_flush(); flush();

if (!flock($fp, LOCK_EX | LOCK_NB)) {
	echo 'Unable to obtain lock, script stopped.';
	exit;
}

echo "running script after flock...<br/>"; ob_flush(); flush();

echo "end.";

Now my question is - in this example is there a possibility of deadlock? Theoretically, the lock should be released when the script ends but what if the script ends unexpectedly due to some server error? Is it possible the lock will never be released then? If so then what can be done about it to handle it automatically?
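As I understand it (assuming Linux/BSD flock semantics, which are advisory and tie the lock to the open file description), the kernel drops the lock as soon as the descriptor is closed, and that happens automatically when the process exits for any reason, including a fatal error. A same-process sketch of that behaviour, relying on the fact that two independent opens of the same file conflict with each other under flock():

```php
<?php
// Sketch (assumed POSIX flock semantics): the lock is tied to the open
// file description, so closing it -- which the OS does automatically on
// process exit, even after a crash -- releases the lock.
$path = sys_get_temp_dir() . '/flock-demo.lock';

$a = fopen($path, 'c');          // 'c': create if missing, don't truncate
$b = fopen($path, 'c');          // an independent open of the same file

flock($a, LOCK_EX | LOCK_NB);    // first handle takes the lock
$second = flock($b, LOCK_EX | LOCK_NB); // conflicting request is denied

fclose($a);                      // closing releases the lock (as exit would)
$third = flock($b, LOCK_EX | LOCK_NB);  // now it succeeds

flock($b, LOCK_UN);
fclose($b);
```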

Just out of curiosity - what exactly are you trying to achieve, for what purpose? There might be another way by creating a simple server or simulating threading rather than relying on file locks.

In this specific case I have a script that can be run either by a cron job or as a result of user action - its purpose is to send out email messages queued in the database. I simply don’t want the same person to receive multiple messages when more than one instance is running.

I also want to create a generic method (class) that can be used for similar purposes. There are other use cases I want to use it for - for example, a script that creates a new invoice, where a lot of other related data needs to be read and changed in an atomic way. Simply using this kind of lock at the beginning of execution to disallow multiple instances may be much easier and much less work than dealing with table locking, row locking, transactions, SELECT … FOR UPDATE and similar methods - it can get quite complicated when there are many tables and data sources involved, and pretty hard to test for all the possible edge and extremely rare cases that can happen with multithreading.
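A hypothetical sketch of such a generic class, built on flock() (the class and method names are my invention, not an existing API or the poster’s actual code):

```php
<?php
// Hypothetical reusable single-instance lock built on flock().
// The OS releases the lock automatically when the process exits,
// so a crashed holder cannot leave a stale lock behind.
class ScriptLock
{
	private $fp = null;
	private $path;

	public function __construct($path)
	{
		$this->path = $path;
	}

	// Try to become the single running instance; returns false if
	// another process already holds the lock.
	public function acquire()
	{
		$this->fp = @fopen($this->path, 'c');
		if ($this->fp === false) {
			return false;
		}
		return flock($this->fp, LOCK_EX | LOCK_NB);
	}

	// Explicit release (optional; process exit also releases the lock).
	public function release()
	{
		if ($this->fp) {
			flock($this->fp, LOCK_UN);
			fclose($this->fp);
			$this->fp = null;
		}
	}
}

// Usage: guard the mail-queue script (or any similar job).
$lock = new ScriptLock(sys_get_temp_dir() . '/mailqueue.lock');
if (!$lock->acquire()) {
	exit("Another instance running.\n");
}
// ... send queued messages ...
$lock->release();
```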

Have you considered creating a simple server that can handle connections (sockets, a server listening on a certain port) and using a queueing system to determine what to execute and what to deny? That way you don’t have to rely on a simple file being there, or risk deadlocks due to some random error.

I don’t think it is a solution for me - I need something that will be portable and work on a shared hosting. I doubt any shared hosting would implement anything like that for me. This looks like a good solution for really large sites - that would justify the need to setup another server.

I’m talking about a simple small server written in PHP; it wouldn’t have more than… well, 20-ish lines of code - listening on a port, accepting connections and using a queue to determine what to execute and what to deny.
But then again, as you said - probably not the way to go, since you’d need SSH access and whatnot :confused:
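A much lighter variant of the port idea (my suggestion, not the design discussed above) doesn’t even need a dedicated server process: binding a local TCP port is itself an atomic, OS-enforced mutex, and the port is freed automatically when the process exits, so a crash cannot leave a stale lock. The port number here is arbitrary:

```php
<?php
// Sketch: use an exclusive port binding as a single-instance lock.
// Only one process can listen on a given port at a time; the OS frees
// it automatically on exit. Port 49777 is an arbitrary choice.
$sock = @stream_socket_server('tcp://127.0.0.1:49777', $errno, $errstr);
if ($sock === false) {
	echo "Another instance is running.\n";
	exit(1);
}
// ... work that only one instance may do at a time ...
fclose($sock);                  // port freed here (or on process exit)
```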

I must admit, when I read this thread I too thought an MQ of some sort could quickly remedy this.

The cool kids are using them these days too, so instant geeky admiration is almost guaranteed. :stuck_out_tongue: