The easiest way would be to use md5() or [URL="http://www.php.net/sha1"]sha1()[/URL] to generate a hash of some dynamically generated string. For checking uniqueness you have to store the hashes in a database table, and each time you generate a new id, check whether it already exists.
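A minimal sketch of that idea; the choice of uniqid() plus mt_rand() as the dynamic input is just one common option, not the only one:

```php
<?php
// Build a different input string on every call, then hash it.
// The hash is what you would store and check against the database.
$input = uniqid((string) mt_rand(), true); // random prefix + time-based id

$id32 = md5($input);   // 32 hex characters
$id40 = sha1($input);  // 40 hex characters

echo $id32, "\n", $id40, "\n";
```

Either hash works as an id; md5 is shorter, sha1 gives a larger output space.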
When storing in the db, give the column a UNIQUE index. That way, instead of checking whether the hash exists before inserting, you insert it and check for error number 1062 (duplicate entry); if that happens, add a digit/char to the input values and try again until it succeeds.
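The insert-and-retry approach can be sketched like this. The table name `codes` and the 8-character code are illustrative assumptions, and an in-memory SQLite database stands in for MySQL so the snippet runs standalone; MySQL reports duplicates as errno 1062, while SQLSTATE 23000 (integrity constraint violation) is the portable check used here:

```php
<?php
// Hypothetical schema: one column with a UNIQUE index on the code.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE codes (code TEXT NOT NULL UNIQUE)');

function insertUniqueCode(PDO $db, string $seed): string
{
    $stmt = $db->prepare('INSERT INTO codes (code) VALUES (?)');
    while (true) {
        $code = substr(md5($seed), 0, 8); // short code, so collisions are plausible
        try {
            $stmt->execute([$code]);
            return $code;                 // insert succeeded: code is unique
        } catch (PDOException $e) {
            if ($e->getCode() != '23000') {
                throw $e;                 // some other error: don't loop forever
            }
            $seed .= 'x';                 // duplicate: tweak the input, retry
        }
    }
}

$a = insertUniqueCode($db, 'hello');
$b = insertUniqueCode($db, 'hello');      // same seed, so it is forced to retry
var_dump($a === $b);                      // false: the second call got a new code
```

The point is that the database does the uniqueness check for you atomically, so there is no race between a SELECT and a later INSERT.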
I think the code Raju gave will always generate unique codes, as he’s using microtime and $i, which will be different every time a hash is generated, hence there is no need to check the hash in the db at all.
Any process that uses a hash will need to check for uniqueness: there are infinitely many possible values that map to any one hash, so no matter what method you use to generate the values being hashed, there is always the possibility of producing a second value that maps to a hash you have already generated.
Hashes are guaranteed NOT to be unique once you generate enough of them. It is unlikely that you will generate the same hash twice until you have generated a lot of them, particularly if the original values only vary by a small amount each time, since the purpose of a hash is that a minor change to the original produces an entirely different hash. You would still need to test whether a hash has already been used, though. Simply restarting the process if inserting the hash into the database fails would be the simplest way of handling it (assuming that you use the value as the primary key, which is appropriate since it is supposed to be unique).
Why not use uniqid("", true) and get a 23-character unique code? Just add "-XXX" to the end of your current format and you’ve covered all the characters. Personally, I’d save the value as-is, masking it with dashes only for user display and input.
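For reference, uniqid("", true) returns 23 characters: 13 hex characters plus a dot and an entropy fraction. The dash grouping below is just one possible display format, not a fixed convention:

```php
<?php
// More-entropy uniqid: 13 hex chars + "." + entropy digits = 23 characters.
$id = uniqid('', true);
echo strlen($id), "\n";  // 23

// Store $id as-is; add dashes only when showing it to the user.
// Grouping into blocks of 4 is an arbitrary illustrative choice.
$display = implode('-', str_split(str_replace('.', '', $id), 4));
echo $display, "\n";
```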
But in any case you have to check the database. As felgall said, it’s possible to get the same hash more than once. For example, md5 is limited to 32 characters of output, so there is a possibility of two different inputs producing the same value.
Using microseconds will not, by itself, solve the problem of being unique.
The thing is, it’s based on time since epoch. No sequence should ever be repeated. Theoretically. But then again, the problem crops up in .NET with GUID collisions occasionally. There’s also the issue of a user figuring out your schema, and exploiting it. In such a case, the $prefix parameter to uniqid would help minimize that. In any case, it’s another viable option to consider.
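A sketch of using the $prefix parameter that way; random_bytes() (PHP 7+) supplies unpredictable entropy, and the 4-byte prefix length here is an arbitrary choice, not a requirement:

```php
<?php
// Prepend random entropy so the id is not purely time-derived and
// therefore harder for a user to predict from the schema alone.
$prefix = bin2hex(random_bytes(4)) . '-';  // 8 hex chars + separator
$id = uniqid($prefix, true);               // prefix + 23-char uniqid

echo $id, "\n";
```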
The problem is that you create a hash out of the sequence, and several strings can generate the same hash, especially with weaker hashing methods like md5.
This means that if you generate enough records, you are almost bound to get a collision.
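A toy demonstration of why this happens (the pigeonhole principle): truncating md5 to 4 hex characters leaves only 65,536 possible values, and by the birthday bound a collision typically appears after only a few hundred inputs, on the order of the square root of the output space. The truncation is purely to make the effect visible quickly; full md5 behaves the same way, just at a much larger scale:

```php
<?php
// Brute-force a collision on a deliberately tiny (truncated) hash.
$seen = [];
$collision = null;
for ($i = 0; ; $i++) {
    $h = substr(md5("record-$i"), 0, 4);  // 4 hex chars: 65,536 buckets
    if (isset($seen[$h])) {
        // Two distinct inputs share the same truncated hash.
        $collision = ["record-{$seen[$h]}", "record-$i", $h];
        break;
    }
    $seen[$h] = $i;
}
printf("%s and %s both hash to %s\n", ...$collision);
```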
You can’t compare it to a GUID, since there are more variables in play here. Though, as you mentioned, it is possible to get collisions there as well (note: it depends on which GUID version your database engine is using).