I've built a site that stores sessions in a MySQL table, and I've been encountering some errors when updating/inserting session records. For the most part the sessions work. I've built in some error logging and reporting, and on occasion I get an error report about a duplicate entry while trying to insert a new record. I'm using mysql_affected_rows() to determine whether the UPDATE succeeded, and based on its success or failure the function either returns or inserts a new record.
I guess what I'm wondering about is the scope of mysql_affected_rows(). Say more than one person is using the site simultaneously and their actions are producing queries against the database. Does mysql_affected_rows() report on the last query sent to the database in general, or only on the last query sent within the scope of the currently executing script (i.e., over its own connection)?
Following is my SessionWrite function:
I know I can pass an argument to the mysql_affected_rows() call like this -- mysql_affected_rows( $session_res ) -- but that consistently produces an error if the session does not already exist. (From the manual it looks like the function expects a link identifier from mysql_connect() rather than a query result, which may be related.)
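For reference, this is how the per-connection form looks per the PHP manual: the optional argument is the link identifier returned by mysql_connect(), and the count applies to the last query issued over that specific connection. The credentials and table below are hypothetical, and the calls are guarded since the old mysql extension may not be available:

```php
// Sketch (hypothetical credentials/table): mysql_affected_rows() takes an
// optional *link identifier* -- the connection resource from
// mysql_connect() -- not a query result. Guarded so the snippet is a no-op
// where the legacy mysql extension is unavailable.
if (function_exists('mysql_connect')) {
    $link = mysql_connect('localhost', 'user', 'password');
    mysql_select_db('mydb', $link);
    mysql_query("UPDATE sessions SET ses_time='" . time() . "' WHERE ses_id='abc'", $link);
    // Reports rows changed by the last query sent over *this* connection,
    // not by queries other clients sent over their own connections.
    $affected = mysql_affected_rows($link);
}
```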
function DB_SessionWrite( $ses_id, $data )
{
	// Try to update an existing session row first.
	$session_sql = "UPDATE sessions SET ses_time='" . time() . "', ses_value='$data' WHERE ses_id='$ses_id'";
	$session_res = @DB_Query( $session_sql, true );
	if ( !$session_res ) return false;
	if ( 0 < mysql_affected_rows() ) return true;
	// No rows updated: assume the session is new and insert it.
	$session_sql = "INSERT INTO sessions (`ses_id`, `ses_time`, `ses_start`, `ses_value`) VALUES ('$ses_id', '" . time() . "', '" . time() . "', '$data')";
	$session_res = @DB_Query( $session_sql );
	if ( !$session_res ) return false;
	return true;
} // END FUNCTION
NOTE: The statement -- if ( !$session_res ) return false; -- returns false only if there was an error issuing the query at all; it is not a test of whether any rows were affected, hence the mysql_affected_rows() check.
On the whole, the function works properly (most of the time), which is why I'm thinking mysql_affected_rows() is asking the MySQL daemon how many rows were affected by the last query issued to the daemon overall, not necessarily the last query issued by the given instance of the running PHP script -- and is therefore only a problem when there are multiple simultaneous users.
Can someone help clarify this for me and/or offer some suggestions as to how this could be done differently? I guess I could throw in a third query (a SELECT) to check whether the session already exists, but that means more queries, and I'd like to keep this as efficient as possible.
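One direction worth considering, sketched below under the assumption that ses_id is the table's primary (or unique) key -- which the duplicate-entry errors suggest -- is MySQL's atomic INSERT ... ON DUPLICATE KEY UPDATE. A single statement inserts the row when it's new and updates it otherwise, so two simultaneous requests for the same session can no longer race between the UPDATE and the INSERT, and no extra SELECT is needed. The values here are hypothetical, and $ses_id/$data are assumed to be already escaped:

```php
// Atomic upsert sketch (assumes ses_id is the primary/unique key and that
// $ses_id and $data are already escaped). Inserts the row if it is new,
// otherwise updates ses_time and ses_value in the same statement.
$ses_id = 'abc123';           // hypothetical session id for illustration
$data   = 'serialized-data';  // hypothetical session payload
$now    = time();
$session_sql = "INSERT INTO sessions (`ses_id`, `ses_time`, `ses_start`, `ses_value`)"
             . " VALUES ('$ses_id', $now, $now, '$data')"
             . " ON DUPLICATE KEY UPDATE ses_time=$now, ses_value='$data'";
// $session_res = @DB_Query($session_sql);  // issue via the same wrapper
```

This would collapse DB_SessionWrite to one query per call regardless of whether the session exists.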