After looking for the best way to handle some “multi-threading” in PHP, and writing some of my own code, I happened upon a nice little class, one I’ll be sharing in full once I’ve fixed its obvious mistakes.
The guy who wrote it took excellent care with the multi-threading portion; it’s absolutely beautiful, although he could have gone further with it. He even wrote some shmop classes to help out, which unfortunately are lacking. Here’s where I’m at with my alterations to his shared memory class so far:
private $shmId;
private $readCache = NULL; // not used anywhere in this version yet

public function __construct() {
    if (!function_exists('shmop_open')) {
        throw new Exception('shmop functions not available on this PHP installation');
    }
    srand();
    // build a key from the file token, the PID and a random number so each run gets its own segment
    $memoryKey = ftok(__FILE__, 't').getmypid().rand();
    $this->shmId = shmop_open($memoryKey, "c", 0644, 5000);
    if ($this->shmId === false) {
        throw new Exception('Unable to create the shared memory segment');
    }
}

// read the whole array out, change one key, write the whole array back
public function set($key, $value) {
    $data = $this->readFromSharedMemory();
    $data[$key] = $value;
    if (!(boolean) shmop_write($this->shmId, serialize($data), 0)) {
        die("Failed to write");
    }
}

// returns false instead of throwing when the key hasn't been written yet
public function get($key) {
    $data = $this->readFromSharedMemory();
    if (!isset($data[$key])) {
        return false;
    }
    return $data[$key];
}

public function getAll() {
    return $this->readFromSharedMemory();
}

// pull the whole segment and unserialize it; an empty or unreadable segment gives an empty array
private function readFromSharedMemory() {
    $dataSer = shmop_read($this->shmId, 0, shmop_size($this->shmId));
    $data = @unserialize($dataSer);
    if (!$data) {
        return array();
    }
    return $data;
}

public function close() {
    @shmop_delete($this->shmId);
    @shmop_close($this->shmId);
}
I added the getAll() function, and removed an exception that would break the script if a key hadn’t been created yet; get() now returns false and the developer can handle that however they’d like.
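So in a worker, the caller can now just check for false rather than wrapping everything in try/catch; the SharedMemory class name below is only my assumption, since I’m not showing the class declaration here:

$shm = new SharedMemory(); // class name assumed, use whatever the class is actually declared as
$shm->set('worker_status', 'running');

if (($finished = $shm->get('finished_at')) === false) {
    // key hasn't been written yet, carry on instead of blowing up
}
print_r($shm->getAll()); // everything currently stored in the segment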
What I’m running into with this approach is that, because set() reads the whole block of shared memory, applies the change to the whole array, and then writes the whole thing back, processes hitting it at the same time clash with each other, and the more threads there are, the more you notice it. I wouldn’t expect a perfect system, but it shows up right at the end of the scripts: I have a benchmark class which returns each thread’s current status, and some come back “incomplete” because their data was overwritten by another process writing at the same time.
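One idea I’m toying with (not in the class yet) is wrapping that whole read-modify-write cycle in an exclusive lock so only one process touches the segment at a time. Here’s a rough sketch of set() using a plain flock() lock file; the lock file path is just a placeholder:

public function set($key, $value) {
    // grab an exclusive lock before reading so no other process can write in between
    $lock = fopen(sys_get_temp_dir() . '/shm-write.lock', 'c'); // placeholder path
    if ($lock === false || !flock($lock, LOCK_EX)) {
        die("Failed to acquire lock");
    }
    $data = $this->readFromSharedMemory();
    $data[$key] = $value;
    if (!(boolean) shmop_write($this->shmId, serialize($data), 0)) {
        die("Failed to write");
    }
    // let the next process in for its own read-modify-write
    flock($lock, LOCK_UN);
    fclose($lock);
}

The reads would probably want at least a shared LOCK_SH around them as well, otherwise a get() can still land in the middle of someone else’s write.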
Another issue is that the shmop_open() call in the constructor uses a static size for the shared memory; initially he had it set to 100 bytes, which isn’t room for much of anything. The bigger it gets, though, the higher the overhead. I’m thinking of adding some sort of function to destroy the shared memory segment and recreate it with a larger size when needed.
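Something along these lines is what I have in mind for growing on demand; it’s only a sketch, and it assumes the class keeps the key around in a $memoryKey property, which it doesn’t at the moment:

private function resize($newSize) {
    // copy the current contents out before tearing the segment down
    $current = shmop_read($this->shmId, 0, shmop_size($this->shmId));
    shmop_delete($this->shmId);
    shmop_close($this->shmId);
    // recreate the segment under the same key at the bigger size and put the data back
    // $this->memoryKey is assumed; the constructor would need to store it
    $this->shmId = shmop_open($this->memoryKey, "c", 0644, $newSize);
    shmop_write($this->shmId, $current, 0);
}

set() could then compare strlen(serialize($data)) against shmop_size() and call resize() before the payload outgrows the segment.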
I also think it might be worth creating separate sectors for each thread to write to, so they never clash with each other. What are your thoughts on both of these situations?
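For the per-thread sector idea, this is roughly what I’m picturing: each worker writes only to its own small segment, keyed off a shared base key plus its PID, so no two processes ever write to the same block. The key arithmetic and the 1024-byte size here are made up:

// inside each forked worker: a private segment nobody else writes to
$baseKey = ftok(__FILE__, 't');
$myShmId = shmop_open($baseKey + getmypid(), "c", 0644, 1024);
shmop_write($myShmId, serialize(array('status' => 'running')), 0);

// the parent rebuilds the same key from the child PID it got back from pcntl_fork()
$childShmId = shmop_open($baseKey + $childPid, "a", 0, 0);
$childData = @unserialize(shmop_read($childShmId, 0, shmop_size($childShmId)));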