Usually, you want/need this behavior; otherwise you may end up with corrupted, stale, or out-of-sync session data. If you need concurrent reads for performance (doubtful; how often does a user's browser load two pages which both use sessions simultaneously?), you can just take care to call session_write_close() manually in those specific scripts as soon as you're done modifying session variables, if there's still quite a bit of script execution left to go.
You're free to start > save, restart > save the same session multiple times per script:

session_start();
// do stuff, read vars, etc...
session_write_close();
// do something that takes a long time...
session_start();
// now update session vars with the result
This way you aren't locking other scripts out of the session data while you perform some lengthy task (for example, any kind of network I/O can potentially take a long time, and the same goes for serving downloads). Of course, if the other scripts that could be running at this moment need the combined result of the first and second updates to the variables, you wouldn't want to do this; you need them to wait.

An example: suppose you serve large downloads from a PHP script and use session variables to control access to the files. If it takes 5 minutes to download the file, and you don't close the session before you start dumping the data, the user won't be able to load any other pages which use sessions for those 5 minutes; the pages will just hang. So you would start the session > read the vars to authenticate > close the session > output the data. Then, if you need to update a session var again, for example to keep track of the filenames downloaded this visit, you can just start the session again after the data has finished being output.
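A sketch of that download flow. The auth check, session keys, and file path here are hypothetical stand-ins; note that reopening the session after output assumes the "headers already sent" warning is acceptable (the session cookie was already set by the first session_start()):

<?php
session_start();  // acquires the session lock

// authenticate using session data (hypothetical check)
if (empty($_SESSION['user_id'])) {
    http_response_code(403);
    exit;
}

session_write_close();  // release the lock before the long output

// stream the file; this may take minutes, but other requests
// from the same user are no longer blocked on the session lock
$file = '/path/to/protected/file.zip';  // hypothetical path
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($file));
readfile($file);

// reopen the session afterwards to record the download
session_start();
$_SESSION['downloaded'][] = basename($file);
session_write_close();
?>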
You almost always want this locking behavior. Without it, or if you program without considering the implications of not having it, you're more likely to introduce the kind of subtle bugs your users complain about but you can never seem to reproduce.