I don’t know what a crawler would do if the read failed before it finished. Scrap the attempt entirely? Use what it managed to get? I imagine it would just try again. In any case, since the file can hold at most 50,000 URLs in the urlset, I don’t think that’s a major concern.
The optional tags are lastmod, changefreq, and priority.
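For reference, a single url entry in the urlset using all three optional tags looks something like this (the hostname and values here are made up):

```xml
<url>
  <loc>https://example.com/page.html</loc>
  <lastmod>2024-01-15</lastmod>
  <changefreq>weekly</changefreq>
  <priority>0.8</priority>
</url>
```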
I think these are more likely to have an effect than the actual position of a URL entry within the file.
Pure “gut feeling” with no basis in fact, but I think listing the URLs in a sequence that reflects a typical recursion would be best, i.e. instead of jumping back and forth from folder to folder and depth to depth, put them in the order a crawl would take.
That means putting a new file where it fits into the tree. Not so easy if you’re editing the sitemap by hand, I guess, but AFAIK that’s how the automatic sitemap generators do it.
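A toy sketch of that ordering idea, assuming you already have the URLs as a plain list: sorting by path segments groups each folder’s entries together, roughly the order a depth-first crawl would visit them. (The function name and example URLs are mine, not from any sitemap tool.)

```python
from urllib.parse import urlparse

def crawl_order(urls):
    # Split each URL's path into segments so that entries sharing a
    # folder prefix sort next to each other, approximating the order
    # a depth-first crawl of the site tree would take.
    return sorted(urls, key=lambda u: urlparse(u).path.split("/"))

urls = [
    "https://example.com/blog/2024/post.html",
    "https://example.com/about.html",
    "https://example.com/blog/index.html",
]
for u in crawl_order(urls):
    print(u)
```

With this key, a file added later simply sorts into place next to its siblings, which is the “put it where it fits into the tree” behavior described above.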