In case anyone comes across this in the future, this is what I ended up doing to sort this out:
I have a script that scans the directory for new files, reads the XML data, and inserts it into a MySQL database. It then moves each file into another directory so it won't be processed again. The script runs as a cron job that I set up through the site's cPanel.
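To give a rough idea of the scan-insert-move loop, here's a minimal sketch in Python. The XML layout (`<record id="..." updated="...">` with a `<title>` child), the table schema, and the function name are all illustrative assumptions, and sqlite3 stands in for MySQL just to keep the sketch self-contained:

```python
# Hypothetical sketch of the cron-driven import loop: scan a directory,
# load each XML file into the database, then move the file aside so the
# next cron run doesn't process it again. Schema and XML layout are assumed.
import shutil
import sqlite3
import xml.etree.ElementTree as ET
from pathlib import Path

def process_new_files(incoming: Path, processed: Path, conn: sqlite3.Connection) -> int:
    """Import every *.xml file in `incoming`, then move it to `processed`."""
    count = 0
    for xml_file in sorted(incoming.glob("*.xml")):
        root = ET.parse(xml_file).getroot()
        conn.execute(
            "INSERT OR REPLACE INTO records (id, updated, title) VALUES (?, ?, ?)",
            (root.get("id"), root.get("updated"), root.findtext("title")),
        )
        conn.commit()
        # Moving the file out of the scan directory is what prevents reprocessing.
        shutil.move(str(xml_file), str(processed / xml_file.name))
        count += 1
    return count
```

The move-after-commit ordering matters: if the script dies mid-run, an unmoved file just gets picked up again on the next cron invocation.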
While the script is reading an XML file, it checks whether the file's unique id is already in the database. If it is, it compares the datetime in the database to the XML file's datetime: if the XML file is newer, it updates the existing database record; if not, it does nothing.
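That "only update if newer" check might look something like this. Again, this is a sketch, not the actual script: the function name and schema are made up, the datetimes are assumed to be ISO-format strings, and sqlite3 stands in for MySQL:

```python
# Illustrative upsert: insert a new record, update an existing one only
# when the incoming datetime is newer, otherwise leave the row alone.
import sqlite3
from datetime import datetime

def upsert_record(conn: sqlite3.Connection, uid: str, updated: str, title: str) -> str:
    row = conn.execute("SELECT updated FROM records WHERE id = ?", (uid,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO records (id, updated, title) VALUES (?, ?, ?)",
                     (uid, updated, title))
        action = "inserted"
    elif datetime.fromisoformat(updated) > datetime.fromisoformat(row[0]):
        conn.execute("UPDATE records SET updated = ?, title = ? WHERE id = ?",
                     (updated, title, uid))
        action = "updated"
    else:
        action = "skipped"  # database copy is the same age or newer
    conn.commit()
    return action
```

The datetime guard means an old XML file that reappears in the scan directory can never clobber a fresher database record.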
I then use the database throughout the site as I usually would, for displaying records, search functions, etc.
I had already thought of this as a solution myself, but it seemed a bit strange to me - why use XML files at all if the information is just going to end up in a database? Why not skip the middleman? I still don't know the answer to that (any ideas, anyone?). However, I got confirmation from one of the developers who works on the CMS that produces the XML files, and he told me this method is how most people use their system, so I went ahead and implemented it with confidence.