Often we have this scenario:
- We have our database in production.
- We realise (or the client requests) schema changes.
- We make the changes locally.
- We need to push that new schema without losing data.
From what I've read so far, we can:
- Use a GUI to make this synchronization.
- Use some sort of version control (like Subversion) to track database changes.
- Create a database table to track changes.
- Use some sort of automated task when deploying (a bash script, I believe?).
A GUI is a nice approach because it's easy to use, but it's not automated and we cannot revert to previous "versions".
With version control we can keep different versions of our database schema, but it's complicated to understand and to build a workflow that handles the schema on one side and the data on the other.
The SQL table approach seems so difficult to understand that I can't even see its good and bad parts.
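In case it helps to make that approach concrete: as I understand it, the idea is a small tracking table (often called something like `schema_version`; the name and columns here are my own assumption, not a standard) that records which migration scripts have already run, so re-running the deploy never applies a change twice. A minimal sketch, using SQLite in memory as a stand-in for the real database:

```python
import sqlite3

# In-memory database stands in for the real production database.
conn = sqlite3.connect(":memory:")

# The tracking table itself: one row per migration already applied.
conn.execute("""
    CREATE TABLE IF NOT EXISTS schema_version (
        version    INTEGER PRIMARY KEY,
        applied_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def applied_versions(conn):
    """Return the set of migration versions already applied."""
    return {row[0] for row in conn.execute("SELECT version FROM schema_version")}

def apply_migration(conn, version, sql):
    """Run a migration only if this version has not been applied yet."""
    if version in applied_versions(conn):
        return False  # already applied; skipping means data is never touched twice
    conn.executescript(sql)
    conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()
    return True

# Migration 1 creates a table, migration 2 alters it.
apply_migration(conn, 1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
apply_migration(conn, 2, "ALTER TABLE users ADD COLUMN email TEXT")
# Re-running the deploy is harmless: nothing is applied twice.
apply_migration(conn, 2, "ALTER TABLE users ADD COLUMN email TEXT")
print(sorted(applied_versions(conn)))  # -> [1, 2]
```

So the good part seems to be idempotent, repeatable deploys; the bad part is that someone still has to write each migration script by hand.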
The automated bash task seems to be a very nice approach because, if well configured, it fits nicely into the deployment workflow. But it seems that we would have no control over old versions of our database.
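One way I imagine working around the "no old versions" problem is to have the automated task snapshot the database before it touches anything, so any old version can be restored by hand. A sketch of that idea (SQLite's backup API standing in for what `mysqldump` would do against MySQL; all names here are made up for the example):

```python
import sqlite3
import tempfile
import pathlib
import datetime

def snapshot(db_path, backup_dir):
    """Copy the whole database aside before running any migration,
    so an old version can always be restored manually."""
    backup_dir = pathlib.Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"pre-migration-{stamp}.db"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(dest)
    src.backup(dst)  # full copy; mysqldump would play this role for MySQL
    src.close()
    dst.close()
    return dest

# Demo with throwaway files:
workdir = pathlib.Path(tempfile.mkdtemp())
db = workdir / "app.db"
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
conn.commit()
conn.close()

copy = snapshot(db, workdir / "backups")
restored = sqlite3.connect(copy)
print(restored.execute("SELECT name FROM users").fetchall())  # [('alice',)]
```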
Truth be told, I don't see such an important need to keep the database under any version control system because, hopefully, it's not something we change every day, unlike other files in our application.
With all this, I'm inclined to look for a bash script that could, on deployment, do two things:
a) Retrieve the data from the production server (because that is where the most up-to-date data will be).
b) Update the remote schema to match the local one (because it is locally that we first change our schema).
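To show what I mean by step b), here is a sketch of syncing the remote schema to the local one without losing data, by generating only additive `ALTER` statements. I'm using SQLite in memory as a stand-in for both servers (a real setup would compare MySQL schemas, e.g. via dumps); the function names are my own invention:

```python
import sqlite3

def columns(conn, table):
    """Map column name -> declared type for a table."""
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({table})")}

def additive_diff(local, remote, table):
    """Emit ALTER statements for columns present locally but missing remotely.

    Only additive changes are generated, so no remote data is ever dropped.
    """
    local_cols, remote_cols = columns(local, table), columns(remote, table)
    return [
        f"ALTER TABLE {table} ADD COLUMN {name} {ctype}"
        for name, ctype in local_cols.items()
        if name not in remote_cols
    ]

# Local dev database has the new schema...
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# ...while "production" still has the old one, plus live data to preserve.
remote = sqlite3.connect(":memory:")
remote.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
remote.execute("INSERT INTO users (name) VALUES ('alice')")

for stmt in additive_diff(local, remote, "users"):
    remote.execute(stmt)
remote.commit()

print(columns(remote, "users"))  # now includes 'email'
print(remote.execute("SELECT name FROM users").fetchall())  # data intact: [('alice',)]
```

Of course this only covers added columns; renames and drops are exactly where it gets hard, which is why I'm asking.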
What do you think? What do you suggest? Is there a standard way of dealing with this (seemingly common) scenario?