Results 1 to 22 of 22
  1. #1
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446

    Automation, deployment etc, and development environments

    Excuse the vague subject for the thread; I don't really know what to call it. To summarize without reading the waffle / motivations below: I basically want to know how people go about development in terms of setting up a system for automating various tasks, deploying projects, and so on.

    It's interesting to see what sort of practical things people do during development on projects. I'm looking at putting together some system that enables a continuous-integration type of approach, where testing can be done before an automated upload of whatever code has changed.

    I realize people like Marcus etc. have systems in place for this, such as deploying stuff with RPMs.

    I'm keen to see what different things people do and generally discuss / exchange ideas on the subject. I'd like to know what has worked for others, and if you read the waffle below you can see some of the motivations to move away from the old-fashioned way I do things at the moment. Maybe some people can suggest better or other ways than what I'm thinking of doing. What I'm after is practical advice. There are plenty of threads covering the overall theory and why you should use versioning, test suites, and automation scripts, and I've identified why I need them below. I'm more interested in how people set these things up to work together, and the practical side of doing so.

    <waffle>

    (only read below if you have lots of time / bored :P !)

    Basically, I'm getting to the point where I cannot progress further without changing my approach to development in one way or another. Obviously, when I started programming 2 or 3 years back, my initial approach was to grab an Apache/PHP/MySQL bundle, install it locally on my Winblows box, develop everything locally, finish a site, and upload it with FTP. I'm sure this is what many people did when they started off.

    Obviously, times have changed. I now end up developing under a number of platforms (Mac OS X at work, Windows at home, and a few Linux boxes in between). Nor do I work alone. I've managed to survive just about with no versioning system, mainly because my boss and I work on separate things at any given point in time, and I generally do a large chunk of the coding. Projects have changed too: like many here, I find projects are more 'continuous' in nature, so testing and dummy-proof deployment are important in order to reduce the risk involved in doing updates. Working both at home and in the office is a push for some sort of rsync/CVS/SVN type system to sync files. Obviously FTP is far from ideal; I tend to occasionally forget to upload some changed files and sometimes go round in circles for 20 minutes at a time wondering what's cocked up.

    Anyway, I'm looking at something like a continuous-integration type of environment, as stated above. We have a Linux box in the office, which we develop on, and I'm looking to import bits into Subversion there. Each box we use will probably have Apache / PHP / MySQL installed, so we can check out from SVN, do what needs doing, and commit back to the main repos. Ideally, I need a system that simplifies the process so I can make a number of changes, commit them to the development server, run some unit tests, and then automatically upload the changes if I'm happy with the results.

    On top of this, I would like a system where our office server downloads backups of the database and rsyncs any new media uploaded to the live site during the day. Other needs are easy ways to administer a server (set up a domain name, FTP, a MySQL or PostgreSQL DB, permissions, etc. all in one easy swoop, where we just specify a few details and the rest is done).

    Another task I want to simplify is building PHP. We have a CentOS 4 box here in the office, and RHEL servers. Ideally it would be nice to start using RPM to deploy PHP/MySQL/etc. in a quick, easy way. Currently I faff about a bit, because I run all the ./configure && make && make install steps manually for all the packages tied to PHP, which is not as straightforward as it sounds: the ./configure options differ for each package, and some packages have slightly different make targets. Obviously, I need to educate myself more about building .spec files and the other bits tied to rpmbuild.
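
    In the meantime, the per-package ./configure dance is easy to wrap in a small driver script while working out proper .spec files. A minimal sketch; the package directories and flags below are placeholders, not a known-good build recipe, and DRY_RUN=1 just prints what would run:

    ```shell
    #!/bin/sh
    # Per-package build driver: one "dir|configure flags" line per package.
    # The names and flags here are examples only; substitute your own.
    PACKAGES='
    php-5.0.4|--with-mysql --with-apxs2=/usr/sbin/apxs
    libmcrypt-2.5.7|--disable-posix-threads
    '

    build_all() {
        # With DRY_RUN=1 the commands are printed instead of executed,
        # which also makes the script safe to test.
        echo "$PACKAGES" | while IFS='|' read -r dir flags; do
            [ -z "$dir" ] && continue
            for cmd in "./configure $flags" "make" "make install"; do
                if [ "${DRY_RUN:-0}" = 1 ]; then
                    echo "cd $dir && $cmd"
                else
                    (cd "$dir" && eval "$cmd") || exit 1
                fi
            done
        done
    }
    ```

    Running `DRY_RUN=1 build_all` first is a cheap sanity check that the per-package flags ended up where you expect.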

    I really have put off doing proper unit testing for too long now. I often end up slapping in the odd var_dump or bit of test code (like setting a variable here and there to a fixed value). This is not ideal, as I have once or twice accidentally left this code in and uploaded it to a production site. Using separate tests would avoid this. Another thing that's come to mind is to have a DEBUG constant, and wrap any var_dumps etc. I do use in "if (DEBUG)" blocks to stop them running on the production server if I forget to clean out any bits of code like this.
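
    A complementary safeguard to the DEBUG constant is a pre-upload check that greps for stray debug calls, so they never reach production in the first place. A rough sketch (the directory handling and patterns are illustrative only):

    ```shell
    #!/bin/sh
    # Refuse to deploy if any PHP file under the given directory still
    # contains a var_dump() or print_r() call. Run this before rsync/FTP.
    check_debug_left() {
        if grep -rn --include='*.php' -E 'var_dump|print_r' "$1"; then
            echo "Refusing to deploy: debug calls found above." >&2
            return 1
        fi
        return 0
    }
    ```

    Wiring this into the upload script means a forgotten var_dump aborts the deploy instead of landing on the live site.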

    </waffle>

    Anyway, I hope I've explained myself OK. If you've read this far then congratulations!

  2. #2
    <? echo "Kick me"; ?> petesmc's Avatar
    Join Date
    Nov 2000
    Location
    Hong Kong
    Posts
    1,508
    Short post...

    I think this will be of help to you: http://www.edgewall.com/trac/

  3. #3
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Hi.

    We've had quite a few threads on this lately, so I'll cross-link them before we get a whole bunch of repetition. E.g...
    http://www.sitepoint.com/forums/showthread.php?t=257636
    http://www.sitepoint.com/forums/showthread.php?t=134487

    I would recommend the Pragmatic Bookshelf set...
    http://www.pragmaticprogrammer.com/bookshelf/

    The CVS book is excellent.

    You also want various methodology books to help you interface with the business. Try the Scrum process, especially the backlog system...
    http://www.amazon.com/exec/obidos/tg...books&n=507846

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  4. #4
    SitePoint Enthusiast mjlivelyjr's Avatar
    Join Date
    Dec 2003
    Location
    Post Falls, ID, US
    Posts
    92
    We (my company) are in the process of looking for automated deployment solutions as well.

    One of the key factors for me (like you mentioned) is the ability to easily automate all aspects of deploying a site: getting the latest version, establishing a testing environment, setting up the database, creating the file structure, setting permissions, testing, and finally moving the site onto the production server.
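
    Those steps can be strung together as a single script. Everything below (repository URL, database name, docroot) is a made-up example rather than a working setup, and DRY_RUN=1 prints the steps instead of running them:

    ```shell
    #!/bin/sh
    # Either execute a command or, with DRY_RUN=1, just print it.
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

    deploy() {
        stage=$(mktemp -d)   # staging area (created even on a dry run)
        run svn export http://svn.example.com/mysite/trunk "$stage/site"
        run mysql -e "CREATE DATABASE IF NOT EXISTS mysite_test"
        run chmod -R a+rX "$stage/site"
        run php "$stage/site/tests/all_tests.php"   # a real script should abort here on failure
        run rsync -a --delete "$stage/site/" /var/www/mysite/
    }
    ```

    The point of the `run` wrapper is that the whole pipeline can be rehearsed end to end before it is ever pointed at a production box.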

    I am an avid supporter of FreeBSD and as such I am fairly comfortable with solutions like CVSup and, of course, 'make'. In my mind, the perfect solution for me would be a version of these two programs changed/optimized specifically for the deployment of websites and databases.

    In the process of looking for such programs I ran across these two libraries:

    Phing: http://phing.info/wiki/index.php

    Rephlux: http://rephlux.sourceforge.net/

    Phing is basically a PHP port of Apache Ant. It does almost anything make can do and, from what I understand, once you get used to how it works it becomes very easy to build automated install tools with it. This is honestly half the battle. Once you have the ability to automate an install (complete with testing, database setup, and filesystem setup), the only thing left is tying that in with a way to pull the most recent copy of your app (using something like CVS, SVN, or even pre-built tarballs).
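
    For a flavour of what a Phing build file looks like, here is a hypothetical sketch; the project name, target names, paths, and the test command are all made up, so check the Phing documentation for the exact task attributes before relying on it:

    ```xml
    <?xml version="1.0"?>
    <!-- Hypothetical build.xml sketch; names and paths are examples,
         not a tested configuration. -->
    <project name="mysite" default="deploy">

      <!-- Run the test suite; fail the build if it fails. -->
      <target name="test">
        <exec command="php tests/all_tests.php" checkreturn="true"/>
      </target>

      <!-- Copy the PHP sources into the web root once tests pass. -->
      <target name="deploy" depends="test">
        <copy todir="/var/www/mysite">
          <fileset dir="src">
            <include name="**/*.php"/>
          </fileset>
        </copy>
      </target>

    </project>
    ```

    The Ant-style `depends` attribute is what gives you the "no deploy without green tests" ordering for free.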

    Rephlux is a little (okay, a lot) more geared towards continuous integration. The only downside is that it is still very much in its infancy and is fairly narrow in the programs and libraries it supports. Nonetheless, the ideas are all there and it's only a matter of time before it supports things like SVN and whatnot.
    Mike Lively
    Digital Sandwich - MMM MMM Good

  5. #5
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446
    Oh, I've heard of Trac; I'll look into it. I think Hans is now using it for managing Propel.

    I've been a bit put off by Rephlux at the moment, mainly as I want to try to use SVN, seeing as it has a small learning curve coming from CVS and everyone seems to rave about it.

    For some reason Phing never came to mind. It should have, seeing as I have played about with Propel.

    With regards to http://www.sitepoint.com/forums/showthread.php?t=134487, Marcus: I did see that a while back when it was brought up, and was aware of it. However, I was trying to be more specific with regards to points 2 and 9 in your main post there. I'm interested in the way people have gone about these sorts of things, like what they use RPM for, and some info (i.e. websites people have found down-to-earth on the topics; there are loads of sites on RPM building, but few really cover things like spec files in an easy-to-digest way for busy people who don't have all the time in the world to refine their skills at being a Linux geek).

    I'm keen to get a system where I can roll out RPMs easily and quickly for things like PHP / MySQL / Apache / PostgreSQL etc. I often like to grab the source, mainly because there are times when PHP 5 has the odd bug (not so much now), and I find that by nabbing a CVS build I can get round the issue. However, compiling PHP for the first time on a box can be a real pain if you want all the other extensions that go with it.

    How do people keep sites synchronised? At the moment I'm still using FTP, mainly. It's a nightmare from the POV that I often forget to upload the odd file, so from time to time I break the odd feature (or make a new one, depending how you look at things!). I want a system to regularly back up data from a live site and dump it to the server in the office at 3am. I'm looking at rsync + cron + SSH with keys for that. How do other people do this? At the same time, I often need to upload code in the other direction (office server to production server). rsync is a possibility here, but I would like an automated system where I can get some tests to run on the local office server, and then have the remote server check out the latest version directly from SVN if no critical tests fail.
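
    For the 3am pull, one way is to build the rsync-over-ssh command per site and drive it from cron. The host names, key path, and directory layout below are examples, not a tested setup:

    ```shell
    #!/bin/sh
    # Print (rather than run) the pull command for one site, so the
    # string assembly is easy to check without touching the network.
    backup_cmd() {
        site=$1
        echo "rsync -az -e 'ssh -i /home/backup/.ssh/backup_key'" \
            "backup@$site:/var/www/$site/media/ /backups/$site/media/"
    }

    # Nightly crontab entry on the office server, e.g.:
    #   0 3 * * *  /usr/local/bin/pull_backups.sh >> /var/log/pull_backups.log 2>&1
    ```

    With a passphrase-less key restricted to the backup user, the whole thing runs unattended; the `-a` flag keeps permissions and timestamps intact, and `-z` compresses over the wire.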

  6. #6
    SitePoint Guru
    Join Date
    Nov 2002
    Posts
    841
    I'm using rsync, driven by a shell script, one for each site that I maintain. I've also set up a directory structure to support this. For example, I am using a very minimal footprint inside DocumentRoot, with a very small front controller and media directories that are aliases into my backend directory.

    The script does not update the Document Root directories automatically.
    Each site also has a separate configuration and data directory which is not automatically updated.

    The biggest problems I usually have are updating the database tables. I'd like to have schema diff tools to do this. For example, to compare a schema file in SQL format with another SQL file and generate the SQL that will upgrade from one version to another. We had one at my last job, but I don't know of any freely available tools like this.
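
    Short of a real schema-diff tool, normalising two SQL schema dumps and running plain diff over them gets surprisingly far: it won't generate the upgrade SQL, but it shows at a glance which lines differ between versions. A crude sketch:

    ```shell
    #!/bin/sh
    # Diff two SQL schema files after normalising them: strip "--"
    # comments, collapse runs of whitespace, and drop blank lines.
    schema_diff() {
        fa=$(mktemp); fb=$(mktemp)
        norm() {
            sed -e 's/--.*//' -e 's/[[:space:]][[:space:]]*/ /g' \
                -e 's/^ //' -e 's/ $//' "$1" | grep -v '^$'
        }
        norm "$1" > "$fa"
        norm "$2" > "$fb"
        rc=0
        diff "$fa" "$fb" || rc=$?
        rm -f "$fa" "$fb"
        return $rc
    }
    ```

    It is line-based, so reordered columns show up as spurious differences, but as a pre-release sanity check it costs nothing.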

    Every once in a while, new fields have to be pre-populated with something non-trivial. For these, I write a small upgrade program.

    I think I am going to start exploring using a private channel and the PEAR installer for performing the upgrades, with shell scripting for generating the packages from CVS. I need CLI DB diff tools before this can work, though.

  7. #7
    SitePoint Zealot
    Join Date
    Feb 2003
    Posts
    156
    I wonder if anybody has had any experience with PEAR's: http://pear.php.net/package/PHP_Archive/

    Apparently it can package applications or libraries as a single file to ease deployment. Of course, you still have to take care of db-changes...

    edit: OK, I missed it when I first read about it; it seems to be only a couple of days old:
    http://www.pixelated-dreams.com/archives/131-PHAR-Is-Here!.html

  8. #8
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446
    Yea, I've just started getting rsync going for backing up the bulk of the document root. I've left this running on a cron job, under a maintenance / backup user. This works well, as one site has a huge media directory that is about a CD's worth in size.

    I got round to playing with a way to deal with databases the other day. Currently I am using MySQL's replication system for this; due to its asynchronous fashion, it seems ideal for the job. The real issue I have with MySQL replication is that a slave can only work from one master, so you cannot set up a slave instance of MySQL to aggregate a number of different DBs. I'm going to set up a little script to get round this, I think, which reads a host / user / pass and log position etc. for each site from a local table. It will issue a CHANGE MASTER statement and START SLAVE, then poll every few seconds until the slave is only a couple of seconds behind the master, before issuing STOP SLAVE, saving the new log position, and moving on to the next DB.
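
    The rotation loop might look something like this. The SQL-building part is pulled out into a function so it can be tested on its own; the actual mysql invocation and polling are left as comments, since they need a live server:

    ```shell
    #!/bin/sh
    # Build the CHANGE MASTER statement for one site from its stored
    # credentials and saved binary-log position.
    change_master_sql() {
        host=$1 user=$2 pass=$3 log_file=$4 log_pos=$5
        printf "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s; START SLAVE;" \
            "$host" "$user" "$pass" "$log_file" "$log_pos"
    }

    # For each row in the local sites table, roughly:
    #   change_master_sql "$h" "$u" "$p" "$f" "$pos" | mysql
    # then poll SHOW SLAVE STATUS until Seconds_Behind_Master is small,
    # issue STOP SLAVE, record the new log position, and move on.
    ```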

    MySQL ships a mysqlbinlog program that can read its binary logs from a given log position and convert them back into SQL statements. I'm sure with this you could make an SQL file which pretty much does what you're asking for. Of course, I'm assuming you are using MySQL and not something else like PostgreSQL.

  9. #9
    SitePoint Guru
    Join Date
    Nov 2002
    Posts
    841
    Wouldn't it be a bit dangerous to put this on a cron job? What if it automatically transfers a work in progress, rather than a "releasable increment"?

    When I was working on ERP stuff, we used an installation script builder. The install builder was integrated with our source code control so that it knew what had changed. Each releasable version got tagged so we could re-create the release at any time.

    We also checked in our database structures and some of our data, so the install builder could do diffs between versions and generate the code to update the database at a plant running our software.

    The installer was then put through a QA process. We had QA people and a testing room with dedicated server and client machines. The testing server could be loaded with data from any of dozens of plants. We also had software that simulated client activity such as might occur at a plant during normal operations - calibrated by network sniffing and database monitoring at actual running facilities.

    We also had a separate dedicated development server and many people had private plant images on their own hard drives. The installation scripts made it easy to set these up.

    Both before and after the installer was run at a plant, a backup image of the production server at the plant would be taken. (Each plant had their own test environment and many independently verified our updates before applying them.) These backups were sent up to us and formed the basis of our testing server installation. Each plant, of course, had independent daily offsite backups.

    I would like to get back to this level with my web stuff. One difficulty is that I don't have a way to take a diff of two different table structures, let alone the data.

    I'm using phpMyAdmin for making structural changes in my test environment, which is also my development environment, but frankly with every release, phpMyAdmin seems to get more difficult to use.

    As part of my automated testing process, each table is stored in two SQL files (text): table.struct.sql and table.data.sql. It would be nice to create installation scripts by using diff on two different versions of these files.

    What would be nice is a structure-changing utility integrated with my deployment process: one that knew the structure needed to be changed not only in the database, but in the SQL text files I use to describe it; one that would not only transform the values in a column or create new values upon a structure change, but would generate a script that could do the same change at any time or place.

  10. #10
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446
    The replication script in the cron job is to back up the live database on the live server. It replicates the data on the live server back onto the server in the office, under a separate database from the testing database. This allows our testing server to have a copy of the live data more or less on the spot, without a HUGE dump process (likewise we can copy the backup DB and do certain operations on it, etc.). The other good thing about replicating live data to a separate backup database on the test server is that it makes dumps much easier to handle. The idea is that we get the replication to sync up every night, after which it runs the backup script, which writes out an SQL dump for that day and then deletes some of the excessively old dumps. Basically, a dump is kept for every day of the last 2 weeks; after 2 weeks we keep only Monday backups; and after several months we keep only monthly backups, to keep the server uncluttered, as you don't often refer to excessively old backups.
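
    That retention rule is easy to capture as a small helper that decides whether a given dump date should be kept. Dates are YYYY-MM-DD and "today" is a parameter so the rule can be tested against fixed dates; it assumes GNU date, and the 90-day Monday window and first-of-month rule are my approximations of the policy described:

    ```shell
    #!/bin/sh
    # keep_backup DUMP_DATE TODAY: exit 0 to keep the dump, 1 to delete it.
    keep_backup() {
        d=$1 today=$2
        age=$(( ( $(date -u -d "$today" +%s) - $(date -u -d "$d" +%s) ) / 86400 ))
        dow=$(date -u -d "$d" +%u)   # day of week, 1 = Monday
        dom=$(date -u -d "$d" +%d)   # day of month
        [ "$age" -le 14 ] && return 0                      # daily for two weeks
        [ "$dow" -eq 1 ] && [ "$age" -le 90 ] && return 0  # Mondays for ~3 months
        [ "$dom" = "01" ] && return 0                      # first-of-month kept long term
        return 1
    }
    ```

    The nightly backup script can then loop over the dump directory and delete anything `keep_backup` rejects, instead of hard-coding the pruning logic inline.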

    I assume you thought I was using this replication script the other way, i.e. to replicate the testing server's DB onto the live site. If I make any updates to the testing database, I will generally make a note of any DDL SQL I run, and apply that manually as I need to. This is probably my main weak point as it stands, as it's the most prone to human error. However, I'm not going to try to automate that too much, because schema changes are rare, even compared to source code updates.

    This means we have a testing database on the office server box, and a copy of the live database which is synced daily. Of course, we can sync the live DB as we need, and it only takes a short amount of time before we have a clean snapshot of the live database to do anything we need on. This works nicely, as large reports can be run locally without stressing out the live webserver.

    Like you, I'm using phpMyAdmin. I tend to just copy and paste any DDL I use into a text file, so I can paste it on the live server before I rsync any updated scripts.

    Like you, I find the real PITA factor in managing this sort of thing is tracking schema changes.

    Maybe it would be a good little project to have a tool that inspects two databases for structural changes and then generates DDL based on the differences. I wouldn't imagine it being a massive operation.

  11. #11
    SitePoint Zealot
    Join Date
    Jul 2004
    Location
    The Netherlands
    Posts
    170
    Quote Originally Posted by MiiJaySung
    Maybe it would be a good little project to have a tool that inspects two databases for structural changes and then generates DDL based on the differences. I wouldn't imagine it being a massive operation.
    There's a MySQL diff tool written in PHP at http://www.mysqldiff.org/. Not used it myself.

  12. #12
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Hi...

    Quote Originally Posted by MiiJaySung
    Maybe it would be a good little project to have a tool that inspects two databases for structural changes and then generates DDL based on the differences. I wouldn't imagine it being a massive operation.
    These types of operations inevitably involve human input. For example, when creating a new column with new data, where does that data come from? I think fully automatic solutions are a pipe dream. Suppose we take a step back and come up with a way to make the current situation better?

    I think the real problem with data porting, and one I have been trying to tackle on and off, is coming up with a data porting language. I mean a full programming language in which you can write your own extensions. Every time you write a porting script, it feels like you are starting from scratch all over again (and I've written too many by now). Thus your porting costs stay constant; it's difficult to leverage past behaviour. A language would allow reuse of previous changes, especially if domain-specific extensions could be added.

    E.g.
    Code:
    new table Sales
    move column "sales" from Staff to Sales with back key "staff"
    rename "sales" in Sales to "amounts"
    This would keep track of keys, etc. You could now expand the language into...
    Code:
    function split_table($old, $new, $columns) {
        new table $new
        foreach($columns as $column) {
            move column $column from $old to $new with back key $old
        }
    }
    I am not sure if all of this is feasible, but I am attempting this at the object level (with O/R mapping). We don't data port enough for me to push the project forward though.

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  13. #13
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446
    I certainly see your point, Marcus. I was thinking along similar lines: there is no easy system to handle DB changes, even with diff tools, when you have a testing database and a live one. The two change asynchronously with one another, and therefore no tool will automatically be able to track / merge certain changes in your development / test DB into a production one.

    At the same time, the MySQL diff tool will come in handy. It will be a good way to see the differences between the two databases and make sure both are more or less in sync schema-wise when updates are made to the live DB, and therefore help to some degree with updating a live DB.

    It would be interesting to know what big companies like eBay / PayPal do when they have to update their systems. Companies like that need to operate 24/7 flawlessly, and therefore must have very extensive rules for deploying updates to ensure they don't affect any existing transactions.

  14. #14
    SitePoint Guru
    Join Date
    Oct 2001
    Posts
    656
    It would be interesting to know what big companies like eBay / PayPal do when they have to update their systems.
    How about careful planning, and writing a script that implements the schema changes on a test server + test database, then, after a successful result, executing that script on the live server + database at a time when traffic is low? Even those big companies can't have 100% uptime; some downtime is necessary for things like schema changes, however small that time may be.

  15. #15
    SitePoint Guru
    Join Date
    Oct 2001
    Posts
    656
    new table Sales
    move column "sales" from Staff to Sales with back key "staff"
    rename "sales" in Sales to "amounts"

    I am not sure if all of this is feasible, but I am attempting this at the object level (with O/R mapping). We don't data port enough for me to push the project forward though.
    That sounds very interesting. It wouldn't be that hard to write a parser that would parse commands like that and translate them to vendor-specific SQL, would it? The problem, I think, is that the language itself would have to be quite extensive to be able to describe a large percentage of possible schema changes.

    Say we could develop something: a program that executes schema changes defined in some language file. The next logical step would then be a program that could create such a file by comparing two .sql schema definitions. I'm not sure if such a thing is possible, because there are theoretically many ways in which you can transform a schema from A to B.

    It would also be nice if something like Phing supported a tool like this.

    Do I sense a new SourceForge project here?

  16. #16
    Non-Member
    Join Date
    Jan 2003
    Posts
    5,748
    It wouldn't be that hard to write a parser that would parse commands like that and translate them to vendor-specific SQL commands, would it? The problem is, I think, that the language itself would have to be quite extensive to be able to describe a large percentage of possible schema changes.
    The number of vendors and their specifics would, I'd like to think, be an unknown entity at the initial phase, yes? So, assuming this, wouldn't it be better to develop a general-purpose schema, and then have a specific schema for each given vendor sitting below the general schema, if I make sense?

    So, as and when required, you'd build the vendor schema there and then. You could therefore abstract a general parser as well, and script a vendor-specific parser as and when required too, no?

    That way, the vendor schema and parser are both independent of the abstracted parser and its (general) schema, if I'm making any sense to anyone?

  17. #17
    <? echo "Kick me"; ?> petesmc's Avatar
    Join Date
    Nov 2000
    Location
    Hong Kong
    Posts
    1,508
    How do you keep a development version and active version in sync? I.e. I develop scripts for my website and modify them on my PC, then manually upload all the affected files. Is there a way to do this automatically, by saying something like sync my_website? Also, how do you deal with database server changes, URL changes, etc.?

    It seems like it's too much to do automatically, or at least scripts need to be written to change all necessary data before syncing.

  18. #18
    Resident Java Hater
    Join Date
    Jul 2004
    Location
    Gerodieville Central, UK
    Posts
    446
    Quote Originally Posted by petesmc
    How do you keep a development version and active version in sync? I.e. I develop scripts for my website and modify them on my PC, then manually upload all the affected files. Is there a way to do this automatically, by saying something like sync my_website? Also, how do you deal with database server changes, URL changes, etc.?

    It seems like it's too much to do automatically, or at least scripts need to be written to change all necessary data before syncing.
    Umm, well, that's exactly what I was trying to get at when I made the first post. However, I didn't explain it very well, as I rambled on a lot.

    The issue really is dealing with database changes, as you can see above. For simple things, a straightforward rsync with scripts works, though you need a bit more if you have complex permissions to deal with and a number of people working on the project. This is where versioning systems like CVS, SVN, etc. come into play. Likewise, it's where deployment scripts can be handy.

    Anyway, I'm playing about with Trac on our main office server box. It seems a complete pain in the backside to install, with all the Python stuff it wants (template stuff, SWIG + the other SVN stuff to compile that links to SWIG, plus all the SQLite junk). Anyway, I'll finish the rest on Monday. Trac itself looks a nice system.

  19. #19
    SitePoint Wizard Ren's Avatar
    Join Date
    Aug 2003
    Location
    UK
    Posts
    1,060
    I think something along the lines of Microsoft's SQL Server Data Transformation Services would be nice, which is basically an Ant/Phing for RDBMSs.

    http://msdn.microsoft.com/library/de...s_overview.asp

    I'm not sure a new language would be useful; after all, SQL already does what we want to do, and probably better than some non-set-based language would.

    Code:
    CREATE TABLE Sales(sales ... , staff ... );
    INSERT INTO Sales SELECT sales, staff FROM Staff;
    ALTER TABLE Sales ADD FOREIGN KEY (staff) REFERENCES Staff(staff),
    			RENAME COLUMN sales TO amounts;
    ALTER TABLE Staff DROP COLUMN sales;

  20. #20
    SitePoint Guru silver trophy Luke Redpath's Avatar
    Join Date
    Mar 2003
    Location
    London
    Posts
    794
    Quote Originally Posted by MiiJaySung
    Anyway, I'm playing about with Trac on our main office server box. It seems a complete pain in the backside to install, with all the Python stuff it wants (template stuff, SWIG + the other SVN stuff to compile that links to SWIG, plus all the SQLite junk). Anyway, I'll finish the rest on Monday. Trac itself looks a nice system.
    I also found Trac a nightmare to set up on a Win 2K server, and in the end gave up as I couldn't get it working properly.

    Tried again the other week on our Debian box...

    Code:
    apt-get install trac
    And that was about it... hey presto! It still needed some configuration and changes to the Apache setup, and I've yet to get a good multiple-project setup working (I need to move over to Apache 2, I think, but can't afford to bring down another service on the Debian box relying on Apache), but it was so simple. I wish Windows had something like Aptitude...

  21. #21
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Hi...

    Quote Originally Posted by Ren
    I'm not sure a new language would be useful; after all, SQL already does what we want to do, and probably better than some non-set-based language would.
    There are two problems with the SQL approach that used to drive me insane. The first is the verbosity of some of the set-based contortions; that I can live with. The second, the lack of extensibility of SQL, is the main problem. I didn't make it clear, but that second function definition was supposed to be user-defined. I want something I can build on, to stop repeating the same drudge each time.

    Another problem I have is that I tend to use O/R tools for the stuff that changes a lot, and therefore the stuff that needs porting a lot too. As my schema for this is in XML, I am looking at XSLT as the language engine while I formulate some ideas. I only tackle this when I actually have a data porting need, though; hence its off-and-on progress.

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  22. #22
    SitePoint Addict pachanga's Avatar
    Join Date
    Mar 2004
    Location
    Russia, Penza
    Posts
    265
    Quote Originally Posted by Selkirk
    The biggest problems I usually have are updating the database tables. I'd like to have schema diff tools to do this. For example, to compare a schema file in SQL format with another SQL file and generate the SQL that will upgrade from one version to another. We had one at my last job, but I don't know of any freely available tools like this.
    What was this tool exactly? Any links to a demo/trial version of it?

