Sep 28, 2012, 02:17 #20
Considering there are many more cases apart from pulling the first item from the data source, you need a very intelligent translator of object property access and method calls into SQL for it to remain efficient. Still, even the most intelligent translator cannot optimize everything if you don't tell it up front what data you will request later, because it's often much faster to get data in bigger chunks than to issue many smaller requests. Therefore, from a performance point of view, what cpradio suggests makes more sense: fetch all the data beforehand in one go.
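To illustrate the point, here's a minimal sketch of the difference between requesting many small chunks and fetching everything in one go. The table and column names (articles, comments) and the function names are made up for the example; this is not code from anyone's actual mapper.

```php
<?php
// N+1 pattern: one query for the articles, then one extra query per article.
function commentsPerArticleLazy(PDO $pdo): array {
    $out = [];
    foreach ($pdo->query('SELECT id FROM articles') as $article) {
        $stmt = $pdo->prepare('SELECT body FROM comments WHERE article_id = ?');
        $stmt->execute([$article['id']]);
        $out[$article['id']] = $stmt->fetchAll(PDO::FETCH_COLUMN);
    }
    return $out;
}

// Eager pattern: declare up front that comments will be needed,
// so everything comes back in one bigger query.
function commentsPerArticleEager(PDO $pdo): array {
    $out = [];
    $sql = 'SELECT a.id, c.body
              FROM articles a
              LEFT JOIN comments c ON c.article_id = a.id';
    foreach ($pdo->query($sql) as $row) {
        $out[$row['id']] ??= [];
        if ($row['body'] !== null) {
            $out[$row['id']][] = $row['body'];
        }
    }
    return $out;
}
```

Both return the same data; the second makes one round trip instead of N+1, which is what "telling the translator up front" buys you.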
Whether you're using joins or separate queries, the DB is going to be the slowest part of the system.
Have you benchmarked that? Most of the time additional queries are faster than joins, especially when sorting is involved or you take prepared statements into account. The number of queries has little effect: if you're running 10 queries that each do a PK lookup and run in a matter of milliseconds, that's better than running one query that does 10 joins, does a sort, and takes 2 seconds.
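The "several cheap PK lookups" approach looks something like the sketch below. The users table and function name are assumptions for illustration; the point is that the statement is prepared once, so each execute() is just an indexed lookup.

```php
<?php
// Sketch: one prepared statement reused for several PK lookups.
// Table/column names are hypothetical.
function usersByIds(PDO $pdo, array $ids): array {
    // The planner's work happens once, here.
    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE id = ?');
    $users = [];
    foreach ($ids as $id) {
        $stmt->execute([$id]);          // each run is a cheap PK lookup
        if ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $users[$row['id']] = $row['name'];
        }
    }
    return $users;
}
```

Whether this beats one big join depends on the data and the database, so as the post says: benchmark it.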
If you are indeed "fetching all the data beforehand" that is potentially far worse performance! You're fetching data you may never need or want.
However, I do think it sacrifices some portion of performance, which can be important for any large system. For anything small it's nice to have so much data-source abstraction and independence, but when a site gets large and datasets expand it becomes important to tweak or rewrite individual SQL queries, and at that stage this abstraction becomes a hindrance. And I don't think being able to substitute XML files (or whatever else you may think of) for a relational database is important, unless you have a specific requirement for it in a project. In a small system you can play with it, but with a large database this would be almost insane.
Funnily enough, it's those larger systems where being able to join from multiple sources has the greatest benefit and practical use, because they often do need to connect to external data sources, multiple databases, etc. In fact, it's only in small self-contained systems where you'd want to use a DB-specific data mapper, because there you can be fairly sure no other data sources will be needed. The larger the system, the more features there are, and it becomes increasingly likely that external/unknown data sources will be needed.
This is an ongoing question of how far we are willing to go with implementing good and flexible OOP at the expense of practical usefulness (performance). I don't think in PHP I would go as far as you but certainly that would be a good exercise in learning OOP techniques. Everyone chooses their own balance.
- no need to define metadata, no XML or other model/relationship configuration files, no other maintenance chores (in my case it's just a question of running one script that will automatically reverse-engineer all my database structure into proper objects)
You are too hung up on a performance issue (which doesn't even exist!). Even ignoring that, there is always a cost-to-benefit ratio: the cost to performance is nil or almost nil, whereas the benefit is huge. Consider testing. Being able to quickly and easily substitute any mapper with one that uses an XML file instead of the database immediately makes testing far easier by removing the database dependency.
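The testing point can be sketched roughly like this. The ProductMapper interface and both class names are hypothetical, not from anyone's actual framework; the idea is just that code written against the interface can't tell an XML-backed mapper from a database-backed one.

```php
<?php
// Hypothetical mapper interface: callers depend only on this.
interface ProductMapper {
    public function find(int $id): ?array;
}

// Production mapper backed by the database.
class PdoProductMapper implements ProductMapper {
    public function __construct(private PDO $pdo) {}
    public function find(int $id): ?array {
        $stmt = $this->pdo->prepare('SELECT id, name FROM products WHERE id = ?');
        $stmt->execute([$id]);
        return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
    }
}

// Substitute for tests: backed by an XML file, no database required.
class XmlProductMapper implements ProductMapper {
    private SimpleXMLElement $xml;
    public function __construct(string $file) {
        $this->xml = simplexml_load_file($file);
    }
    public function find(int $id): ?array {
        foreach ($this->xml->product as $p) {
            if ((int) $p['id'] === $id) {
                return ['id' => $id, 'name' => (string) $p->name];
            }
        }
        return null;
    }
}
```

Any code that type-hints ProductMapper works unchanged with either implementation, which is exactly the substitution being described.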
A practical example: on one site I worked on, the client had a stock control system at their physical store which already contained almost all the information we'd ever need about the products. It was a desktop application running on their network that could be reached as a web service to retrieve the data. By simply altering a couple of lines in the mapper definition, all that data could be used in real time and work with our existing shopping cart system with no changes, and with no need to store a copy of the data in the database, which would potentially have created syncing issues! Changes were then saved back into the stock control system transparently, all using the same code we have used on several entirely database-driven sites.