Page 2 of 3
Results 26 to 50 of 63
  1. #26
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Mentioned
    2 Post(s)
    Tagged
    0 Thread(s)
    Hi...

    Quote Originally Posted by firepages
    design a sports car then implement it on a farm , conversely , a tractor on the raceway and you see where that logic can breakdown
    I meant at a higher level.

That was included in "good design". In your example it would mean a tractor designer taking a look at sports car design in order to build a better tractor. There is nothing wrong with studying designs in other systems; it is actually a very good idea. Part of good design is that it gives you the possibility of trading issues against each other. I hardly think anyone on this forum is going to implement J2EE in a single PHP script. Part of the skill in design is choosing the right tool, and that skill is no less apparent in Java designs even though the resulting setup is different.

Unfortunately, buried in the performance comparisons in this thread is a real argument. What OO gives you is enough flexibility to not repeat yourself. This means that change is easier and overall development time is reduced. Development time is nearly always worse while you are cutting the first few classes, and performance for small sites will be worse as well. As the site grows, or more sites are created, you get big wins.

But you lot all know this; the point is that you are balancing and trading some performance against the more expensive developer time as well.

Worst case: your team uses a Java-like PHP library with lots of stuff you don't need (say you just happened to be familiar with the one you used with Java) and an XML (or Smarty) template system for a six month project. Say it runs five times slower, but you saved two months doing it this way and you expect to save another maintenance month while the system settles down to the final version. The cost of those three months is about $20,000! For that cost you can load balance three servers and throw in the licenses for the Zend performance suite.

This is an extreme case. I have even massively exaggerated the factors here in favour of under-design against over-design, and this doesn't include later maintenance work, say changing to an external authorisation server, which benefits from the OO investment as well. I just cannot get the "gut the design for performance" argument to add up.

    For anyone looking at web software design I would say, please, please have a look at the Java libraries and try out some of the ideas. It really is a fantastic resource.

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  2. #27
    Non-Member
    Join Date
    Jan 2003
    Posts
    5,748
    For anyone looking at web software design I would say, please, please have a look at the Java libraries and try out some of the ideas. It really is a fantastic resource.
Too true. I'd advocate myself that it's fair enough for a project to run over slightly if it means the actual development is going to be better for it.

Like, what is another 2-4 weeks? Nothing, if you get better project development at the end of the day in regards to design and how easy it'd be to expand later.

  3. #28
    SitePoint Zealot ZangBunny's Avatar
    Join Date
    Jul 2003
    Location
    Mainz, Germany
    Posts
    119
    I use PHP for professional development and I agree with you on the benefits of a well designed framework/library when it comes to cutting down development time.

Two weeks ago I joined my current project, which is supposed to go live next Thursday. The project architect (who'd never heard of Martin Fowler or the Gang of Four before I mentioned them) had decided to make heavy use of PEAR classes and use Smarty, because one of the client's requirements was no PHP in the templates. The client is currently load testing the system on four load balanced, Zend boosted dual Xeon servers. The current setup can't handle more than a few hundred concurrent users without severe lag. Next Thursday, the system is supposed to go live on 16 of those servers and will have to handle up to 20,000 concurrent users.

    Maybe bringing in a few extra developers early on wouldn't have been such a bad idea after all.

On the other side, I'm on an open source project building (yet another) CMS. We want people to be able to run it on cheap hosted webspace, i.e. sharing a server with loads of other people, having lousy performance, a MySQL server that's not on localhost, no script acceleration whatsoever, etc.

We started out using Phrame and Eclipse and Smarty, but our first draft was almost as slow as Tiki when we ran it on SourceForge webspace. Now we're building our own framework, using AwsomeTemplateEngine and "defactoring" key parts of our system on purpose, because a "well factored" system of classes just doesn't pull its weight in what we think is a typical PHP application.

  4. #29
    SitePoint Addict
    Join Date
    Aug 2002
    Location
    Ottawa, Ontario, Canada
    Posts
    214
    Quote Originally Posted by lastcraft
    Can you explain what the Wireframes are? I really have no idea of how fusebox works or really what it is. I am just curious.
    Hi Marcus,

    Wireframes (in the context I use them) are a quick way of determining the flow of a website without actually generating code.

    I have v3.99 of this:
    http://sourceforge.net/projects/wireframetool/


    Wireframing is not exclusive to Fusebox. It is applicable to anyone who wants to determine flow before they code. In Fusebox, this helps to make decisions on how Fuseactions and Circuits can be grouped.

    Cheers,
    Keith.

  5. #30
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Hi...

Cheers for the link, but I don't have a copy of Cold Fusion. Could you post a screenshot?

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  6. #31
    Non-Member
    Join Date
    Jan 2003
    Posts
    5,748
    ... but I don't have a copy of Cold Fusion ...
Wouldn't worry too much about that. IMO it's basically an overrated markup language passing itself off as a server-side technology.

    Plus it's expensive

  7. #32
    SitePoint Addict
    Join Date
    Aug 2002
    Location
    Ottawa, Ontario, Canada
    Posts
    214
    Quote Originally Posted by lastcraft
    Hi...

Cheers for the link, but I don't have a copy of Cold Fusion. Could you post a screenshot?

    yours, Marcus
    That is really weird... to be honest I haven't been to the SF site in a VERY long time, but I have a PHP version right here on my HDD... if you want I can email it to you, it is like 40k zipped...

    Cheers,
    Keith.

  8. #33
    SitePoint Enthusiast
    Join Date
    Aug 2003
    Location
    Watford, UK
    Posts
    62
    This is getting way OT, but I've found wireframes really, really useful. They allow us and our clients to really rapidly prototype - once it's done and signed off we both know exactly what they're going to get. Splitting the functionality from the design is very effective in getting the client to think about how their app should work.

    (the we is my company, not the royal (or zeldman) we)

    We started off producing wireframes similar to those on the IAWiki in Dia, but soon switched to using plain unstyled (but well structured) XHTML. This allows the client to see the app 'working' before we've built anything and avoids us building the wrong thing (even if it was actually The Right Thing...)

    I've just had a quick look at the Fusebox stuff and their FLiP methodology seems very similar to what we've arrived at:

    http://www.fusebox.org/index.cfm?&fu...hodology.steps

    This article gives some more coherent reasoning (than mine, not fusebox) for using HTML:

    http://www.boxesandarrows.com/archiv...nd_no_pain.php

It suggests using Dreamweaver or similar, and I guess that could make the whole thing even faster. I'm slightly wary of WYSIWYG tools myself (emacs + psgml-mode = v. fast), and there's maybe an argument that using Dreamweaver could lead to concentrating too much on the look of the app. IMO the screenshots in the above article illustrate this perfectly - I prefer the client to have something that everyone can agree is ugly - that way they can forget about the looks and think about the functionality. The wireframes should be about how it works, not how it looks. But however you mark up, I've found it a very worthwhile practice for web apps.

    We contract out graphic design - the signed off wireframes can be passed on to a designer to work on while we're busy developing the app. The wireframes, along with some delivery guidelines, make life easier for us and for the designer.

    We use CSS for layout these days, so our plain (but well structured) XHTML wireframes can be transformed at the drop of a hat when we receive the final design - another bonus in that we don't have to redo the markup. We provide a hook to a stylesheet in the wireframes so that our designers can work directly on the CSS if they want, but so far we haven't had much uptake on this - hopefully this will change soonish, making our cossetted lives even easier.

Anyway, that's enough of my rambling - wireframes are good like cheese is good!

    Cheers,

    Jon

  9. #34
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Quote Originally Posted by Selkirk
    Now that computers have become fast enough that OO business programming languages can shed the yoke of memory management, perhaps the time has come for the dominant language to shed the yoke of restrictive type systems?

    Ruby versus Smalltalk versus Objective-C versus C++ versus Java versus Python versus CLOS versus Perl5
    It's good to hear someone mention Objective C. It's the direction I'm going in now and one that I feel is a viable contender in the future for the reasons you mentioned above.

    Not to mention that I have the same last name as Brian Cox.


As far as CRC cards go, aren't they normally associated with Responsibility Driven Design? In my opinion, the CRC card isn't a necessary part of RDD, but it helps, and it is often the tool used to teach the RDD idea to students: that the various portions (objects or modules) of a program can be identified up front based on responsibility. I think this is a key step in helping reduce bloat! Many times you come across objects (or should I say classes?) that contain methods that are seldom used, if ever. As a result, you get the overhead of parsing larger objects without the justification of using them in totality. This is the reason I feel the Eclipse libraries make so much more sense for what most people are doing on their web sites. Eclipse is rather lightweight compared to something like ADOdb.

    Now I'm not putting ADOdb down. Not at all. I think John Lim is one of the strongest PHP guys out there! It's just that ADOdb is more than most people creating sites will ever need.

    On the other hand, if you do need the functionality it provides, it's a much better performing option than PEAR, Metabase, and PHPLib.

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  10. #35
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Quote Originally Posted by ZangBunny
    On the other side, I'm on an open source project building (yet another) CMS. We want people to be able to run that on cheap hosted webspace, i.e. sharing a server with loads of other people, having lousy performance, a MySQL server that's not on localhost, no script accelleration whatsoever, etc.

    We started out using Phrame and Eclipse and Smarty, but our first draft was almost as slow as Tiki when we ran it on sourceforge webspace. Now we're building our own framework, using AwsomeTemplateEngine and "defactoring" key parts of our system on purpose, because a "well factored" system of classes just doesn't pull it's weight in what we think is a typical PHP application.
This is a concern of mine too. It seems that we are far too worried about dedication to the paradigm - dedication at the cost of performance. While that may make us look cool as developers and stroke our egos, what does it do for the client?

    One thing real fast: Are you really 'defactoring' parts of the system, or just stepping away from a more pure OO implementation? What exactly do you mean by 'defactoring'?

    Also, have you read some of the stuff that John Lim has posted concerning the use of OO in PHP? Check out the link below.

    http://php.weblogs.com/discuss/msgReader$537?mode=day

The one thing that is strange to me is that the number of methods in a class doesn't seem to have an effect on performance - only the length of time needed to parse. Hmmmm????

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  11. #36
    SitePoint Zealot tezza's Avatar
    Join Date
    Aug 2003
    Location
    Australia
    Posts
    155
    Quote Originally Posted by BDKR
    This is concern of mine too. It seems that we are far too worried about dedication to the paradigm. Dedication at the cost of performance. While that may make us look cool as developers and stroke our egos, what does it do for the client?
Indeed a worthy point to keep in mind. On the other hand, some killer solutions would never have been thought of without [temporarily] shedding the yoke of performance as the imperative. I think, in truth, there is a lot of cross-pollination going on between good design and good performance.

    Sometimes good design comes at the expense of good performance, but then everything is a trade-off. It's up to you to choose where you do the trading for your circumstances.

I think a good approach is to strive (within reason) for architectural perfection, and then denormalise the code afterwards. This is common practice with database design: sometimes tables are denormalised for efficiency, but only after the ideal design first came about.

This isn't always the best approach though, particularly when the foundation of the design is based on a performance bottleneck. So it works both ways. You just have to keep the performance issues in the back of your mind when striving for that perfect architecture. This is the difference between an experienced developer and an amateur.

The other question you have to ask yourself is: "can your client afford the increased cost of developer time for badly designed code?" And what about the hidden costs of dealing with the consequences of bugs that the bad architecture is responsible for - if indeed there was a trade-off between good design and performance?

The design phase is the smallest part of the total application lifetime (if the thing is of any use at all). Maintenance is a larger slice of the total cost of ownership (TCO).

  12. #37
    SitePoint Zealot tezza's Avatar
    Join Date
    Aug 2003
    Location
    Australia
    Posts
    155
    Quote Originally Posted by ZangBunny
    The current setup can't handle more than a few hundred concurrent users without severe lag. Next thursday, the system is supposed to go live on 16 of those servers and will have to handle up to 20,000 concurrent users
What sort of caching scheme have you tried?

  13. #38
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Hi.

    Quote Originally Posted by tezza
    I think a good approach is to strive (within reason) for architectural perfection, and then denormalise the code afterwards. This is common practise with database design. Sometimes tables are denormalised for efficiency, but only after the ideal design first came about.
Darn. That was the paragraph I wanted to write. Too slow.

    "Defactoring" is an excellent term and I think we are groping towards something very interesting here (in the way of the C2 Wiki). I am going to hijack it for a second and define it as "selectively reintroducing some repetition for performance". This brings it in line with denormalisation.

The cost of this is clear. Any later work will involve making multiple copies of the same change in the code. Now this is really bad news, and so it has to be weighed really carefully against other solutions. This is also the reason such defactorings are left to the last minute, as the author certainly doesn't want to cripple the development process at such an early stage. Unfortunately, the project had better be stable or later maintainers will not be so lucky.

This is not the same as removing unnecessary design. Stripping away the "grand scheme" in favour of something simpler is refactoring, up until it starts to hurt in later development time. The point of refactoring is to cut later costs by investing time in the software design as you go. I cannot think of a sane reason to defactor except for unskilled developers down the line, or performance (any more suggestions?).

The performance type of defactoring can be accomplished with code generation. If you are writing twenty classes that are very similar, put the differences into your own format (say an XML one) and use a tool to generate the actual class code from templates. We do this with web pages, after all. You can either do this within the package or ship the package prebuilt.
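The generation idea can be sketched in a few lines. Everything here is invented for illustration (the `Gateway` class names, the table-gateway shape of the template); a real tool would read the differences from an XML file rather than an inline array, and would write the result to a file shipped with the package.

```php
<?php
// Hypothetical spec: the only things that differ between the generated classes.
$specs = array(
    array('class' => 'UserGateway',    'table' => 'users'),
    array('class' => 'ArticleGateway', 'table' => 'articles'),
);

// One template holds the repeated structure; the spec fills in the gaps.
function generateClass($spec) {
    $template =
        "class %s {\n" .
        "    function findAll() {\n" .
        "        return mysql_query('SELECT * FROM %s');\n" .
        "    }\n" .
        "}\n";
    return sprintf($template, $spec['class'], $spec['table']);
}

$code = '';
foreach ($specs as $spec) {
    $code .= generateClass($spec);
}
// $code now holds the source of every class; write it out at build time
// so the live site parses plain, "defactored" class files.
```

The duplication still exists in the generated output, but the single template is the only thing a maintainer ever edits.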

Unskilled developers, or extenders who will not have time to learn the system, need to be dealt with slightly more carefully. Either use the generation gap pattern, where the generated classes can be extended and then reintroduced into the system, or give them a base class and let them add extensions as "plug-ins". Both are tricky to get right and have downsides. You will have to write tutorials, at least.

    Please look away now...

    I haven't yet said anything about the "pure OO" argument .

PHP is not pure OO because not everything is an object. This forces you to make design choices about the smaller objects in the system. I don't consider this an advantage, but the lack of strong typing mitigates it a lot. For example, for collections (stacks, queues, lists) you can just use a hash. The cases where you want more (trees, lazy loading, caching) are less common, and you usually know about them ahead of time. If there is any doubt at all, though, I use an object.
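The hashes-as-collections point is built into the language: PHP's array functions already give you stack, queue, and map behaviour without a class in sight (a generic sketch, not from the thread):

```php
<?php
// A plain PHP array already behaves as a stack...
$stack = array();
array_push($stack, 'first');
array_push($stack, 'second');
$top = array_pop($stack);              // 'second' - LIFO

// ...and as a queue.
$queue = array('job1', 'job2');
$next = array_shift($queue);           // 'job1' - FIFO

// An associative array (hash) stands in for a map/dictionary object.
$map = array('colour' => 'red');
$has_colour = isset($map['colour']);   // true
```

Only when you need iterator state, lazy loading, or tree traversal does a dedicated collection class start to pay for itself.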

    The top level activity of PHP is a script selected by Apache. It even starts in HTML mode, so at least the top level of the app will not be OO unless you only see a framework. Either way that bit is not pure OO. What else?

    What OO that PHP has is a superset of procedural code and so it is always possible to migrate procedural code into OO code. Should you go the other way?

Easy refactoring requires encapsulation. Avoiding duplication requires polymorphism. Going back to procedural code robs you of the ability to come up with any design, never mind a well factored, high performance one. If someone suggests "unfactoring" in this way, they must obviously delight in constant challenge. Perhaps they should saw a leg off and poke one of their eyes out as well.
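The "superset" direction mentioned above is mechanical: a procedural function drops into a class unchanged, so callers can be migrated one at a time (a trivial sketch with invented names):

```php
<?php
// Procedural original.
function format_price($cents) {
    return sprintf('$%0.2f', $cents / 100);
}

// The same body migrates directly into a method; nothing rewrites.
class PriceFormatter {
    function format($cents) {
        return sprintf('$%0.2f', $cents / 100);
    }
}

$formatter = new PriceFormatter();
$same = (format_price(1999) === $formatter->format(1999));   // true
```

Going the other way, dissolving a class back into free functions, loses the encapsulation boundary that made the migration safe in the first place.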

    Is performance worth it?

From the client side, no. Suppose you double the speed of your script just with minor code changes. I am not talking about real stuff here, such as caching data or keeping file handling down, as that would involve design changes; I am just talking about code smithing. Now, you would have to be a pretty determined hot shot to achieve this across an entire project, so well done. I now request a page from a well connected country, say Japan. I have 30 to 300 msec for the DNS look-up, 2 secs to start getting the page and a 2 sec wait for the first byte to come back. I find that the page needs a stylesheet, and so another 4 secs passes before I see anything meaningful. How much did you save? Nothing. All that happened was that the core script was sent out maybe 20 msec faster whilst the styles and images were loading (you streamed the page whilst processing it because you were worried about performance, right?).

OK, what about server load? When was the last time we actually measured it across all the components in the system? It must be important, right? What percentage of the load is Apache? What part of the Apache load was PHP execution time? How much was reading the PHP file itself? Did we manage to save 10% of the load? Or did we skip that measurement and just buy a bigger server?

OO languages are a design enabler in the same way as declarative languages, functional languages and domain languages are. Dropping back to procedural coding to chase performance phantoms is pretty lame. We've moved on.

yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  14. #39
    SitePoint Zealot ZangBunny's Avatar
    Join Date
    Jul 2003
    Location
    Mainz, Germany
    Posts
    119
When I say "defactoring" I don't mean anything as drastic as dropping back to procedural programming. I mean the littler defactorings, such as "Replace Query With Temp" (especially if the query is against a database or a SOAP server), or "Replace Data Object With Array" if it is only dumb data, or simply using the result of a mysql_query() with mysql_fetch_array() instead of constructing a collection of data objects that I can walk through with an iterator.
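The first of those, "Replace Query With Temp", is just caching an expensive call in a local variable instead of re-issuing it each time. A generic sketch, with a counter standing in for the database or SOAP call (the fetchTotal() name is invented):

```php
<?php
$query_count = 0;

// Stand-in for an expensive database or SOAP query.
function fetchTotal() {
    global $query_count;
    $query_count++;
    return 42;
}

// Before: every use re-runs the query.
if (fetchTotal() > 0) {
    $before = "Total is " . fetchTotal();   // two queries so far
}

// After "Replace Query With Temp": one query, reused.
$total = fetchTotal();
if ($total > 0) {
    $after = "Total is " . $total;          // still only three queries in all
}
```

The temp variable trades a tiny bit of purity for one round trip instead of two, which is exactly the kind of small, local defactoring being described.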

In short, I use common PHP idioms where I find them more appropriate than clumsily trying to duplicate Java idioms in PHP. Considering some other PHP/OOP idiosyncrasies, I find it convenient to think of
PHP Code:
$result = mysql_query($sql);
while ($row = mysql_fetch_array($result)) {
    ...
}
as just another way of expressing what really is
PHP Code:
$my_collection =& new ObjectCollection($sql);
$my_iterator =& new ObjectCollectionIterator($my_collection);
while ($my_iterator->isValid()) {
    $my_object =& $my_iterator->getCurrent();
    $my_iterator->next();
    ...
}

And easier to understand for the average PHP programmer too, because the former is a PHP idiom, the latter a translated Java idiom.
In his documentation for the Eclipse library, Vincent suggests the following idiom:
PHP Code:
for ($it->reset(); $it->isValid(); $it->next()) {
    $object =& $it->getCurrent();
    doSomethingWith($object);
}
IMHO it's just another way of expressing the same thing, and I need to squint at it even longer than at the first two idioms to understand what is happening.

    (As usual with my posts, this one lacks a concise conclusion or even a climax at the end. I do this just so you know it's really me posting, not just some impostor. )

  15. #40
    Non-Member
    Join Date
    Jan 2003
    Posts
    5,748
    Dropping back to procedural coding to chase performance phantoms is pretty lame. We've moved on.
    Well said

    Object Oriented Programming Rocks folks...

  16. #41
    SitePoint Zealot Xia's Avatar
    Join Date
    Aug 2003
    Location
    Belgium
    Posts
    189
Does anyone know how much faster object-oriented scripts will run on PHP 5? Any benchmarks available?

  17. #42
    SitePoint Zealot tezza's Avatar
    Join Date
    Aug 2003
    Location
    Australia
    Posts
    155
    Quote Originally Posted by lastcraft
    Hi.
    PHP is not pure OO because not everything is an object. This forces you to make design choices about smaller objects in the system. I don't consider this an advantage, but the lack of strong typing mitigates against this a lot. For example for collections (stacks, queues, lists) you can just use a hash. The cases where you want more (trees, lazy loading, caching) are less common and you usually know about them ahead of time. If there is any doubt at all though, I use an object.
This is where Ruby kicks PHP's butt to kingdom come as a language <ducks flying debris>. Hey, I'm a PHP fanatic just as much as the next guy, but I have no religious convictions. Like Bruce Lee said: "be water my friend", "don't limit yourself to a particular 'way'". I still won't be giving up PHP any time soon. I code PHP at work all day, then go home and code till 4am, then go to work at 10:30 and do it all again.

    life is good.


    Quote Originally Posted by lastcraft
    OO languages are a design enabler in the same way as declaritive languages, functional languages and domain languages are. Dropping back to procedural coding to chase performance phantoms is pretty lame. We've moved on.

    yours,Marcus
    I so fully agree with this statement. The next time someone comes up with some lame argument about how they don't need OO for good PHP design, I'll point them to this msg.

  18. #43
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
Well, since I know a jab when I see one, here goes...

    Quote Originally Posted by lastcraft
    Hi.
    I haven't yet said anything about the "pure OO" argument .
    Who said this was an argument?

    Quote Originally Posted by lastcraft
    PHP is not pure OO because not everything is an object.
You're kidding, right? In other words, neither is Java! Ruby? Yes. You've misunderstood what was meant by "pure OO". As an example, most OO zealots will jump all over you for directly accessing a referenced var: "use getters and setters instead"! That's the kind of thing I was getting at when I said "pure OO".

    Quote Originally Posted by lastcraft
    Easy refactoring requires encapsulation. Avoiding duplication requires polymorphism. Going back to procedural code robs you of the ability of coming up with any design, never mind a well factored high performance one. If someone suggests "unfactoring" in this way they must obviously delight in constant challenge. Perhaps they should saw a leg off and poke one of their eyes out as well.

    Is performance worth it?
You have a one track mind on this and an obvious axe to grind. Before assuming what is meant by 'defactoring' (which I think is a terrible term), note that I asked what he meant by it.

That said, consider the case of ADOdb. A number of months back, John Lim did a comparison of ADOdb against Metabase, PHPLib, and PEAR (forgive me if I failed to mention any of the contenders). ADOdb whipped all of them soundly. In the process of this, or should I say as a result of this, Manuel Lemos (the guy responsible for Metabase) screamed, kicked, and cried foul. However, John explained it rather well.

    [blockquote]
When I first benchmarked ADODB versus native MySQL a year ago, I was horrified to find that my code was running 60% slower. So I worked hard to tune it until it got better, closer to 30% overhead, which is what it is today. Benchmarks are just another tuning technique if you have the right attitude.

    I would expect the PEAR guys to be looking at their code now, trying to see why their code has a 150% overhead. They haven't attacked me personally. Tomas V.V.Cox seems to be nicer than either of us. I really don't know what else to say because even if egos are at stake there's no reason to be so rude.

    Bye, John
    [/blockquote]

    Perhaps you should read it. 150% overhead for PEAR! I think that would matter to a client!

    http://php.weblogs.com/discuss/msgReader$1112

Now, he doesn't explain what he did to "tune" it, but the link that I posted in an earlier post gives me a good idea of some of the things he may have done.

    http://php.weblogs.com/discuss/msgReader$537?mode=day

    But the most important thing I'm getting at here is that this wasn't about going procedural. The truth of the matter is that we hadn't yet determined what was meant by 'defactoring'. John managed to gain some good results from his objects, yet they remained objects!

    But back to the "pure OO" squawking, one of the things he said in the second link I posted above was...

    [blockquote]
    3. using set/get methods seems like a bad idea for frequently used properties.
    [/blockquote]

Hardcore OO battle zealots will crawl up your chuff over this kind of thinking. Now please note, this is the kind of dedication to paradigm that I was speaking of. The fact that people will get so upset is proof that some fundamentalist toes have been stomped.
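The set/get point is easy to measure for yourself: every accessor call pays PHP function-call overhead that direct property access avoids. A micro-benchmark sketch (class and numbers are invented; absolute timings vary by machine, and microtime(true) needs PHP 5 - in the PHP 4 of this thread you would split the string form of microtime() instead):

```php
<?php
class WithAccessors {
    var $value = 0;
    function setValue($v) { $this->value = $v; }
    function getValue()   { return $this->value; }
}

$n = 100000;
$obj = new WithAccessors();

// Through the accessors: two method calls per iteration.
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $obj->setValue($obj->getValue() + 1);
}
$accessor_time = microtime(true) - $start;

// Direct property access: no call overhead.
$obj->value = 0;
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $obj->value = $obj->value + 1;
}
$direct_time = microtime(true) - $start;
```

Whether the difference matters depends entirely on how hot the property is; for a value touched a handful of times per request, the encapsulation is almost certainly worth keeping.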

Now, before you go into why not to do this, don't worry about it, as I already know and don't need to be flamed to hear it again.

    Quote Originally Posted by lastcraft
    OO languages are a design enabler in the same way as declaritive languages, functional languages and domain languages are.
    Well said and something that I agree with fully. However...

    Quote Originally Posted by lastcraft
    Dropping back to procedural coding to chase performance phantoms is pretty lame. We've moved on.
That 150% overhead that PEAR posted was no phantom. If all the objects one creates perform this badly, then I would be pissed as a client; 150% just isn't acceptable! But how closely do most people look at what they've coded into an object? You and I both know that a lot of people will design an object and never again give a second thought to its performance. And if all or a portion of the objects in a project behave this badly, then there is some obvious room for performance gains without even dropping back to procedural code.

But even still, that 30% for ADOdb was no phantom either, was it? And for many web sites and other applications, the database becomes the bottleneck well before anything else. Ask Slashdot about 9/11. Yahoo can tell you the same thing.

In other words, I am more than willing to "go procedural" to gain that 30%. Everything I can do, as a matter of fact, before going to C or just "throwing more hardware at it", which interestingly enough is something I hear a lot from dedicated OOP'ers.

Ultimately, the real question is "when" to do anything in particular. I'm about to start on an inventory tracking and control system for the warehouse of the company I'm temping for right now (yeah, I talked myself into additional work :-^). It's not likely that this will ever be a particularly heavily loaded system, so why should I worry about it too much? I'm going to use Eclipse. All objects!

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  19. #44
    SitePoint Zealot tezza's Avatar
    Join Date
    Aug 2003
    Location
    Australia
    Posts
    155
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by BDKR
    Well, Since I know a jab when I see one,
    I didn't see any jabs.

    Anyone else notice any Jabs?

    Thought that post was jabbless.

  20. #45
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by tezza
    I didn't see any jabs.

    Anyone else notice any Jabs?

    Thought that post was jabbless.
    "absorb what is useful, reject what is useless, add what is specifically your own."
    Does that sound like anybody you know? If I were you, I'd consider the position you're taking based on that picture of Bruce Lee you have there.

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  21. #46
    SitePoint Guru
    Join Date
    Nov 2002
    Posts
    841
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by BDKR
    That said, consider the case of ADOdb. John Lim a number of months back did a comparison of ADOdb against Metabase, PHPlib, and Pear (forgive me if I failed to mention any of the contenders). ADOdb whipped all of them soundly.
    Correct me if I am wrong, but as I recall in that test only the inner iteration loop was tested and parsing time and database connection time were specifically excluded. I posted benchmark results with adodb vs. PEAR a couple of weeks ago that measured the entire round trip for each package.

    In my simple test, adodb's additional speed in the inner loop seemed to be overwhelmed by the additional amount of parsing and setup time required. Of course, this might be different when using a PHP accelerator. It might also be different if you were doing more complex processing where you could amortize the overhead over more db work.

    However, by my definition of bloat at the start of this thread, adodb would seem to be the more bloated library. This really surprised me. I expected adodb to be faster than PEAR before I actually did the tests.
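    The round-trip measurement described above can be sketched with microtime(); this is a minimal harness of my own, not Selkirk's actual benchmark, and the callback is a stand-in for whichever library call you are timing:

    PHP Code:
    ```php
    <?php
    // Minimal wall-clock benchmark sketch: times the whole round trip
    // (setup + work), not just the inner iteration loop. The callable
    // is a hypothetical stand-in for the library code under test.
    function benchmark(callable $fetchAll, int $runs = 100): float
    {
        $start = microtime(true);
        for ($i = 0; $i < $runs; $i++) {
            $fetchAll();
        }
        return (microtime(true) - $start) / $runs; // mean seconds per run
    }

    // Usage: run the same workload through each candidate package and
    // compare the means, rather than comparing inner loops in isolation.
    $avg = benchmark(function () {
        $rows = array();
        for ($i = 0; $i < 1000; $i++) {
            $rows[] = array('id' => $i);
        }
        return $rows;
    });
    printf("%.6f s per run\n", $avg);
    ```

    With an accelerator installed, the setup portion shrinks, which is exactly why the two styles of benchmark can rank the libraries differently.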

  22. #47
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by Selkirk
    Correct me if I am wrong, but as I recall in that test only the inner iteration loop was tested and parsing time and database connection time were specifically excluded. I posted benchmark results with adodb vs. PEAR a couple of weeks ago that measured the entire round trip for each package.

    In my simple test, adodb's additional speed in the inner loop seemed to be overwhelmed by the additional amount of parsing and setup time required. Of course, this might be different when using a PHP accelerator. It might also be different if you were doing more complex processing where you could amortize the overhead over more db work.

    However, by my definition of bloat at the start of this thread, adodb would seem to be the more bloated library. This really surprised me. I expected adodb to be faster than PEAR before I actually did the tests.
    You are dead on the money as far as I'm concerned. If you read one of my earlier posts, I agree that ADOdb is a bit large. I am well aware of the parsing overhead and stated as much.

    Here is what I said.

    Quote Originally Posted by BDKR
    As far as talking about CRC cards, isn't that normally associated with Responsibility Driven Design? In my opinion, the CRC card isn't a required part of RDD, but it helps and is often the tool used to teach the RDD idea to students. That idea being that various portions (objects or modules) of a program can be identified up front based on responsibility. I think this is a key step in helping reduce bloat! Many times, you come across objects (or should I say classes?) that contain methods that aren't likely to be used, that are seldom used if at all. As a result, you get the overhead of parsing larger objects without the justification of using them in their totality. This is the reason that I feel the Eclipse libraries make so much more sense for what most people are doing on their web sites. It's rather lightweight compared to something like ADOdb.
    In other words, we are in agreement that ADOdb is a bloated lib. However, how much of an effect that has performance-wise depends on the use of an accelerator of some sort. Knowing that I can use an accelerator would have me choosing ADOdb over PEAR.

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  23. #48
    ********* Victim lastcraft's Avatar
    Join Date
    Apr 2003
    Location
    London
    Posts
    2,423
    Mentioned
    2 Post(s)
    Tagged
    0 Thread(s)
    Hi...

    Quote Originally Posted by BDKR
    Well, Since I know a jab when I see one, here goes...
    Oops, sorry! It wasn't meant to be, and I wasn't thinking particularly of your post. I certainly wasn't attacking you personally, but rather the argument that undesign is a good thing for performance.

    Quote Originally Posted by BDKR
    You're kidding right? In other words, neither is Java!
    OK, my argument got a little muddled here. Part of my point was that things were OO by degrees anyway and that none of us were really pushing "pure" OO. We are all realists and differ only by some small degree in how we mix our paradigms.

    Quote Originally Posted by BDKR
    As an example, most OO zealots will jump all over you for directly accessing a referenced var. "Use getters and setters instead"! That's the kind of thing I was getting at when I said "pure OO".
    OK, I am now jumping all over you. You have exposed the internals and sidestepped your interface. This is really bad news and will cause problems any time you change the implementation of the original. We would both call this defactoring. I would also call it bad coding. This has nothing to do with OO, but everything to do with anyone who will ever work with your code. There is nothing philosophical about this; I simply have a lot of grim experience working with code that does this.
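    To make the interface point concrete, here is a minimal sketch of my own (the class and names are invented for illustration): callers that go through the accessor survive a change of internal representation, while callers that had grabbed the variable directly would all need fixing:

    PHP Code:
    ```php
    <?php
    // Sketch: the internal representation changed (say, from a dollar
    // float to integer cents), but the accessor preserves the contract,
    // so no calling code has to know.
    class Account
    {
        private $cents; // earlier version: public $balance in dollars

        public function __construct(int $cents)
        {
            $this->cents = $cents;
        }

        // Every caller that used the getter is unaffected by the change.
        public function getBalance(): float
        {
            return $this->cents / 100;
        }
    }

    $account = new Account(1250);
    echo $account->getBalance(); // 12.5
    ```

    The cost of the method call is real but tiny; the cost of hunting down every direct `$account->balance` access after a representation change is not.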

    Quote Originally Posted by BDKR
    You have a one track mind on this and an obvious axe to grind. Before assuming what is meant by 'defactoring' (which I think is a terrible term), realize that I asked what he meant by it.
    That's a rather unfair comment, I think. I am passionate that others not make the mistakes that I have made over the years, that is all. Also, I deliberately chose a definition of defactoring to isolate the case I was talking about. That is what I meant by groping forwards. We are all going to benefit from this discussion as the ideas that surround it are examined.

    Quote Originally Posted by BDKR
    Perhaps you should read it. 150% overhead for PEAR! I think that would matter to a client!
    I had read it. It caused me to take a cursory look at the PEAR database layer. Now, the comparison is a little unfair and has absolutely nothing to do with procedural code or anything else. PEAR performs particularly badly with MySQL because of its habit of recreating sequences seen in other systems such as PostgreSQL, et al. You end up with double queries, etc., if I have understood it right. This is a design issue (hopefully addressed by now).

    Would a 150% DB overhead matter? No. Clients usually want to know the throughput versus the cost. That is a question of scalability. If I were to tell a client that the translation layer added a 150% overhead talking to the DB, but the system as a whole was comfortably within specs and budget, they wouldn't bat an eyelid. If the PEAR libraries save money in other ways compared with ADOdb, then likely I would go with PEAR. Again, developer time is far more precious. That is not to say I would ignore performance.

    Quote Originally Posted by BDKR
    But back to the "pure OO" squawking, one of the things he said in the second link I posted above was...

    [blockquote]
    3. using set/get methods seems like a bad idea for frequently used properties.
    [/blockquote]
    I wasn't aware that I "squawked", but I'll take your word for it. I disagree with a lot of that article.

    Such comments show a poor understanding of the software development process as a whole if he is seriously suggesting inspecting the code and replacing accessor access with direct access at every guessed choke point. Even if profiling showed a very few points where this occurred, I would still look at alternatives first. Also, that code would be facaded off and forked into its own CVS branch cul-de-sac. If the code changes, better to reoptimise the new version than waste development time working with the mangled one.

    Quote Originally Posted by BDKR
    Now please note, this is the kind of dedication to paradigm that I was speaking of.
    This has nothing to do with dedication to OO, but everything about keeping things flexible. If you really want a fight just ask me about Prolog !

    Quote Originally Posted by BDKR
    Now before you go into why not to do this, don't worry about it as I already know it and don't need to be flamed to hear it again.
    I'll take your word for it, but I am going to clarify things with an example below.

    Quote Originally Posted by BDKR
    150% just isn't acceptable!
    On the contrary, 150% may be perfectly acceptable. Unlikely in something as critical as database access, I grant you, but still possible. For example, when using an internal SOAP server the network latency will swamp the DB access. If I am going to spend time optimising this system (when it's a problem, and not before), I'll start with the network infrastructure. A fifty-dollar network card may be enough if that's what my measurements tell me. If that is sufficient, then I will move on to the next thing that gives me the most business value. If not, I'll tackle the next bottleneck. If that means swapping out the DB layer, so be it. If that happens, you can bet I'll be glad that I didn't access any variables (such as database connections) within the persistent objects.

    Is 150% acceptable for somebody writing a library such as PEAR or ADOdb? Clearly the release version of the library should be optimised. He gained 23% from all of this (very little of it, I suspect, from PHP code changes). OK, pat on the head, but not a major enough breakthrough to cause me to dash out and rewrite my code. More interesting is the option of the C module that he offers. That was a design change (a compiled plug-in), not a code-smithing operation, and consequently far more effective.

    Quote Originally Posted by BDKR
    In other words, I am more than willing to "go procedural" to gain that 30%. Everything I can do, as a matter of fact, before going to C or just "throwing more hardware at it", which interestingly enough is something I hear a lot from dedicated OOP'ers.
    I have to say that we cannot proceed unless we clear away this misunderstanding. I doubt that anyone has ever said "throw more hardware at it" for its own sake (pointy-haired bosses excepted), and certainly not me. The point is that you have to step back and look at the whole system. Fiddling with PHP calls is damaging to the system, and you certainly won't get 30% from that approach.

    If your colleague comes up to you and says that you need to handle a ten-times increase in traffic, are you really going to start by hand-optimising every line of PHP code? Hardly. You are going to look at where the bottlenecks are and change that part of the design. Once you have done that and achieved the ten-fold increase, would you go back and hand-optimise the PHP code? Hardly.

    Quote Originally Posted by BDKR
    Ultimately, the real question is "When" to do anything in particular.
    Right.

    Now, what worries me in all of this is that people reading may think I am saying to ignore optimisations. I am most certainly not. There are big gains to be had from measuring a system and then choosing parts to be optimised. The big gains, though, are usually structural: caching and lazy loading mainly, though precalculating values and compiling can also have dramatic effects. By dramatic I mean orders of magnitude.
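    As a small illustration of the lazy loading idea mentioned above (the class and callable here are my own invention, not from any particular library), the expensive work happens at most once, and only if the value is actually requested:

    PHP Code:
    ```php
    <?php
    // Lazy loading + memoisation sketch: the loader (e.g. a DB query or
    // file parse) runs on first access only; later calls hit the cache.
    class Report
    {
        private $data = null;
        private $loader;

        public function __construct(callable $loader)
        {
            $this->loader = $loader;
        }

        public function getData(): array
        {
            if ($this->data === null) {                  // first access only
                $this->data = call_user_func($this->loader);
            }
            return $this->data;                          // cached thereafter
        }
    }

    // Count how many times the "expensive" load actually runs.
    $calls = 0;
    $report = new Report(function () use (&$calls) {
        $calls++;
        return array('rows' => 42);
    });
    $report->getData();
    $report->getData();
    echo $calls; // 1
    ```

    The structural win is that no caller has to change when caching is introduced; it lives behind the one accessor.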

    Now we have had this discussion before...
    http://www.devnetwork.net/forums/vie...er=asc&start=0
    ...so this time I'll choose a newer example (from last month).

    We have a database system that handles full-text searches (confidentiality prevents me from naming it). We wanted to search for a particular form of expression, in particular lowercase-only queries of multiple words. The query language of the database engine allows various regular-expression matching tricks, which we were using with test versions of our database. The test database is small (8 words) because we want the unit tests fast. We developed the system without any performance tests, in the full knowledge that once the design was right we could tackle this later.

    That time is now. A single run eats 100% of a two processor box for 30+ seconds. Out come the query planners, benchmarking tools and the PHP time() function.

    Our first act was to batch up the queries (several thousand of them). That took the load down, but actually made the time worse. Still, a factor of two overall for not much work. We had to change only one class interface, although I have to say that a certain amount of luck was involved here: we were already batching queries up at a higher level.

    The next one was more crippling. Regexes were just too slow (even in the specialist engine), as they were defeating the index. We found that case-insensitive searches were fine, so we created an extra copy of the data with case encoded as special characters. This added some hours to the offline processing, as we were effectively precalculating the case matching. It also added a few gig of data to the server requirements. However, it did the trick. That query now takes a couple of seconds, a forty-five-fold speedup. Our development cost was an extra Perl script. No changes were needed to the main application at all, as it was all encapsulated behind one class. Total cost: two days. Total gain: 9000%. Amount of code tweaking: zero.
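    The case-encoding trick above can be sketched as a pair of string transforms; the marker character and function names here are my own invention (the real system's encoding isn't described), but the idea is the same: uppercase letters become a marker plus the lowercase letter, so a lowercase-only index can still answer case-sensitive queries exactly:

    PHP Code:
    ```php
    <?php
    // Sketch of precomputing case: '^' marks "this letter was uppercase".
    // The encoded text contains only lowercase letters and markers, so it
    // works with a case-insensitive (or lowercase-only) index.
    function encodeCase(string $text): string
    {
        return preg_replace_callback('/[A-Z]/', function ($m) {
            return '^' . strtolower($m[0]);
        }, $text);
    }

    function decodeCase(string $text): string
    {
        return preg_replace_callback('/\^([a-z])/', function ($m) {
            return strtoupper($m[1]);
        }, $text);
    }

    echo encodeCase('McCoy');   // ^mc^coy
    echo decodeCase('^mc^coy'); // McCoy
    ```

    Queries get encoded the same way before hitting the index, which is how the precomputation replaces the regex work at search time.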

    Good design enables performance tuning. OO enables good design.

    Anyway, thanks for coming back at me (you get the rep). You could call the optimisation above "defactoring" too, although it was really a change of design driven by a new requirement. I have always thought of that as refactoring, but I'll go with the majority.

    There is more meat in this argument yet.

    yours, Marcus
    Marcus Baker
    Testing: SimpleTest, Cgreen, Fakemail
    Other: Phemto dependency injector
    Books: PHP in Action, 97 things

  24. #49
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Quote Originally Posted by tezza
    This is where Ruby kicks PHP's butt to kingdom come as a language
    Ruby doesn't kick PHP's butt. It's just different.

    Keeping that in mind, what do you make of this quote from the link you posted?

    Ruby is a complete, full, pure object oriented language: OOL.
    What's interesting about that statement is that they describe it as an OOL. In other words, there is a difference between an OOL (a kind of language) and OOP (a way of programming). PHP is not an OOL (it's a hybrid, like C++ and Objective-C), but it is still capable of OOP. C is not an OOL, but it is still capable of OOP.


    Here is another quote for you.

    Working in an object-oriented language (that is, one that supports inheritance, message passing, and classes) is neither a necessary nor sufficient condition for doing object-oriented programming. As we emphasized in Chapter 1, the most important aspect of OOP is a design technique driven by the determination and delegation of responsibilities. This technique has been called responsibility-driven design [Wirfs-Brock 1989b, Wirfs-Brock 1990].
    This is from "An Introduction to Object-Oriented Programming" by Timothy Budd, a professor at Oregon State University.

    Think about that statement. It should lay to rest the idea that good design can only happen by using an OO language, pure or hybrid, and in particular by using objects. OOP facilitates good design. That's it! It doesn't guarantee it, nor does it somehow guide you into it automagically.

    As an example, here is a class that some poor Joe was asking for help with on another message board.

    PHP Code:
    class dataset
    {
        var $sets = array();

        function pros()
        {
            include("dbs.php");
            $lk = mysql_connect($serv, $user, $pass) or die(mysql_error());
            mysql_select_db($dbname[0], $lk);
            $qur = "select * from pollqa order by id desc limit 0,1";
            $res = mysql_query($qur) or die("Error display \n" . mysql_error());
            while ($row = mysql_fetch_assoc($res)) {
                $this->sets[0] = $row['qs'];
                $this->sets[1] = $row['ans1'];
                $this->sets[2] = $row['ans2'];
                $this->sets[3] = $row['ans3'];
                $this->sets[4] = $row['ans4'];
                $this->sets[5] = $row['ans5'];
                $this->sets[6] = $row['ans6'];
            }
            return $this->sets;
        }
    }
    Yeah, it's a class, sure..., but is it good design?

    In short...
    1) The idea that good design requires the use of objects is simply wrong.
    2) The idea that an OO approach or design requires the use of an OO language is simply wrong.
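    For contrast, here is a sketch of my own of the same poll class with its responsibilities separated: the class no longer owns connecting, selecting a database and querying, it is simply handed a query runner and maps rows (the table and column names follow the original; the injected callable is my invention):

    PHP Code:
    ```php
    <?php
    // Responsibility-driven rework of the dataset class: connection and
    // query execution live elsewhere; this class only knows its query
    // and how to map the resulting row.
    class PollQuestion
    {
        private $runQuery;

        public function __construct(callable $runQuery)
        {
            $this->runQuery = $runQuery; // returns rows as assoc arrays
        }

        public function latest(): array
        {
            $rows = call_user_func(
                $this->runQuery,
                'select * from pollqa order by id desc limit 0,1'
            );
            $row = $rows ? $rows[0] : array();
            // Map the row in one loop instead of copying field by field.
            $keys = array('qs', 'ans1', 'ans2', 'ans3', 'ans4', 'ans5', 'ans6');
            $set = array();
            foreach ($keys as $key) {
                $set[] = isset($row[$key]) ? $row[$key] : null;
            }
            return $set;
        }
    }

    // In production the callable would wrap the real DB calls; in a test
    // it can return canned rows, which is exactly the point of the split.
    $poll = new PollQuestion(function ($sql) {
        return array(array('qs' => 'Q?', 'ans1' => 'A', 'ans2' => 'B',
                           'ans3' => 'C', 'ans4' => 'D', 'ans5' => 'E',
                           'ans6' => 'F'));
    });
    print_r($poll->latest());
    ```

    Whether this counts as "good design" is the whole argument of the thread, but at least each piece now has one responsibility.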

    Cheers,
    BDKR
    If you're not on the gas, you're off the gas!

  25. #50
    SitePoint Enthusiast BDKR's Avatar
    Join Date
    Sep 2002
    Location
    Clearwater, Florida
    Posts
    69
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    Well, since it's past my bedtime and I have all kinds of other work to do, I'm going to let this go for the most part. I will bounce off one (or is it two?) thing(s).

    Quote Originally Posted by lastcraft
    Good design enables performance tuning. OO enables good design.
    I agree that good design enables performance tuning, but is OO the only route to good design?

    As for...

    Quote Originally Posted by lastcraft
    I have always thought of that as refactoring, but I'll go with the majority.
    I like the term refactoring myself.

    Thank you and good night,
    BDKR
    If you're not on the gas, you're off the gas!

