OOP PHP Inefficient?

Also, re: sessions, you can run memcached on a pool of servers (or the same machine as your web server), and [url=http://php.net/manual/en/session.configuration.php#ini.session.save-handler]configure PHP to use it as your session handler instead of the default file storage. Much more scalable that way. As with most things in PHP, TMTOWTDI.
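For what it's worth, here's a minimal sketch of that setup, assuming the pecl memcache extension is installed and a memcached instance is listening on the default port (the hosts and ports below are just examples):

```php
<?php
// The pecl "memcache" extension registers a "memcache" session save handler.
// These calls must run before session_start(); the same two settings can
// also go in php.ini or a vhost/.htaccess block instead.
ini_set('session.save_handler', 'memcache');

// One or more memcached servers, comma-separated.
ini_set('session.save_path', 'tcp://127.0.0.1:11211, tcp://192.168.0.2:11211');

session_start();
$_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;
```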

cholmon,
Aren’t you a little young to be playing with servers? (Your photo.)

There is a trend to remove useless comments from code, or to improve code so that it doesn’t require comments…

Chapter 4 of Clean Code covers the topic in detail.

Basically, comments don't make up for bad code. You should explain yourself in the code rather than relying on comments to do the explaining for you. Comments that explain the basic intent of the code can be okay, but it's better to let the function names do that. Explanations of intent appear to be the best use for comments.

Oftentimes comments are redundant, providing no more information than can be gleaned from reading the code itself, and oftentimes comments can be misleading. It's all too easy for code to be updated while the comments are left as they were. This leads to dangerous territory where the reader doesn't know which one should take precedence.

Well-named functions and variables go a long way toward preventing the need for comments as well.

Here is an example of what they're talking about. These two code samples demonstrate converting a prime generator that is full of seemingly needful comments into code that requires very little in the way of comments at all.

From GeneratePrimes.java to [url="http://ion.uwinnipeg.ca/~rmcfadye/1903/comments%20example/PrimeGenerator.java"]PrimeGenerator.java[/url]
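A tiny PHP flavour of the same idea (the flag constant and the employee object here are invented for illustration, loosely adapted from the book's example): the first version leans on a comment, the second lets the function name carry the intent.

```php
<?php
define('HOURLY_FLAG', 0x1); // example flag value, purely illustrative

// Before: the comment does the explaining.
// check if the employee is eligible for full benefits
if (($employee->flags & HOURLY_FLAG) && ($employee->age > 65)) {
    // ...
}

// After: the code does the explaining.
if (isEligibleForFullBenefits($employee)) {
    // ...
}

function isEligibleForFullBenefits($employee)
{
    return ($employee->flags & HOURLY_FLAG) && $employee->age > 65;
}
```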

i’m not playing, i’m doing big-boy work too! :frowning:

As with any codebase, everything depends on how it's been set up.

In theory, a well-designed OOP architecture will be much more efficient than procedural code, because it breaks things up into individual components by nature. So in many cases, when you want to do something simple with a good OOP setup, you can just include and use the few components you need, instead of including large 'functions' files full of defined functions that do a bunch of different things. OOP design can also yield other efficiencies by doing work for you in the most efficient way without you even realizing it or thinking about it, as many ORMs do.
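As a rough illustration of that point (the class and file names are hypothetical), the difference is between dragging in one big functions file on every request versus letting an autoloader pull in only the class you actually use:

```php
<?php
// Procedural style: every request loads the whole grab-bag,
// whether it needs two of those functions or two hundred.
require_once 'lib/functions.php';

// OOP style: register a simple autoloader once, and only the classes
// that actually get instantiated are ever included.
spl_autoload_register(function ($class) {
    require_once 'classes/' . $class . '.php'; // naive mapping, just for the sketch
});

$mailer = new Mailer(); // loads classes/Mailer.php and nothing else
$mailer->send('user@example.com', 'Hi', 'Hello there');
```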

In my personal and professional experience, very few codebases that I have worked with that are supposed to be “object oriented” actually are, and they do indeed wind up being much heavier and more clumsy to work with, especially as the codebase grows over time. I have been working on a project recently that I actually really do wish was just php4-style spaghetti php, because the supposed “object oriented” design really is that bad. That’s painful and tough for me to say, because I myself am a huge proponent of using object oriented methodologies - It’s just that I very rarely see them used and applied in ways that I feel they should be in the real world, which seems to be what you are seeing as well.

To help steer others away from bad ways of doing things, can you use your experience to highlight for us the ways that OOP was used badly, and how they can be improved upon?

Absolutely. Most of the problems stemmed simply from a lack of knowledge of basic programming principles and object oriented design methods.

  • Classes are used as containers to stuff functions into for organizational purposes, and those functions are called statically throughout the application.
    This creates a hard dependency on the class name itself and eliminates any possibility of extension without modifying the code of the class file itself, which is a huge no-no because it can have adverse consequences on the rest of your codebase that relies on it.
    Avoid static method calls. About the only defensible uses are singletons and (maybe) a registry; everywhere else they are extremely inflexible precisely because of that hard dependency on the class name. (There's a sketch of this after the list.)

  • New objects are instantiated within functions of other objects instead of being passed into the function or object constructor upon instantiation.
    This creates hidden dependencies, where objects you try to use outside the context of the larger application turn out to depend on several other objects and static classes you have to take along with you. Then those objects have hidden dependencies on others, and so on, and the cycle continues until you end up just copying or referencing the entire code library into your new project.

  • There are no interfaces to enforce what controller or model methods are required for the section to function properly.
    People new to the project get extremely confused and irritated when something isn’t working right and there is no code enforcement to help them figure it out - just weird and obscure errors that appear when you don’t do things right.

  • Class hierarchies more than 5 levels deep in many cases.
    This is a huge point of contention between me and some of the other developers who set up the codebase. Essentially, every time a new section is created, they make an "Abstract" controller or model for that section that everything in that section will extend from (and I put it in quotes because the class is never actually declared as abstract in PHP). That new abstract class of course extends the subsection abstract class, which in turn extends the type abstract class, which extends the base abstract class for that type, which extends the base class. And no, I'm not kidding. Code re-use doesn't mean pushing it down to a lower layer and creating more layers of abstraction.
    This makes it extremely frustrating to troubleshoot bugs and other issues when you have to trace the problem back five classes deep, and then can't change it because everything else extends from it, so you have to copy and paste(!) the whole function (sometimes without even calling the parent) and make your modification in your concrete class for it to work. It's our built-in bug generation suite. (See the second sketch after the list.)
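Here's a small sketch of what the first three points add up to (every class name below is invented for the example): swapping static calls and hidden news for constructor injection against interfaces.

```php
<?php
// The problematic style: a static call plus a hidden "new" buried in the method.
class BadOrderController
{
    public function save(array $data)
    {
        $db = new MysqlConnection();      // hidden dependency
        Logger::write('saving order');    // hard-wired to the Logger class name
        $db->insert('orders', $data);
    }
}

// The injected style: dependencies arrive through the constructor,
// typed against interfaces, so they can be swapped, extended or mocked.
interface OrderStorage
{
    public function insert($table, array $data);
}

interface Log
{
    public function write($message);
}

class OrderController
{
    private $storage;
    private $logger;

    public function __construct(OrderStorage $storage, Log $logger)
    {
        $this->storage = $storage;
        $this->logger  = $logger;
    }

    public function save(array $data)
    {
        $this->logger->write('saving order');
        $this->storage->insert('orders', $data);
    }
}
```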
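And for the inheritance-depth point, a separate sketch (names invented again): the five-deep chain versus keeping the genuinely shared behaviour in a small collaborator you compose in.

```php
<?php
// The five-deep chain: good luck tracing where anything really comes from.
class Base {}
class TypeBase extends Base {}
class AbstractType extends TypeBase {}
class AbstractSubsection extends AbstractType {}
class AbstractSection extends AbstractSubsection {}
class NewsController extends AbstractSection {}

// One alternative: pull the shared behaviour into a small collaborator
// and compose it, so a change doesn't ripple up through five parents.
class TemplateRenderer
{
    public function render($template, array $vars)
    {
        extract($vars);
        ob_start();
        include $template;
        return ob_get_clean();
    }
}

class ComposedNewsController
{
    private $renderer;

    public function __construct(TemplateRenderer $renderer)
    {
        $this->renderer = $renderer;
    }

    public function index()
    {
        return $this->renderer->render('news/index.php', array('title' => 'News'));
    }
}
```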

There is so much more, but I don’t really have time to continue. It’s almost as if the original authors of the codebase read How to Write 3v1L, Untestable Code, thought it was serious instead of tongue-in-cheek, and followed every “suggestion” perfectly. A recipe for disaster.

Public interfaces to methods etc. should be commented when they are for use by the external public. The issue people have with internal comments is that there is no enforcement keeping them up to date; they drift, and they also remove the need for concise thought when naming and designing classes, methods, variables, etc. They effectively give a safety mechanism for not doing this. It takes a lot of thought to name things and designate responsibilities well, sometimes a multiple of the thought needed for the actual coding logic that uses them or is encased by them. The comment effectively becomes a brain splurge to get to the meat quicker. Hungarian notation has some similarities to this, as it also becomes a crutch for bad naming; good naming removes its need.

On closed projects TDD removes the need for comments, as assertion messages are used instead; unlike comments, when the functionality changes they shout that it has changed, and either the code needs to be fixed or the assertion amended to reflect the change. It is semi-forgivable not to update comments, as they lack conciseness and vary in quality and appearance depending on the writer's belief about what counts as an important comment, so they end up being ignored. The more internal comments are written, the less value the really needed comments have (usually the ones for code that has to circumvent behavioural anomalies, so it looks wrong but should not be carelessly refactored). A quantity of internal comments arising from a general need can be a sign that the code is obscure and needs reworking.
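For instance (a hypothetical PHPUnit test; DiscountCalculator is an invented class), the assertion message carries the intent and fails loudly the moment the behaviour drifts, which a stale comment never does:

```php
<?php
class DiscountCalculatorTest extends PHPUnit_Framework_TestCase
{
    public function testOrdersOfOneHundredOrMoreGetTenPercentOff()
    {
        $calculator = new DiscountCalculator();

        // The message documents the rule; if the rule changes, this shouts.
        $this->assertEquals(
            90.0,
            $calculator->priceAfterDiscount(100.0),
            'Orders of 100 or more should receive a 10% discount'
        );
    }
}
```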

TDD can also become a crutch for bad naming and implementation if the writer prioritises getting the test written over the people who will come back and read the test and the code. Too much thought is directed in the wrong area, usually at getting the test over and done with in order to get to the code. The test works, but it is big and ugly, sometimes magnifying the over-complicated nature of the code and sometimes just because of too much copy and paste. I don't know why sensibilities change when it comes to tests or views, but for some people they seem to; readability goes out the window.

Bad code (code that has distinctive code smells) I do not trust without reading closely, so I definitely do not trust its comments; both rely on the ability to organise thoughts. Weakness in one implies weakness in the other. Writing comments for other people is no different than writing code for other people. The code is just the harder way.

Comments on public interfaces are okay, as IDEs display them when you call those methods (parameters, catchable exceptions and return types are very handy). It removes the need to enter the code; forcing someone to enter code and understand an implementation they do not need to is wasting their time and distracting them from their current task. In open projects this is more important, as the users probably have very little instinctive feel for the code base until they become heavy users and have got to grips with its coding conventions.
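That's the kind of comment meant here: a docblock on the public method (the class and exception names below are invented) so the IDE can surface the parameters, the exception and the return type without anyone having to open the file.

```php
<?php
class AccountRepository // invented example class
{
    /**
     * Finds an account by its primary key.
     *
     * @param  int $accountId The numeric account id.
     * @return Account The matching account object.
     * @throws AccountNotFoundException When no account has that id.
     */
    public function findById($accountId)
    {
        // ...implementation details the caller shouldn't need to read...
    }
}
```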

Writing bad code with comments is far easier than writing good code without. What good code is, and how to refactor towards it, is a far bigger set of arguments with many opposing camps. There is a large consensus, though, that code should read like English as much as possible; IDEs allow this, as the speed benefit of terseness is greatly reduced.

Not sure how good Joomla is as a coding reference point. If you feel the need for a comment, the question should be what is wrong with the code for it to be needed. It might seem academic in some ways, but after a while the ugliness in the code starts to show very quickly (before anything else is read), and as comments can be seen as collaborators of that ugliness, stopping comments helps stop the ugliness. It causes headaches for the writers of the ugliness who use comments as crutches, and a bit of discomfort can be very good for learning and rectifying behaviour. Personally I'd prefer a more direct dog-whisperer approach as well, but that is not embraced by common working culture.

The developers of Joomla may have gained a pack instinct that has caused a shared instinctive understanding of what they communally write, with little understanding of those who do not have that instinct; pack thought makes them believe comprehension of their code is easy. This is actually quite a common thing with any group of people; there would be a lot more arguments without it, as it is a binding force.

Black and white rules usually appear in teams as blanket reactions to commonly occurring problems, so that no thought is required while working; they stop many creative resolutions being formed without a good backing argument. Like any blanket rule, some may follow it blindly, incurring other problems, while others will argue for refinement of the rule so those new problems will not occur or stop occurring. That refinement can become more of a political point within a team, as it relies on everyone to think and agree. Not everyone really likes to think, or being forced to think, and will argue that is just the way we do things; sometimes the reason was possibly never really understood, or maybe has now even been forgotten. Monkey see, monkey do.

Comments definitely are not architectural plans, whatever their level of occurrence. For an OOP coding style this may be quite interesting:
http://blog.objectmentor.com/articles/2009/09/11/one-thing-extract-till-you-drop
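The gist of that article, translated into a few lines of hypothetical PHP: keep extracting until each step has a name instead of a comment.

```php
<?php
// Before: one function doing three things, each flagged with a comment.
function publishArticle($article)
{
    $article->body   = strip_tags($article->body);                          // sanitise
    $article->slug   = strtolower(str_replace(' ', '-', $article->title));  // build slug
    $article->status = 'published';                                         // change state
    return $article;
}

// After "extract till you drop": the names do the explaining.
function publishArticleExtracted($article)
{
    sanitiseBody($article);
    buildSlug($article);
    markPublished($article);
    return $article;
}

function sanitiseBody($article)  { $article->body = strip_tags($article->body); }
function buildSlug($article)     { $article->slug = strtolower(str_replace(' ', '-', $article->title)); }
function markPublished($article) { $article->status = 'published'; }
```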

Efficiency over readability is not something to be done lightly and should be a specific reaction to a specific problem that has been thoroughly explored. There is a higher level of comprehension at the moment of writing than a later moment of reading so that also has to be taken into account. If large amounts of effort are needed while writing in the now to sustain comprehension due to current eyeball level complexity, an impossible level may be required of someone reading in the future. This applies to procedural or OOP.

Though as experience grows, what was hard can become easy and what was easy can become hard in comparison, so this is definitely not a personal rule to set in stone; it requires constant personal revision.

I take too long to type stuff and a lot of it comes from reading material like Clean Code which has been mentioned earlier :slight_smile:

Ah!
Code reuse when there are lots of hierarchies. That kind of code, which goes deeper and deeper, can't even be reused for the project it was developed for, because no one dares to read and understand it.
But I know many people who like that kind of code in the name of modularity / code reuse etc.

I’ve been doing some reading on this HipHop and it’s not just a PHP to C++ compiler. It’s a Web server replacing Apache. I’m not sure how many Web hosting companies would allow it to run.

To be honest, I think if you have the type of application where the idea of using hiphop becomes a reality, you are likely to be running a dedicated server anyway.

C++ isn't web code on its own, and without writing an Apache extension it wouldn't be able to interface with a web server. There are also several speed enhancements to be gained from running the site as a separate application that listens on port 80 for traffic and responds accordingly: it can be optimised for the site in question rather than having to support many different possibilities and applications.

That’s what it sounds like, currently.

However, I’ve been involved in their group discussion and it sounds like they would appreciate some help incorporating this into the open source community.

I have volunteered to re-write some PHP extensions in C to be thread-safe, since I've written UNIX and Windows device drivers. Converting some libraries to pure functions, examining the source code of dependent function calls for the same pure-function support, using a mutex for shared resources, etc. Testing for deadlock conditions and race conditions, using stack instead of heap resources, re-entrant function calls, etc.

Since it’s contributing to the global open source community, it will be that much more rewarding.

-=- Craig

As has been mentioned before, blaming objects for the speed of an application is like blaming a car's rear view mirror for an engine misfire.

The real problem is poor code built by programmers that don't really understand objects, or for that matter databases.

I’ve just hacked my way through a Joomla site and the code I found was nothing short of hideous and as pointed out by the OP their commenting is nothing short of a joke. Some people build ‘objects’ (for lack of a better description) just to build objects. I think they attempt this just because they heard objects are cool but when you look at these so-called objects you really have to wonder.

I've done OOP in both Java and PHP for many a year now and have yet to see an object itself cause 'speed' issues (that's objects I have built). What I have seen in many of these OS projects is code that doesn't allow each part of the application to do its own job. One example I can come up with off the top of my head is an object that retrieves and sorts data from a database when in fact the database itself can do this work 100 (if not 1000) times faster. Another that comes to mind is something another 'developer' asked me to look at. He needed to determine the age of someone using their birth date and the date they died (which came from the database). He had over 900 lines of code parsing the data six ways to Sunday when in fact it could have been done using the database in 4 lines of code.
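To make that concrete (the table and column names are invented, the date function shown is MySQL's, and $pdo/$personId are assumed to already exist): the age calculation is a single expression the database can do, called from PHP.

```php
<?php
// Assumes a MySQL `people` table with birth_date and death_date columns,
// and an existing PDO connection in $pdo.
$sql = "SELECT name,
               TIMESTAMPDIFF(YEAR, birth_date, death_date) AS age_at_death
        FROM people
        WHERE id = :id";

$stmt = $pdo->prepare($sql);
$stmt->execute(array('id' => $personId));
$person = $stmt->fetch(PDO::FETCH_ASSOC);

echo $person['name'] . ' died at age ' . $person['age_at_death'];
```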

People that feel they have to ‘reinvent the wheel’ in their own objects when there is already a tried and true method of doing something cause a LOT of this slowness you see in some of these things.

Facebook's problem is something different.
For a normal site, running the plain PHP source is enough.
PHP need not be parsed for every request, but Facebook has to take care of an enormous amount of information, and it changes too frequently.

As has been mentioned before, blaming objects for the speed of an application is like blaming a car's rear view mirror for an engine misfire.

PHP's OOP approach is more like: as the racing light turns green, let's start assembling an Indy car for a drag race, then disassemble it at the end of the race. Opcode caching seems to address this problem, though.

I’ve just hacked my way through a Joomla site and the code I found was nothing short of hideous and as pointed out by the OP their commenting is nothing short of a joke. Some people build ‘objects’ (for lack of a better description) just to build objects. I think they attempt this just because they heard objects are cool but when you look at these so-called objects you really have to wonder.

I agree. I've also, like you, come across questions at this site that demonstrated processing data in PHP that should be done on the database server, even something as simple as counting rows.

The real problem is poor code built by programmers that don't really understand objects, or for that matter databases.

that's the problem…

i've been coding oop since php started supporting it and have never had speed problems! if you plan right and THEN implement it object-oriented, nothing can go wrong and the project will be extendable and easy to handle…

oop never is the problem, it’s the “BUG” 25cm in front of the screen! :rofl:

OO was always slower than purely procedural programming. However, OO has other strong points. The same principle applies to PHP, imho.

It is the use of classes and OO concepts in PHP. The same happens in C++: if you write the same program in plain C style, it runs faster than the C++ style, even if you use the same language. Object oriented programming introduces an extra layer that slows the program.

That is not true.

That is a gross oversimplification, bordering on being flat out wrong.

OOP is short for Object Oriented Programming. It generally refers to the use of [URL=http://php.net/manual/en/language.oop5.php]classes and objects.