Results 101 to 125 of 130
  1. #101 Stormrider (SitePoint Wizard)
    I use 1TBS as well, although I have no problem with Python's way of doing things; it seems fairly neat to me and reduces the clutter by a fair amount.

  2. #102 paul_wilkins (Unobtrusively zen)
    Quote Originally Posted by aamonkey View Post
    It's not an issue of how well the code is written; it's an issue of being able to scroll through a giant file and quickly find a section that needs attention. This isn't a problem in a 200-line file, but in a 2000+ line file (regardless of whether your indenting/spacing is perfect) things are still hard to find quickly.
    The simple solution to that is to split the huge CSS file up into several smaller files that are easier to manage and maintain during development.

    When you produce the final result, you can automatically concatenate them back into one file.
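    A minimal sketch of what such a build step might look like in PHP (the paths and file names here are hypothetical, not part of any particular tool):
    Code:
    <?php
    // build_css.php - concatenate every CSS file in css/src/ into one production file.
    // Paths are made up for illustration; adjust to your own layout.
    $sources = glob('css/src/*.css');
    sort($sources); // predictable order, e.g. 01-reset.css, 02-layout.css, ...
    
    $combined = '';
    foreach ($sources as $file) {
        $combined .= '/* ' . basename($file) . " */\n" . file_get_contents($file) . "\n";
    }
    
    file_put_contents('css/site.css', $combined);
    echo 'Wrote css/site.css from ' . count($sources) . " source files\n";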

  3. #103 Stormrider (SitePoint Wizard)
    Quote Originally Posted by paul_wilkins View Post
    The simple solution to that is to split the huge CSS file up into several smaller files that are easier to manage and maintain during development.

    When you produce the final result, you can automatically concatenate them back into one file.
    Things are never finished and left alone for good... things evolve, so it will become a pain to recombine everything all the time. You can use a server-side script to concatenate CSS files of course, but that's a small bit of extra overhead - why not just do it yourself in the first place?

  4. #104 paul_wilkins (Unobtrusively zen)
    Quote Originally Posted by Stormrider View Post
    You can use a server-side script to concatenate CSS files of course, but that's a small bit of extra overhead - why not just do it yourself in the first place?
    Yes, it does take some extra effort to split them up and automate the development process, but many people have found the benefits worthwhile.

  5. #105 deathshadow (Non-Member)
    Quote Originally Posted by aamonkey View Post
    It's not an issue of how well the code is written; it's an issue of being able to scroll through a giant file and quickly find a section that needs attention. This isn't a problem in a 200-line file, but in a 2000+ line file (regardless of whether your indenting/spacing is perfect) things are still hard to find quickly.
    Funny, I always found that proper names on elements with a good naming convention, and putting elements together in the order they typically appear on the page -- combined with ^F -- eliminates that problem.

    Though it does often seem people forget to use ^F... kind of like the people who over-rely on broken 'preview pane' nonsense because alt-tab F5 is so hard to master... See the horde of "but it works in dreamweaver" from a couple years ago, which felt an awful lot like the older "but it works in IE" gripe.

  6. #106 cheesedude (SitePoint Wizard)
    Quote Originally Posted by aamonkey View Post
    times were 2.12 seconds vs 2.19 seconds
    I just don't believe including 1000 separate files takes only 7/100ths of a second longer than including one huge file.

    If the average seek time of the hard drive were only 3 milliseconds, assuming no other latency, that would take .003 x 1000 = 3 seconds to read all the files.

  7. #107 paul_wilkins (Unobtrusively zen)
    Quote Originally Posted by cheesedude View Post
    I just don't believe including 1000 separate files takes only 7/100ths of a second longer than including one huge file.
    That's why profiling is so vital, because sometimes our beliefs can be different from reality.

  8. #108 cheesedude (SitePoint Wizard)
    Quote Originally Posted by paul_wilkins View Post
    That's why profiling is so vital, because sometimes our beliefs can be different from reality.
    That would have to be one blazing fast hard drive with all the files and file data sequentially located. Or, maybe the files were cached in RAM by the OS. First access usually takes the longest.

  9. #109 deathshadow (Non-Member)
    Quote Originally Posted by cheesedude View Post
    If the average seek time of the hard drive were only 3 milliseconds, assuming no other latency, that would take .003 x 1000 = 3 seconds to read all the files.
    I was thinking those numbers were a little fishy -- seek alone to the first file is 30ms average... Unless those files were somehow stored in the write-back cache, that seems off.

    Quote Originally Posted by paul_wilkins View Post
    That's why profiling is so vital, because sometimes our beliefs can be different from reality.
    And sometimes artificial benchmarks end up with results that have nothing to do with a real-world situation. Just compare 3DMark to Far Cry 2/Crysis benchmarks on video cards: ATI wins in synthetics, nVidia wins in applications.

    ... and the numbers are automatically suspect when the result is physically impossible -- which is the issue being raised. Of course, given that the difference in time is below the typical *nix timer granularity... I'd be interested in seeing the code used to run the test.

    Though that's why I usually prefer to run a benchmark that counts how many iterations can be done over a fixed period instead of running a fixed number of iterations and seeing how long it takes. Timer granularity in things like PHP and JavaScript SUCKS.

  10. #110 samanime (SitePoint Wizard)
    When I get back to the office next week I'm going to write some tests to check this stuff out once and for all.

    What kind of conditions do you think I should test? The main test I'm going to do is check the difference between including, 1000 times, a file that has the following function:
    Code:
    function doSomething() {
      echo "Hello World";
    }
    and then an otherwise identical file that also has 100 lines of:
    Code:
    // abcdefghijklmnopqrstuvwxyz
    I'll then compare the time for each include and the average times. I'll run this a few times to check for consistency.

    Anyone have any other considerations I should throw in with this set of tests?
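    For reference, a rough sketch of that kind of comparison (the two include files here are hypothetical and contain only simple statements -- no function definitions -- so they can be included repeatedly without redeclaration errors):
    Code:
    <?php
    // Compare total include time for a test file with and without a large comment block.
    // 'without_comments.php' and 'with_comments.php' are hypothetical test files.
    $files = array('without_comments.php', 'with_comments.php');
    
    foreach ($files as $file) {
        $start = microtime(true);
        for ($i = 0; $i < 1000; $i++) {
            include $file;
        }
        $elapsed = microtime(true) - $start;
        echo $file . ': ' . $elapsed . ' s total, ' . ($elapsed / 1000) . ' s per include' . PHP_EOL;
    }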

  11. #111 paul_wilkins (Unobtrusively zen)
    Quote Originally Posted by samanime View Post
    Anyone have any other considerations I should throw in with this set of tests?
    If you want to completely rule out caching issues, you may want to perform a full power off restart of the test server between each test.

  12. #112 Lemon Juice (SitePoint Guru)
    These results are perfectly possible; you forget that by default the OS caches in RAM every file read by any application, so no disk access needs to be done to include all those files. Do a system restart and the first run will be much slower. But that doesn't matter, because on a live server all these includes will be cached in RAM almost all the time.

  13. #113 deathshadow (Non-Member)
    Quote Originally Posted by samanime View Post
    Anyone have any other considerations I should throw in with this set of tests?
    Well it does help to cover all the bases -- and to remember that while cache can help, cache isn't an endless resource like it is in an artificial testing environment. What caches great on your setup might be a train wreck on someone else's.

    Make the files all varying sizes with varying data, not the same data; don't read them sequentially in the same order you wrote them to disk; use more than one file even for the large-file-size comparison; make sure the cache is empty after writing the files; and do two passes -- first with no cache, second with cache. Test on two different filesystems (EXT3 vs. NTFS can be fun -- NTFS is faster reading sequential files on a clean disk, since to 'avoid' fragmentation other filesystems spread the files out all over the place... Uhm, yeah...)

    ... and if possible, try it on an existing server under 80%+ load where, prior to your test, it's already using 90%+ of physical memory... which often means there's so much being cached that the cache doesn't work at full capacity for the test.

    ... and if possible, test two different ranges of file sizes for a fixed period of time instead of just "read X amount and time it". Trust me, there's a difference thanks to timer granularity.

    ... and to truly be meaningful, a test should run for at LEAST five seconds -- hence the need for more, and larger, files.
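    A minimal sketch of that fixed-period approach (the include file name is hypothetical):
    Code:
    <?php
    // Count how many includes complete in a fixed window instead of timing a fixed count,
    // so coarse timer granularity matters less. 'include_me.php' is a hypothetical test file
    // containing only simple statements.
    $duration = 5.0; // seconds
    $count = 0;
    $start = microtime(true);
    
    while ((microtime(true) - $start) < $duration) {
        include 'include_me.php';
        $count++;
    }
    
    echo $count . ' includes in ' . $duration . ' seconds' . PHP_EOL;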

  14. #114 samanime (SitePoint Wizard)
    Good tips. I'm planning on running it in a VirtualBox VM, so I can reboot it repeatedly pretty quickly.

    I think I'll start out with a simpler test that I can write in 10 minutes, then when I have more time I'll build a more full-force test using the things deathshadow suggested.

  15. #115 deathshadow (Non-Member)
    Quote Originally Posted by samanime View Post
    I'm planning on running it in a VirtualBox VM, so I can reboot it repeatedly pretty quickly.
    ALWAYS a good plan. VMs really have changed the face of development in that regard, making testing SO much easier.

    Also a good way to test scalability, since you can just artificially restrict how much RAM is available to it to simulate a choked-out host...

    I kind of wish VirtualBox offered a way to throttle CPU time with timer accuracy... akin to how DOSBox does for games.

  16. #116 samanime (SitePoint Wizard)
    Yeah. We were using Virtual PC, then stumbled upon VirtualBox and love it. I can spin up a completely new development environment in minutes without spending a cent. =D

  17. #117 aamonkey (SitePoint Guru)
    Quote Originally Posted by cheesedude View Post
    I just don't believe including 1000 separate files takes only 7/100ths of a second longer than including one huge file.

    If the average seek time of the hard drive were only 3 milliseconds, assuming no other latency, that would take .003 x 1000 = 3 seconds to read all the files.
    As I said before, do your own tests and post your results. I certainly wasn't trying to say that the test I did should be taken as the way all servers/configurations handle things.

  18. #118 cheesedude (SitePoint Wizard)
    Quote Originally Posted by samanime View Post
    When I get back to the office next week I'm going to write some tests to check this stuff out once and for all.

    What kind of conditions do you think I should test? The main test I'm going to do is check the difference between including, 1000 times, a file that has the following function:
    Record the time of the first attempt you make at this. While the OS may cache the files in memory, for a shared server with a lot of accounts, this data may not remain memory resident for long.

    As a test on the shared server I am on, I wrote a simple script (months ago) which includes a file and then outputs the time it took.

    Code:
    $start = microtime(true); // true gives a float; the default string form can't be subtracted reliably
    
    include 'include_me.php';
    
    $end = microtime(true);
    
    $total = $end - $start;
    
    echo 'Took: ' . $total . ' seconds or ' . ($total * 1000) . ' milliseconds.<br/>';
    Contents of include_me.php:

    Code:
    $x = 5;
    //echo 'included';
    On a first access, with low server load (a load of 0.50 on a 2-CPU shared server), it usually takes 4-5 milliseconds. Subsequent loads can take as little as 0.10 milliseconds. However, when the shared server is under load -- something often beyond the control of even the server admin -- I've seen that include take as long as 84 milliseconds on first access. Of course, this does not happen often, as the host does not overload the server.

    My assumption is that the contents of the included file are cached in the server's RAM, which is why subsequent includes take far less time. However, usually within less than 30 minutes of the last access, the time it takes is back to the first-load time of 4 to 5 milliseconds, meaning that it is no longer cached. I do not know exactly when the included file (and probably the original file) is removed from RAM. But I do know that it will vary with server load, and that is something beyond my control. As such, I will always operate under the guideline that file access is the slowest part of rendering a website and will code accordingly.

    If you are on a dedicated server with an accelerator, file includes may not be a big concern of yours. But for the rest of us, file access is the biggest detriment to site performance. Efficiency should always be a goal in programming, especially with interpreted languages like PHP.

  19. #119 deathshadow (Non-Member)
    Quote Originally Posted by cheesedude View Post
    I do not know exactly when the included file (and probably the original file) is removed from RAM. But I do know that it will vary with server load, and that is something beyond my control.
    Exactly -- it's usually not a fixed time and more a factor of how often it's accessed (frequently accessed files are less likely to get flushed) and how small it is (small files are often LESS likely to be kept in cache... strange as that sounds)... and cache behaves like a futaba board -- as new content is called, old stuff is flushed to make room.

    ... and different software uses different caching models, so what holds true for Linux won't hold for FreeBSD... and sometimes it doesn't even hold true across distros... compare RHEL to Debian, for example, and you'll find differences in disk access times just because of different package choices and default settings.

    Not to mention that a good server maintainer will customize their settings.

  20. #120 aamonkey (SitePoint Guru)
    Quote Originally Posted by cheesedude View Post
    As such, I will always operate under the guideline that file access is the slowest part of rendering a website and will code accordingly.
    Wrong mindset. If you are thinking about the speed of your site before writing a line of code you have already failed.

  21. #121 deathshadow (Non-Member)
    Quote Originally Posted by aamonkey View Post
    Wrong mindset. If you are thinking about the speed of your site before writing a line of code you have already failed.
    ... so you're basically saying that writing code without a plan of attack and a set of easy-to-follow guidelines -- instead just slapping commands in there with no concern for how long they take -- is the way to go then?

    Sure, let's run fifteen IF statements on the same variable in a row, break twenty-line config files into twenty separate files, create functions for each and every small operation we might ever want to do... open twenty separate MySQL connections -- one for each table -- and leave them open as long as possible while we're at it...

    SOME planning and basic rules/guidelines can help you write efficient code without extra effort -- it just involves a few common-sense rules from the start: minimize disk access, store conditional results instead of calling them over and over, use efficient comparators like switch/case... That's not "already failed" -- already failed would be going in with no plan of attack, no outline of data flow, and just spaghetti coding.
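    (A minimal, hypothetical sketch of the "store conditional results" point -- the check used here is only for illustration:)
    Code:
    <?php
    // Run the potentially expensive check once, store the result, and branch on it
    // with switch/case instead of re-running the call in a chain of IFs.
    function expensiveCheck($path) {
        return filesize($path); // stands in for any call you don't want to repeat
    }
    
    $size = expensiveCheck(__FILE__); // evaluated once, reused below
    
    switch (true) {
        case $size > 1000000:
            echo "large file\n";
            break;
        case $size > 10000:
            echo "medium file\n";
            break;
        default:
            echo "small file\n";
    }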

    Lemme guess, hate flowcharts and/or dataflow prototyping?

  22. #122 Lemon Juice (SitePoint Guru)
    Quote Originally Posted by aamonkey View Post
    Wrong mindset. If you are thinking about the speed of your site before writing a line of code you have already failed.
    Let's not go to extremes; in this case I must agree with DS. While premature optimization is in many cases a waste of time, some initial planning and thinking about performance is important. There are simple things that can be done during development that would be more difficult to fix and refactor once the site is live and getting slow. Therefore, I'm always mindful:

    - to avoid sending many db queries in a loop (see the sketch after this list)
    - to set indexes on columns I search by when the table is expected to get big
    - not to make too many file accesses; I'm not paranoid, but more than 100 would be too much in my opinion
    - to release memory of large result sets
    - to at least plan my code so that adding some caching will be easy when needed
    - to optimize code for speed if it doesn't hurt the readability and flexibility of the application
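    A minimal sketch of the first point -- batching lookups into one query instead of querying inside a loop (the connection details, table, and column names below are hypothetical):
    Code:
    <?php
    // One query with IN (...) instead of one query per ID inside a loop.
    // Connection details, table, and column names are made up for illustration.
    $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
    $ids = array(3, 7, 42);
    
    // Instead of: foreach ($ids as $id) { ... SELECT ... WHERE id = $id ... }
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdo->prepare("SELECT id, name FROM products WHERE id IN ($placeholders)");
    $stmt->execute($ids);
    
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        echo $row['id'] . ': ' . $row['name'] . PHP_EOL;
    }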

    Quote Originally Posted by cheesedude
    If you are on a dedicated server with an accelerator, file includes may not be a big concern of yours.
    I'd like to get some confirmation on this one. Even with an opcode cache, the cached files are stored in memory and then, when memory runs low, on disk, so making many includes will result in similar RAM or file access to what happens when the OS does this without an accelerator. Unless an accelerator is able to combine many includes into a single one, but I think that would be pretty difficult to do. I don't know the internals of PHP accelerators, so it would be good if someone more knowledgeable explained that. As I said earlier, I tested the performance of file includes on a server with eAccelerator and it was pretty poor.

  23. #123 samanime (SitePoint Wizard)
    I think the truth is somewhere in the middle.

    Being concerned with the speed of your program from the get go is not a bad idea.

    However, at the same time, premature optimization can also be, at best, a huge time-sink.

    I write code in a very "modular" way (meaning each portion of my code is largely free-standing and just performs operations on the input given to it). With this method, I'm able to quickly figure out how fast it is (usually the pieces are so simple I can glance at them and tell you the big-O notation... if I can't, it means I should probably look them over a little closer). Doing this I can get it good as I code, then go back and make it better if I need to.

    Actually, I just realized, how did we get off on this tangent of file access times? =p

    The original topic was on whether comments greatly affect the execution time of a given file. I guess it does factor in if you're worrying about whether comments need to be reprocessed on each include or not, but even in that case, file access is a moot point as it'd be (roughly) the same anyway.

    =p

  24. #124 aamonkey (SitePoint Guru)
    Quote Originally Posted by Lemon Juice View Post
    Let's not go to extremes; in this case I must agree with DS. While premature optimization is in many cases a waste of time, some initial planning and thinking about performance is important. There are simple things that can be done during development that would be more difficult to fix and refactor once the site is live and getting slow. Therefore, I'm always mindful:

    - to avoid sending many db queries in a loop
    - to set indexes on columns I search by when the table is expected to get big
    - not to make too many file accesses; I'm not paranoid, but more than 100 would be too much in my opinion
    - to release memory of large result sets
    - to at least plan my code so that adding some caching will be easy when needed
    - to optimize code for speed if it doesn't hurt the readability and flexibility of the application
    And I would say that those are all great ideas. Let me clarify my point: if you are developing favoring speed over maintainability or readability, then that is flat-out wrong. There are many, many facets that a web developer has to juggle - the raw speed of the application is just one of them. But it should certainly rank well below producing readable, reusable, maintainable code.

    Leaving out code comments that aid in development because you are afraid of performance impact is silly. As is trying to cram your code into as few files as possible because you are worried about shaving milliseconds off your execution time. You end up with a big pile of blazing fast mess that is a nightmare to work on in the future and doesn't help your subsequent projects. It goes beyond premature optimization - it's completely starting out on the wrong foot.


    Quote Originally Posted by samanime View Post
    Being concerned with the speed of your program from the get go is not a bad idea.

    However, at the same time, premature optimization can also be, at best, a huge time-sink.
    I agree completely.

    Quote Originally Posted by samanime View Post
    I write code in a very "modular" way (meaning each portion of my code is largely free-standing and just performs operations on the input given to it). With this method, I'm able to quickly figure out how fast it is (usually the pieces are so simple I can glance at them and tell you the big-O notation... if I can't, it means I should probably look them over a little closer). Doing this I can get it good as I code, then go back and make it better if I need to.
    That's what I'm referring to. When you are not afraid of separating your code into relevant files, you end up with reusable code that becomes refined over the years, potentially saving hours/days/weeks/months of development time.

  25. #125 deathshadow (Non-Member)
    Quote Originally Posted by aamonkey View Post
    Leaving out code comments that aid in development because you are afraid of performance impact is silly.
    CORRECT, it just comes down to what you consider a pointless waste and what you consider useful...

    WASTEFUL/POINTLESS comments that don't add ANYTHING you should already have through verbose names and indentation should be avoided at all costs...

    Like a giant run of pointless characters as a divider to do the job that a de-indent and opening/closing element do just fine... or is the left side of the screen magically invisible or something?

