As my experience writing applications has grown, so has their size and complexity. I’ve never really thought much about efficiency because I’ve never had to. It’s not that I don’t try to write clean code; I’ve just never been put to the screws. The server has handled the traffic load, and the language and database engines have always done well.
It is my understanding that the number of HTTP requests is the usual bottleneck, and I imagine there is no “one size fits all” answer. But I’m wondering what problems others have had with an application that didn’t scale and required refactoring.
For example, what went first, the code or the database processing, or something else?
More importantly, how did you solve the problem?
That’s what’s throwing me too. The numbers just don’t seem to go with it.
I guess if I ever get abusive of the server resources I’ll find out!
Still, it’s interesting to see that the database - or in this case the connections to it - is a weak point. I’ve always had kind of a blind trust that it could take whatever I threw at it.
I know messy databases can slow down WordPress, so maybe if my site was more database dependent it would be more of a problem.
Still, it won’t hurt for me to revisit my database code. No reason to postpone improvements now that I’m aware.
From what I can see, “conueries” is something DreamHost made up; basically it’s (number of queries) / (25 × number of connects).
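Just to make the arithmetic concrete, here’s a tiny sketch of that formula as described above (the function name and sample numbers are my own; DreamHost’s real accounting may differ):

```python
def conueries(num_queries, num_connects):
    """'Conueries' per the formula quoted above:
    queries divided by (25 x connects)."""
    if num_connects == 0:
        return 0.0
    return num_queries / (25 * num_connects)

# e.g. 500 queries spread over 10 connections:
print(conueries(500, 10))  # 2.0
```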
The word is a combination of “connections” and “queries”, as that is what the unit is derived from!
I’m tempted to think so, yes, as M usually stands for Mega.
But it seems odd, since Mega would suggest they’re either dividing or multiplying by a million (10^6) somewhere, which they’re not …
It’s been a while since I checked my server stats. But it looks like I don’t need to worry yet.
Avg daily disk use: 313.31 MB
Avg daily bandwidth: 58 MB
Avg daily conueries: 0.116 MCn
I have absolutely no idea what unit an MCn is (unless it’s Mega Conueries), but from what I could find, even a value around 6 is no problem.
Yes, I saw that the other day. It seems DreamHost feels that connections are more resource intensive than queries. Makes sense to me, kind of like a server-side HTTP request between the server and the database engine or something.
At one time they slapped webmasters for having inefficient db code, but now they only slap the worst offenders.
I doubt if I can get 25 queries per connection - I’m certainly NOT going to add queries I don’t need. But I admit it would be worthwhile for me to revisit my code to see if I could tweak it a bit here and there.
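On the tweaking front, the biggest lever is usually reusing one connection (and batching queries) instead of connecting per query. A rough sketch, using sqlite3 as a stand-in for MySQL; the table, columns, and data here are made up purely for illustration:

```python
import os
import sqlite3
import tempfile

# Set up a tiny demo table (names and data are hypothetical).
DB = os.path.join(tempfile.gettempdir(), "conueries_demo.db")
conn = sqlite3.connect(DB)
conn.execute("DROP TABLE IF EXISTS posts")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "Hello"), (2, "Scaling"), (3, "Databases")])
conn.commit()
conn.close()

def fetch_titles_wasteful(post_ids):
    # One connection *per query*: drives the connection count way up.
    titles = []
    for pid in post_ids:
        conn = sqlite3.connect(DB)
        row = conn.execute("SELECT title FROM posts WHERE id = ?",
                           (pid,)).fetchone()
        titles.append(row[0] if row else None)
        conn.close()
    return titles

def fetch_titles_batched(post_ids):
    # One connection, one query: far fewer connects for the same data.
    conn = sqlite3.connect(DB)
    placeholders = ",".join("?" * len(post_ids))
    rows = conn.execute(
        f"SELECT id, title FROM posts WHERE id IN ({placeholders})",
        post_ids).fetchall()
    conn.close()
    found = dict(rows)
    return [found.get(pid) for pid in post_ids]

print(fetch_titles_batched([1, 3]))  # ['Hello', 'Databases']
```

Both functions return the same titles; the batched one just does it with a single connect and a single query.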
I understand what conueries (connection-queries) are. I’m just unsure what an MCn is; do you think it’s Mega Conueries?
As you said, there’s no one-size-fits-all answer. Your server logs will reveal whether or not you might be maxing out connections.
In my experience (which mostly revolves around enterprisey web applications) it’s the database that feels the strain first and that’s because the bulk of the work is done there.
Off the top of my head…
Split the database off from the application/website onto another server. (I totally underestimated just how big a performance gain this was, well in my situation anyhow…)
Move static content up onto a CDN to reduce HTTP requests.
More RAM for the server.
Use some kind of compression, e.g. gzip.
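For what it’s worth, the compression point usually means gzip at the HTTP layer (mod_deflate in Apache, `gzip on` in nginx). A quick Python sketch just to show the bandwidth win on repetitive markup; the sample HTML is invented:

```python
import gzip

# Repetitive HTML compresses very well. Real servers do this
# transparently; this only illustrates the size difference.
html = ("<p>" + "lorem ipsum " * 300 + "</p>").encode("utf-8")
compressed = gzip.compress(html)

print(f"{len(html)} bytes -> {len(compressed)} bytes")
assert gzip.decompress(compressed) == html  # lossless round-trip
```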