And yet MediaWiki does use a static file cache 
Dude, you really need to research more before making pronouncements like this. You’re starting to sound like Kalon.
- Huge websites tend to have a lot of traffic, which means they need a server cluster or some sort of load balancer. They usually have one cluster responsible for handling requests and another in charge of the database. Such a decentralized environment cannot rely on the filesystem.
A Samba share and more than a couple of other distributed file systems would like to have a word with you…
The largest sites use a hybrid approach which relies on the fact that almost any CMS sees far more reads than writes (just compare the view to post counts on this very forum, or any forum). The static file is cached after creation and remains until the next update occurs. With many CMSs, especially news sites, the static file may not be updated more than once an hour, or even less often.
Slashdot does this - it’s why your post may not appear on another computer for up to 2 minutes, and why comment counts on the front page are never very accurate.
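To make that concrete, here's a minimal sketch of the read-through pattern in PHP - render_page() and the cache/ directory are hypothetical stand-ins for whatever your CMS actually uses:

    <?php
    // Key the cached copy to the requested URL.
    $cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

    if (is_file($cacheFile)) {
        // Cache hit: serve the static copy - no database work at all.
        readfile($cacheFile);
        exit;
    }

    // Cache miss: build the page once, store it, then serve it.
    $html = render_page($_SERVER['REQUEST_URI']); // hypothetical CMS function
    file_put_contents($cacheFile, $html, LOCK_EX);
    echo $html;

Invalidation is then just an unlink() of the file whenever the underlying content is saved - which is exactly why a post can take a couple of minutes to show up everywhere.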
MediaWiki, the engine behind Wikipedia, uses this sort of caching. Any guest (logged-out) version of a page is stored in a filesystem cache, such that neither PHP nor the database is required to be active for the page's delivery.
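If memory serves, turning this on is just a couple of switches in LocalSettings.php (check the docs for your MediaWiki version, the details shift between releases):

    # Serve cached copies of pages to anonymous visitors from disk.
    $wgUseFileCache = true;
    $wgFileCacheDirectory = "$IP/cache";
    $wgUseGzip = true; # optionally keep gzipped copies as well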
And as I mentioned at the start of this post, even if you are using load balancing, there are file-mirroring applications to address the distribution problem, and properly implemented, the PHP software doesn't need to be aware they are in place at all.
Database reads are a bottleneck. Few sites see the kind of traffic necessary to drive this point home though, so to some extent worrying about it is over-engineering. Static files still have their place - the webserver can always serve up static files faster than any PHP script can.
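As an illustration, the usual Apache trick looks something like the following (the cache/ layout is hypothetical, with pages stored under their URL path) - the static copy gets served without PHP ever waking up:

    RewriteEngine On

    # If a static copy of the requested page exists, let Apache serve it
    # directly; PHP and the database never run for this request.
    RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}.html -f
    RewriteRule ^(.*)$ cache/$1.html [L]

    # Otherwise fall through to the PHP front controller as usual.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]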
But in the database's defense - it will be faster than a filesystem at pulling disparate pieces of information together to form a page. This, and the ability to create derived information (like, say, doing ledger totals on a table column, or positional calculations using trig to show locations within X miles), is where databases rule the roost, and will continue to do so.
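For instance, the "within X miles" case is a one-liner for the database and miserable to do against flat files. A sketch using PDO against a hypothetical MySQL stores table with lat/lng columns (3959 is the Earth's radius in miles, so distance comes out in miles):

    <?php
    // $pdo is an open PDO connection; the table and column names are made up.
    $sql = "SELECT name,
                   3959 * ACOS(
                       COS(RADIANS(?)) * COS(RADIANS(lat))
                     * COS(RADIANS(lng) - RADIANS(?))
                     + SIN(RADIANS(?)) * SIN(RADIANS(lat))
                   ) AS distance
            FROM stores
            HAVING distance < ?
            ORDER BY distance";
    $stmt = $pdo->prepare($sql);
    $stmt->execute([$lat, $lng, $lat, $miles]);
    $nearby = $stmt->fetchAll();

The ledger-total case is even simpler - SELECT SUM(amount) FROM ledger - and both stay correct no matter how many rows pile up.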
Why not just get the request, pull the data from memory (RAM) and send it? We've avoided the call to the database, and we've avoided asking the OS to find the cache on the HDD - basically, we've avoided any sort of expensive call to obtain the relevant info.
Most webservers either do this with static files already, or have extensions that allow for it.
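And when you do want application data (rather than whole files) sitting in RAM, memcached is the usual answer in PHP land. A rough sketch, assuming the pecl memcached extension and a daemon on localhost - fetch_page_from_db() is a hypothetical stand-in for the expensive query:

    <?php
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $key  = 'page:' . md5($_SERVER['REQUEST_URI']);
    $html = $mc->get($key);

    if ($html === false) {
        // Not in RAM yet: do the expensive work once, then keep the
        // result in memory for five minutes.
        $html = fetch_page_from_db($_SERVER['REQUEST_URI']);
        $mc->set($key, $html, 300);
    }

    echo $html;

And since memcached talks over the network, it works fine in exactly the clustered, load-balanced setup quoted above - every web head shares the same cache.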