That depends.
133 queries per second and ~17 connections per second is not unusual for a server running several frequently visited websites.
If your server does not host several frequently visited websites, however, these numbers are quite high.
If this is an in-house testing server, I'd say these numbers are normal and everyone is doing their job well.
The server is running one heavily trafficked website. I was trying to monitor it during a particular spike, as I cannot figure out why the server keeps going down.
The load barely increases during spikes (half a point at most), but the server goes offline when it gets hit hard, even though the monitoring reports show it staying up. I thought maybe MySQL was hitting its connection limit and taking the site down entirely.
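One way to test that theory, assuming you have access to the MySQL console, is to compare the configured connection limit with what the server has actually seen; the variable and status names below are standard MySQL:

```sql
-- Configured maximum number of simultaneous client connections
SHOW VARIABLES LIKE 'max_connections';

-- Highest number of connections in use at once since the server started
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Failed connection attempts; a number that climbs during spikes
-- suggests clients are being refused
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
```

If Max_used_connections sits at or near max_connections during a spike, the site can appear completely down while the server process itself stays up.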
With a MyISAM table, if a thread that inserts data into the table hangs, no other thread can read from the table, since it is locked by the writing thread.
If this goes on long enough, MySQL hangs itself (so to speak) and takes the whole server down with it. I had this happen on my server several times (I solved it by giving the inserting PHP thread a timeout; it was run via the CLI, which by default has no time limit).
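You can usually see this happening live. A rough sketch of what to look for during a spike, using standard MySQL commands (the thread Id below is just a placeholder):

```sql
-- Lists every client thread with its state; during a MyISAM pile-up you
-- will see one long-running write and many threads stuck in the
-- "Locked" / "Waiting for table level lock" state
SHOW FULL PROCESSLIST;

-- If the writer is genuinely hung, kill it by the Id shown in the
-- processlist output (123 is a placeholder, not a real Id)
KILL 123;
```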
With InnoDB this is not a problem, since it uses row-level locking (though it has its own disadvantages).
Yes, we’ve had problems with locked tables before (from updating view counts) and have been working on an InnoDB table designed just for keeping track of content impressions.
I’ll watch my max connections and see if that is the holdup.
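For what it's worth, switching an existing table's engine is a one-liner, though it rewrites the whole table and locks it for the duration, so it's best done off-peak (the table name here is hypothetical):

```sql
-- Convert the impression-tracking table to InnoDB to get row-level
-- locking instead of MyISAM's whole-table locks
ALTER TABLE content_impressions ENGINE=InnoDB;
```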