Updating visitor hit count takes too long

Greetings,

I have a MySQL query that is taking 2 full seconds to update the hit count:

UPDATE items SET visitorcount = visitorcount + 1 WHERE id = 123 LIMIT 1

The “id” column is the primary key, the table is InnoDB, and there are only 50,000 rows in this table, which isn’t much… This seems far too long and I don’t understand it.

Does anyone know why this happens and what the solution is for this?

Thanks
Kind regards

Get rid of the “LIMIT 1”; you don’t really need it unless you’re expecting to have more than one row in the items table with the same id.

This still doesn’t help. I actually added “LIMIT 1” in an attempt to help speed up the query.

Can you please post the output of a SHOW CREATE TABLE query for the items table?

Here is the SHOW CREATE TABLE output for my items table:

CREATE TABLE `items` (
 `id` int(10) NOT NULL AUTO_INCREMENT,
 `membername` varchar(30) NOT NULL,
 `catid` int(10) NOT NULL,
 `offerid` int(10) NOT NULL,
 `wisher` varchar(30) NOT NULL,
 `denomination` varchar(100) NOT NULL,
 `type` varchar(100) NOT NULL,
 `title` varchar(80) NOT NULL,
 `subtitle` varchar(80) NOT NULL,
 `certagency` varchar(30) NOT NULL,
 `certnumber` varchar(50) NOT NULL,
 `grade` varchar(100) NOT NULL,
 `description` mediumtext NOT NULL,
 `own` varchar(5) NOT NULL,
 `sell` varchar(5) NOT NULL,
 `want` varchar(5) NOT NULL,
 `auction` varchar(4) NOT NULL,
 `listtype` tinyint(1) NOT NULL,
 `makeoffer` tinyint(1) NOT NULL,
 `live` tinyint(1) NOT NULL,
 `days` tinyint(2) NOT NULL,
 `quantity` int(10) NOT NULL,
 `reserveprice` decimal(10,2) NOT NULL,
 `sellprice` decimal(10,2) NOT NULL,
 `currentprice` decimal(10,2) NOT NULL,
 `buyprice` decimal(10,2) NOT NULL,
 `shipprice` decimal(8,2) NOT NULL,
 `image1` varchar(100) NOT NULL,
 `feelock` varchar(3) NOT NULL,
 `created` datetime NOT NULL,
 `bumptime` datetime NOT NULL,
 `updated` datetime NOT NULL,
 `endtime` datetime NOT NULL,
 `auctionend` datetime NOT NULL,
 `statuschange` datetime NOT NULL,
 `storelevel1` smallint(5) unsigned NOT NULL,
 `storelevel2` smallint(5) unsigned NOT NULL,
 `storelevel3` smallint(5) unsigned NOT NULL,
 `storelevel4` smallint(5) unsigned NOT NULL,
 `storelevel5` smallint(5) unsigned NOT NULL,
 `storelevel6` smallint(5) unsigned NOT NULL,
 `visitorcount` int(5) unsigned NOT NULL,
 PRIMARY KEY (`id`),
 KEY `catid` (`catid`),
 KEY `listtype` (`listtype`),
 KEY `denomination` (`denomination`,`type`),
 KEY `membername` (`membername`),
 KEY `wisher` (`wisher`),
 KEY `offerid` (`offerid`),
 KEY `auctionend` (`auctionend`),
 KEY `created` (`created`),
 KEY `bumptime` (`bumptime`),
 KEY `sellprice` (`sellprice`),
 KEY `shipprice` (`shipprice`),
 KEY `buyprice` (`buyprice`),
 KEY `reserveprice` (`reserveprice`),
 KEY `storelevel1` (`storelevel1`),
 KEY `storelevel2` (`storelevel2`),
 KEY `storelevel3` (`storelevel3`),
 KEY `storelevel4` (`storelevel4`),
 KEY `storelevel5` (`storelevel5`),
 KEY `storelevel6` (`storelevel6`),
 KEY `live` (`live`)
) ENGINE=InnoDB AUTO_INCREMENT=67489 DEFAULT CHARSET=utf8

Ouch. Are you sure you need all of those attributes in your items relation? And do you really need to have 20 of them indexed? Indexing attributes, whilst it speeds up search query response times, massively slows down insertion times. If I were you, I’d ensure that your relation is properly normalised and then remove all the indexes. Once that’s done, add back only the ones that you really need (sparingly!).
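For example, something like this (just an illustration using a few of the index names from your SHOW CREATE TABLE; work out from your actual queries which ones you can really drop):

-- Every secondary index has to be maintained on each insert,
-- so drop the ones your queries never actually use...
ALTER TABLE items
 DROP INDEX storelevel4,
 DROP INDEX storelevel5,
 DROP INDEX storelevel6;

-- ...and add one back later only if a specific query turns out to need it
ALTER TABLE items ADD INDEX storelevel4 (storelevel4);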

Also, using the LIMIT clause in your initial query won’t speed it up, since your id attribute is a primary key anyway and so there should only ever be one record matching that ID to update.


Unfortunately, I don’t know much about properly normalizing databases.

Would this mean creating a new separate table called “itemvisitorcount” with a PRIMARY “id” and a “visitorcount” column… then copying the id and visitorcount values from the “items” table into the “itemvisitorcount” table and just using that one from now on (JOINed to the items table whenever needed)?
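Something like this is what I have in mind (just a rough sketch; the table and column names aren’t final):

-- New table holding only the id and the hit counter
CREATE TABLE itemvisitorcount (
 id int(10) unsigned NOT NULL,
 visitorcount int(10) unsigned NOT NULL DEFAULT 0,
 PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Copy the existing counts over from the items table
INSERT INTO itemvisitorcount (id, visitorcount)
SELECT id, visitorcount FROM items;

-- Join it back to items whenever the count is needed
SELECT i.title, v.visitorcount
FROM items i
JOIN itemvisitorcount v ON v.id = i.id
WHERE i.id = 123;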

This looks weird; even if the table is not designed optimally, the update should still be quite fast. Have you tried running OPTIMIZE TABLE? Are you sure there are no simultaneous updates on the same table happening at the same time and blocking each other? Did you check whether the same problem happens on a different server (for example, your development server)?
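For example (generic commands; I'm assuming you can run these from phpMyAdmin or the command line):

-- Rebuild the table and refresh its index statistics
OPTIMIZE TABLE items;

-- See what else is running at the same moment (long-running queries show up here)
SHOW FULL PROCESSLIST;

-- InnoDB's own report on locks, waits and the latest detected deadlock
SHOW ENGINE INNODB STATUS;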

UPDATE:
I created a very simple separate table for keeping track of the total visitor count for each item (50,000 rows):

CREATE TABLE `itemvisitorcount` (
 `id` int(8) NOT NULL,
 `visitorcount` smallint(5) unsigned NOT NULL,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Here is the PHP code for updating the count:

mysql_query("UPDATE itemvisitorcount SET visitorcount = visitorcount + 1 WHERE id = $itemid") or die(mysql_error());

This query takes 2 seconds to execute…

It’s weird because sometimes it takes only 0.20 seconds and other times much longer. This is an InnoDB table, so it shouldn’t be locked if there are a few update queries running against it at the same time.

I run a “SELECT” query that joins this table with my main items table (on the same “id” number) a few queries before the slow one; I’m not sure if that can cause a “deadlock”? I don’t see any deadlock error message…

Does anyone know how to make this simple query run CONSISTENTLY fast?

Thanks
Kind regards

I think you may simply have an overloaded server that performs too many disk operations at the same time, or one that isn’t tuned well for performance. By default, every update to an InnoDB table results in the data being written to disk immediately, and the query will not finish until the data has been physically saved - this can take time on an I/O-busy server. I have a similar visitor count update query on one of my (quite busy) sites and it performs very fast. But I have seen cases where InnoDB updates were slow even when updating by the primary key - this was more often the case on shared hosting; keeping sessions in an InnoDB table that was updated on every page load was not a good idea there.

Try changing innodb_flush_log_at_trx_commit to 2 or 0 (the default is 1). This may mean a little less reliability in case of a hardware failure, but I think the risk is minimal. You may need to restart MySQL after this change. It should speed up InnoDB updates (and inserts) a lot.
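If you are allowed to change it, it would look something like this (set at runtime, or put the same setting under [mysqld] in my.cnf so it survives a restart):

-- Check the current value (1 means the log is flushed and synced on every commit)
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- With 2 the log is written at commit but only synced to disk about once per second
-- (needs the SUPER privilege when set at runtime)
SET GLOBAL innodb_flush_log_at_trx_commit = 2;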

If you are not allowed to change this variable on your server, then try switching the visitor count table to MyISAM. The visitor count probably isn’t critical data, and you could do without foreign key constraints on it.
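The switch itself is a one-liner (only for the counter table; the main items table can stay InnoDB):

ALTER TABLE itemvisitorcount ENGINE=MyISAM;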

I changed the table back to MyISAM, since this is a shared MySQL server and I don’t have access to those variables. It seems like the counter is working faster (for now).

I thought InnoDB was supposed to be the best for a lot of writes? That’s why I changed the table from MyISAM to InnoDB in the first place, because there were a lot of visitor count UPDATE queries. What do you recommend for the case when my site has millions of items - will MyISAM be scalable, or will it eventually need to be changed to a slower InnoDB?

Thanks
Kind regards

It depends. InnoDB has the advantage of locking only the rows you update instead of the whole table, so in theory it should be better for many concurrent updates. On the other hand, InnoDB is designed to survive system/hardware crashes; that’s why updates are flushed to disk immediately, bypassing any OS I/O buffering if possible, and this is what causes most of the overhead. A MyISAM update is more like a regular file update - it first goes to an OS file system buffer and is flushed to disk a short while later - which obviously performs much better.

I don’t really know, because I haven’t yet dealt with such huge data sets and haven’t had the chance to compare InnoDB with MyISAM in such scenarios. But I think this simple MyISAM table for the visitor count should work fine for quite some time as your data grows - the updates do lock the whole table, but they are much faster. I wouldn’t worry about it at the moment. If you get that many visitors you will first have other, more important things to do, like moving to a dedicated server :slight_smile:

I presume InnoDB works well for frequent updates when you have a dedicated server tuned specifically for the database. Shared hosting companies often host both the web server and the database on the same machine, so various processes fight for execution, and when they use up a lot of I/O, database performance suffers. This is especially true for InnoDB, which needs fast, immediate access to the disk for updates and inserts.

From experience I can say something comparable about frequent inserts, which should be quite similar to frequent updates. On a busy site I set up a table that logged every page request - data like URL, time, script duration, etc. - since that is much easier to browse than server logs. When the log table was InnoDB the performance was moderate - mostly fine, but sometimes the inserts took too much time. When I changed it to MyISAM the performance improved a lot. But I finally achieved even better performance when, instead of individual inserts, I logged page requests to a file (by simple appending) and a cron job ran every 5 minutes and moved the data from the file to the db in one big chunk. The server load statistics showed a noticeable decrease in I/O and the speed was very good.

So in the future, if you run into performance issues with the updates, you could consider this idea: keep saving visitor hits to a file (or files) in a simple format and have a cron job move them to the db in batches. Obviously, this can only be used for non-critical data.
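A rough sketch of what the batch step could look like, assuming the pages append one item id per line to a plain text file (the file path and staging table name here are just placeholders):

-- Staging table for the raw hits collected since the last run
CREATE TABLE IF NOT EXISTS itemhits_staging (
 itemid int(10) unsigned NOT NULL
) ENGINE=MyISAM;

-- Bulk-load the one-id-per-line log file written by the pages
LOAD DATA LOCAL INFILE '/path/to/visitor_hits.log'
INTO TABLE itemhits_staging (itemid);

-- Fold the collected hits into the counter table in one pass
UPDATE itemvisitorcount v
JOIN (
 SELECT itemid, COUNT(*) AS hits
 FROM itemhits_staging
 GROUP BY itemid
) h ON h.itemid = v.id
SET v.visitorcount = v.visitorcount + h.hits;

-- Clear the staging table (the cron job would also rotate or truncate the log file)
TRUNCATE TABLE itemhits_staging;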

And there’s one more thing I want to add. While MyISAM updates may perform much faster, I’m not sure that frequent updates to a MyISAM table are a 100% good solution in terms of reliability. Back when I was logging every page request immediately to a MyISAM table, one day (after a few months) a strange thing happened: the table got locked because one of the inserts froze - it just didn’t want to end, and the query sat indefinitely in the MySQL process list. I don’t know why this happened; when I called the server admin he said this happens sometimes and he didn’t know why either. Perhaps some unusual deadlock occurred when multiple inserts were fired at the same time - some unfortunate coincidence of very rare conditions? Maybe it happened while a backup process was running in the background? Because it was so rare (it happened once), I wasn’t able to reproduce it or investigate what went wrong.

The query process had to be killed and all was good again, but it left the site locked up for some time. Since then I’ve been cautious about trusting MyISAM with frequent writes, because I’m wary of such rare and inexplicable occurrences. Now I only do bulk writes to MyISAM at certain intervals and all is fine.

Thanks for the tips; putting the visitor counts into a separate file and inserting them in batches is a new idea to think about. Luckily the visitor hit count isn’t ultra-critical, so missing a few hits during a crash wouldn’t be a big deal.

Do you know if moving to “MySQLi” (MySQL Improved) queries would improve this? Everything I have is programmed with the older “MySQL” extension, and the shared MySQL server is version 5.1.56.

Also, is there any way to check whether the MySQL server is getting overloaded (any particular variables or statuses to look at in phpMyAdmin)?

Here are some of them marked in red over the course of 18 days:
Slow_queries 7,608 k
Innodb_buffer_pool_pages_dirty 491
Innodb_buffer_pool_reads 49 M
Innodb_log_waits 725
Innodb_row_lock_time_avg 115
Innodb_row_lock_time_max 51 k
Innodb_row_lock_waits 28 k
Handler_read_rnd 7,964 M
Handler_read_rnd_next 67 G
Qcache_lowmem_prunes 38 M
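
I read these off phpMyAdmin’s Status page; I believe the same counters can be pulled directly with status queries, something like:

-- Server-wide counters accumulated since the last restart
SHOW GLOBAL STATUS LIKE 'Slow_queries';
SHOW GLOBAL STATUS LIKE 'Innodb%';
SHOW GLOBAL STATUS LIKE 'Handler_read%';
SHOW GLOBAL STATUS LIKE 'Qcache%';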

This is a shared MySQL server at Dreamhost, so I’m not sure if these statistics are for everyone on the server or for all of my combined sites.

It’s a good idea to move to mysqli, but it will not improve performance.

If it’s slow, then you know it’s overloaded :slight_smile: I’m not an expert on server tuning, so I can’t really tell much from those values. Even if I could, I’d need to know the details of how the server is set up, what else is running there apart from MySQL, what other services there are, plus a whole lot of other system variables and hardware configuration. It’s best to talk to the admin of your server - although I don’t know how much he can improve things on a shared host.