I have some very large MyISAM tables. It appears that when I run a query that returns a large recordset, the tables I'm retrieving from are locked while the data is being sent from the server to the client. I did some digging on Google and Yahoo, and the best reference I could find was at the following link.

I would really prefer not to switch to InnoDB, but if that's the only way to avoid table-wide locking then I will likely have to. Does anybody have any suggestions? Thanks.

MyISAM is faster.

Currently, MySQL supports table-level locking for ISAM, MyISAM, and MEMORY (HEAP) tables, page-level locking for BDB tables, and row-level locking for InnoDB tables.

7.3.2. Table Locking Issues

To achieve a very high lock speed, MySQL uses table locking (instead of page, row, or column locking) for all storage engines except InnoDB and BDB.

For InnoDB and BDB tables, MySQL only uses table locking if you explicitly lock the table with LOCK TABLES. For these table types, we recommend you to not use LOCK TABLES at all, because InnoDB uses automatic row-level locking and BDB uses page-level locking to ensure transaction isolation.

For large tables, table locking is much better than row locking for most applications, but there are some pitfalls.

Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table must wait until the update is done.

Table updates normally are considered to be more important than table retrievals, so they are given higher priority. This should ensure that updates to a table are not "starved" even if there is heavy SELECT activity for the table.

Table locking causes problems in cases such as when a thread is waiting because the disk is full and free space needs to become available before the thread can proceed. In this case, all threads that want to access the problem table are also put in a waiting state until more disk space is made available.

Table locking is also disadvantageous under the following scenario:

A client issues a SELECT that takes a long time to run.

Another client then issues an UPDATE on the same table. This client waits until the SELECT is finished.

Another client issues another SELECT statement on the same table. Because UPDATE has higher priority than SELECT, this SELECT waits for the UPDATE to finish. It also waits for the first SELECT to finish!

The following list describes some ways to avoid or reduce contention caused by table locking:

Try to get the SELECT statements to run faster. You might have to create some summary tables to do this.

Start mysqld with --low-priority-updates. This gives all statements that update (modify) a table lower priority than SELECT statements. In this case, the second SELECT statement in the preceding scenario would execute before the UPDATE statement, and would not need to wait for the first SELECT to finish.

You can specify that all updates issued in a specific connection should be done with low priority by using the SET LOW_PRIORITY_UPDATES=1 statement. See Section 13.5.3, “SET Syntax”.

You can give a specific INSERT, UPDATE, or DELETE statement lower priority with the LOW_PRIORITY attribute.

You can give a specific SELECT statement higher priority with the HIGH_PRIORITY attribute. See Section 13.1.7, “SELECT Syntax”.
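To show the three per-statement priority modifiers above in one place, here is a sketch (the table and column names are invented for illustration):

```sql
-- Lower the priority of specific writes so pending SELECTs run first
UPDATE LOW_PRIORITY counters SET hits = hits + 1 WHERE page_id = 7;
INSERT LOW_PRIORITY INTO counters (page_id, hits) VALUES (8, 0);
DELETE LOW_PRIORITY FROM counters WHERE hits = 0;

-- Raise the priority of a specific read above waiting writes
SELECT HIGH_PRIORITY page_id, hits FROM counters WHERE page_id = 7;
```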

Starting from MySQL 3.23.7, you can start mysqld with a low value for the max_write_lock_count system variable to force MySQL to temporarily elevate the priority of all SELECT statements that are waiting for a table after a specific number of inserts to the table occur. This allows READ locks after a certain number of WRITE locks.
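For example, assuming a version where the variable is dynamic (the value 10 here is arbitrary and should be tuned for your workload):

```sql
-- After 10 consecutive write locks on a table, waiting read locks
-- are allowed through before further writes.
SET GLOBAL max_write_lock_count = 10;
```

The same value can be given at server startup with `--max_write_lock_count=10`.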

If you have problems with INSERT combined with SELECT, switch to using MyISAM tables, which support concurrent SELECT and INSERT statements.
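One caveat worth knowing here: MyISAM can only append concurrently while the data file has no holes in the middle (free blocks left behind by deletes). If concurrent inserts stop working after heavy deleting, defragmenting the table restores them (a sketch, with a hypothetical table name):

```sql
-- Rewrites the data file, removing holes so that new rows can again
-- be appended at the end while SELECTs run.
OPTIMIZE TABLE my_big_table;
```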

If you mix inserts and deletes on the same table, INSERT DELAYED may be of great help. See "INSERT DELAYED Syntax".
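A sketch of what that looks like (hypothetical logging table). The client gets control back immediately, and the row is queued to be written when the table is not in use by any other thread:

```sql
-- Queued insert: returns at once instead of waiting for the table lock.
-- Note: queued rows are lost if the server crashes before writing them.
INSERT DELAYED INTO access_log (page, hit_time) VALUES ('/index.html', NOW());
```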

If you have problems with mixed SELECT and DELETE statements, the LIMIT option to DELETE may help. See Section 13.1.1, “DELETE Syntax”.
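For example, instead of one huge DELETE that holds the write lock for its full duration, you can delete in chunks (hypothetical table; repeat the statement until it affects zero rows):

```sql
-- Each run holds the write lock only briefly, letting SELECTs
-- interleave between chunks.
DELETE FROM old_sessions WHERE last_seen < '2006-01-01' LIMIT 1000;
```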

Using SQL_BUFFER_RESULT with SELECT statements can help to make the duration of table locks shorter. See Section 13.1.7, “SELECT Syntax”.
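A sketch of the idea (hypothetical table): SQL_BUFFER_RESULT asks the server to copy the result set into a temporary table first, so the read lock on the base table can be released before the rows are streamed to a possibly slow client:

```sql
SELECT SQL_BUFFER_RESULT customer_id, order_total
FROM orders
WHERE order_total > 100;
```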

You could change the locking code in mysys/thr_lock.c to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.

Here are some tips about table locking in MySQL:

Concurrent users are not a problem if you don’t mix updates with selects that need to examine many rows in the same table.

You can use LOCK TABLES to speed up things (many updates within a single lock is much faster than updates without locks). Splitting table contents into separate tables may also help.
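For example, batching several updates inside one explicit lock (hypothetical table) acquires and releases the table lock once instead of once per statement, and the key cache is flushed only once at the end:

```sql
LOCK TABLES counters WRITE;
UPDATE counters SET hits = hits + 1 WHERE page_id = 1;
UPDATE counters SET hits = hits + 5 WHERE page_id = 2;
UPDATE counters SET hits = hits + 2 WHERE page_id = 3;
UNLOCK TABLES;
```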

If you encounter speed problems with table locks in MySQL, you may be able to improve performance by converting some of your tables to InnoDB or BDB tables. See Chapter 15, The InnoDB Storage Engine. See Section 14.4, “The BDB (BerkeleyDB) Storage Engine”.

You can find more information about your question here:

What exactly are you trying to do while you are doing the large selects? Inserts or updates? Please post the table structure, the SELECT statement, and the other query that is waiting for the read lock to be released.

Thanks, mysqlautobackup. Longneck - I'm trying to do both inserts and updates during the large selects. I can't post the exact SQL here, unfortunately, as this is work related and I'm not sure whether revealing db table structure is kosher and all. Anyhow, I was wondering if there was any sort of NOLOCK feature similar to what MS SQL Server provides, but from everything I've looked at, there isn't.

Thanks for your help!

OK, then you need InnoDB. Why do you not want to switch to InnoDB?

Also, you could set up slave replication. Then you can point all of your SELECT queries to the slave server, and all of your writes to the master. Since writes never occur directly on the slave, you won't run into the locking problem. FYI: nothing says the slave has to be a physically separate server. You can use the same server, but the data has to be in separate directories.

That's definitely a great idea… I'll have to see about replication. I'm trying to avoid switching because I've heard that MyISAM is faster for reads. I also don't need support for transactions. One more reason I don't want to change from MyISAM to InnoDB is going to sound foolish… but I really like being able to copy tables from one DB to another by just copying the files from one directory to another :)

You should set up a test server with actual data in two databases, one with MyISAM tables and the other with InnoDB tables. Then do some speed tests.

I also don’t need support for transactions.

Then turn them off.

One more reason I don't want to change from MyISAM to InnoDB is going to sound foolish… but I really like being able to copy tables from one DB to another by just copying the files from one directory to another :)

You can do that with tablespaces.
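Roughly: with innodb_file_per_table enabled, each InnoDB table gets its own .ibd file, and that file can be moved between servers with DISCARD/IMPORT (a sketch; it assumes an identical empty table already exists on the target server):

```sql
-- On the target server, after creating an identical empty table:
ALTER TABLE my_table DISCARD TABLESPACE;
-- ...copy my_table.ibd into the target database directory...
ALTER TABLE my_table IMPORT TABLESPACE;
```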


Thanks for the link! That’s great.

One useful tidbit that you probably already know, but it helped speed up the SELECT: suppose myTable is a large table with, say, a million records, the primary key values run from 1 to 1,000,000, and I want to get the last 20,000 records. In this case, Query 1 below is much slower than Query 2, assuming you have an index on PKColumn.

Query 1:

SELECT myColumn
FROM myTable
LIMIT 980000,20000

Query 2:

SELECT myColumn
FROM myTable
WHERE PKColumn > 980000

I guess Query 1 is much slower because it can't use the index to find where row 980,000 is, since there's no guarantee that row 980,000 has primary key value 980,000.

You are correct: in Query 1, MySQL still has to process all of the records up to the 980,000th before it can start outputting records.
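One way to see this for yourself (using the table from the example above): compare the two plans with EXPLAIN. Query 1 typically shows a full scan with a row estimate near the whole table, while Query 2 shows a range access on the primary key:

```sql
EXPLAIN SELECT myColumn FROM myTable LIMIT 980000, 20000;
EXPLAIN SELECT myColumn FROM myTable WHERE PKColumn > 980000;
```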