Inserting new data results in a deadlock

Hi,

I have a Python script that executes 3 queries in 2 transactions on a MySQL database. Today I added an index to one of the tables it uses, to resolve speed issues reported by the slow query log.
The problem is that since this change the script fails quite quickly, with a deadlock:

DatabaseError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')

This has happened before, but only a few times per month, not three times in an hour (since adding the index).
I have a few questions about this.

  • How can this index (which does make the search part of the query much faster) have a negative influence on performance?
  • Does the index add extra load or lock time when you add data to the table?

I’ll do some research on this, but maybe you can help:

  • The two queries that deadlock are in different transactions. Would putting them in one transaction (so they execute sequentially) change anything?

I’m a bit in the dark on this, so any help or suggestions are welcome.

Indexes have to be maintained, so yes, each index adds some overhead when you add data.
But if/why the new index causes the deadlock, I’ve got no idea.

Since the last deadlock I’ve removed the index again, to see if the system becomes more stable. I put the whole thing in debug mode and found the queries involved.

I suspect that because of the index (and the whole thing being much faster) the queries run ‘closer’ together and thus interfere more often.
The only annoying thing is that I don’t know how to identify exactly which table(s) cause the deadlock. Is there a way to discover this?

The three queries use four tables in total: all three use one table, and two of them share another. So between these three queries there are quite a few combinations that could result in a deadlock.
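
One thing I found while searching, which I’ll try the next time it happens: InnoDB apparently keeps a record of the most recent deadlock (the two queries, the tables and the locks involved), and you can read it with SHOW ENGINE INNODB STATUS. A rough, untested sketch of dumping it over the same Django connection:

from django.db import connection

# "SHOW ENGINE INNODB STATUS" returns a single row whose last column is a text
# blob; its "LATEST DETECTED DEADLOCK" section lists the queries, tables and
# locks from the most recent deadlock.
cursor = connection.cursor()
cursor.execute("SHOW ENGINE INNODB STATUS")
print cursor.fetchone()[-1]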

You may want to try and run your SELECT queries so they don’t lock the tables (assuming you are using MyISAM and not InnoDB).

I would not change the transaction isolation level to read uncommitted. That basically means that when a session selects from a table it will pick up all values, including those not yet committed as part of a transaction.

Instead of playing around with the transaction isolation level I would look more closely at your application, see where locks are being held on the tables, and check whether they really need to remain held for the total duration of a transaction. In other words, can the work be broken down into smaller transactions that are committed or rolled back sooner, which frees up the locks on the tables?
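
Rough sketch of what I mean (the helper and its arguments are made up): run each unit of work as its own transaction and commit it straight away, so the locks it takes are released immediately instead of being held until the end of one big transaction.

import MySQLdb

def run_in_small_transactions(conn, units):
    # "units" is a list of (sql, params) pairs; each pair is committed on its
    # own, so its locks are released as soon as that piece of work is done.
    cur = conn.cursor()
    for sql, params in units:
        try:
            cur.execute(sql, params)
            conn.commit()        # release this unit's locks right away
        except MySQLdb.DatabaseError:
            conn.rollback()      # give up whatever the failed unit had locked
            raise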

I would also check whether you should be using MyISAM or InnoDB. MyISAM locks the entire table on INSERT/UPDATE/DELETE queries, whereas InnoDB only locks the affected rows. So if your app does a lot of inserts/updates you may want to use InnoDB, but if it’s mainly selects, MyISAM should be faster.
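
If you want to check quickly which engine each table uses, something like this should do it (a sketch; it takes any DB-API cursor from your MySQL driver):

def show_table_engines(cursor):
    # information_schema lists the storage engine per table in the current
    # database: MyISAM tables take table-level locks, InnoDB takes row locks.
    cursor.execute(
        "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES "
        "WHERE TABLE_SCHEMA = DATABASE()"
    )
    for name, engine in cursor.fetchall():
        print name, engine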

I’m not an expert on this, but that is my understanding.

MyISAM doesn’t support transactions, so I’m fairly certain that MyISAM won’t be in use.

Most of the tables are InnoDB, at least the ones that I think could be involved at the moment.
All queries are handled by a Python script with Django. I use Django transactions for each query:


from django.db import connection, transaction

cursor = connection.cursor()
success = cursor.execute(sqlQuery, point)   # sqlQuery is the INSERT for the GPS point
transaction.commit_unless_managed()         # commit straight away (not in a managed transaction)

which is the last query in the function saveGpsPoint. The traceback of the ‘crashed’ script is:

File "/home/freetrack/tcpserver/gpsparser.py", line 257, in __parseGps
    gpslibs.saveGpsPoint(point)
  File "/home/freetrack/tcpserver/gpslibs.py", line 75, in saveGpsPoint
    success = cursor.execute(sqlQuery, point)
  File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 40, in execute
    return self.cursor.execute(sql, params)
  File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 114, in execute
    return self.cursor.execute(query, args)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 173, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
DatabaseError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')

The ‘point’ query in the traceback is also the last query displayed by the debug output. I’m not sure whether I should look at that query or at the next step taken by the script.

I’ve got a bunch more debug data. I added the new index again, which speeds everything up and thus results in a deadlock within 15 minutes.
The ‘point’ query is indeed involved. It is an INSERT INTO, which needs to update all indexes on the table (I guess?).

Another query, from the PHP website (not from the back-end Python script), creates a temporary table which is filled with a subquery (SELECT) that uses a view, which in turn uses an index on the points table. This query holds the lock, so the INSERT INTO can’t proceed.

I don’t suppose it’s possible to give one source of queries (the Python script) priority over the requests coming from the PHP website?

I think the best next step is to simplify the query that uses the CREATE TABLE and the view. Besides that, I’m out of ideas.
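
One other thing I’m going to add regardless of the root cause: the error message itself says ‘try restarting transaction’, so the script could catch error 1213 and retry the insert a couple of times instead of dying. Rough sketch (the wrapper name, retry count and pause are arbitrary, and the exception class may need adjusting to whatever your Django/MySQLdb combination actually raises):

import time
import MySQLdb
from django.db import connection, transaction

def save_point_with_retry(sql_query, params, attempts=3):
    for attempt in range(attempts):
        try:
            cursor = connection.cursor()
            cursor.execute(sql_query, params)
            transaction.commit_unless_managed()
            return
        except MySQLdb.DatabaseError, e:
            # 1213 = this transaction was picked as the deadlock victim and
            # rolled back, so it is safe to simply run it again.
            if e.args[0] == 1213 and attempt < attempts - 1:
                time.sleep(0.5)
            else:
                raise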