I found this query online as a way to list all processes that need to be killed. I tweaked it to use my table names and ran it, but it returned nothing, even though over 70 connections are currently preventing any more queries from being executed.
SELECT CONCAT('KILL ',id,';') AS Kill_List FROM information_schema.processlist WHERE user='recordau_admin' AND info = 'SELECT * FROM processlist';
What am I doing wrong with the above query? Perhaps a better question is why I am having this problem while executing queries from my iMac.
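For comparison, a kill-list query that filters on how long each connection has been running, rather than on an exact statement match, might look something like this (the 60-second threshold here is an arbitrary assumption, not anything from the thread):

SELECT CONCAT('KILL ', id, ';') AS Kill_List
FROM information_schema.processlist
WHERE user = 'recordau_admin'
AND time > 60;

The time column in information_schema.processlist is the number of seconds the thread has been in its current state, so this sketch lists only connections that have been busy (or idle) longer than the threshold.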
I usually write my queries more verbose than they need to be, but even rewritten that query looks wonky to me.
SELECT CONCAT('KILL ', information_schema.processlist.id, ';') AS Kill_List
FROM information_schema.processlist
WHERE information_schema.processlist.user='recordau_admin'
AND information_schema.processlist.info = 'SELECT * FROM information_schema.processlist';
I'm not seeing how the field "info" could have a value equal to a result set.
What are your criteria for determining which processes should be "killed" and which should not?
A string is a string, regardless of whether it could be evaluated as something else, and in a WHERE condition it acts like any other string.
What he's trying to do with that query as written is inspect the currently connected clients and see whether any of them is running that particular query against the database (which is what I assume info gets filled in with while a client is executing a query). Whether that's what he WANTS to do is a different question.
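To see what the info column actually contains for the current connections, a plain inspection query with no filter on info can help before deciding what to kill:

SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE user = 'recordau_admin';

Rows whose info is NULL are idle connections; rows with a non-NULL info show the exact statement text that an equality filter would have to match character for character.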
What I'm trying to do is nuke any processes that are building up, because when there are about 50 of these, I am prevented from running any more queries in my database client app.
Why do these processes usually only become a problem when I'm running queries myself? It doesn't seem to be an issue from day to day on my website front end, with hundreds of people logging in and conducting keyword searches all the time.
Might these queries be extremely inefficient and resource intensive? For example, a complex JOIN query that does full table scans of multiple large tables could take a long time to run.
Can you post an example of a query you are "running yourself"?
Leading wildcard matches can be expensive, but I'm assuming there is no easy way to get around that. Do you really need "*" (everything)? If you only need the id from the table, SELECTing only that could help, especially if it's indexed.
You may be able to save some time with some type of subquery, so that rows that have already had the keyword found aren't included?
Here is a typical result for a keyword search from the first query. I didn't think it was that resource intensive, considering that it doesn't return very many columns. But maybe I'm wrong.
Sorry. I had this topic bookmarked so I could get back to it but got sidetracked with New Year's stuff.
I don't see any time (as in how long it took to run) in the screenshot, but the null "keys" can't be helping any. Are all the "topic" fields (e.g. the one with "arterial-plaque") unique? Maybe, after being sure to save a backup, you could try
CREATE INDEX {some name} ON topics (topic)
Then, to use it, query like
SELECT * FROM topics
USE INDEX ({some name})
WHERE topic LIKE '%plaque%'
* where "{some name}" is some name that makes sense to you, e.g. topic_index
Check the times to see if it helps.
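One way to check whether the index is actually being picked up is EXPLAIN (assuming the index was created with the name topic_index as in the example above). Note that with a leading % wildcard, MySQL generally cannot use a B-tree index for the LIKE comparison, so the plan may still show a full scan:

EXPLAIN SELECT * FROM topics
USE INDEX (topic_index)
WHERE topic LIKE '%plaque%';

If the type column in the EXPLAIN output still says ALL and the key column is NULL, the index is not being used for that query.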