Hello all,
I have a question about how to best approach a performance issue I am having. I inherited a database schema that I need to work with. The tables look like this:
tablename_1, MyISAM (1.4 million rows)
tablename_2, MyISAM (1.4 million rows)
tablename_3, MyISAM (1.4 million rows)
…and so on. These tables are dynamically built by an application, and the _1 piece of the table name (for reference, let's call it the identifier) is incrementing. The data is not changed once stored. My task is to write an application that queries the different tables based on the identifier. So in my query I will receive the value to use for the identifier (e.g. 1, 2, etc.) and will need to query tablename_1 or tablename_2 accordingly.
I created a procedure with the only solution that came to mind: CONCAT the pieces of the SQL query, then PREPARE and EXECUTE it. This is proving VERY costly in terms of performance. The application I am writing needs to read data from these tables at a minimum of 800 transactions per second.
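For reference, here is a simplified sketch of what the procedure does (the procedure, column, and parameter names are placeholders, not the real ones):

```sql
-- Simplified sketch of the dynamic-SQL procedure.
-- get_record, p_identifier, p_key, and the id column are hypothetical names.
DELIMITER $$
CREATE PROCEDURE get_record(IN p_identifier INT, IN p_key INT)
BEGIN
    -- Build the table name dynamically, then prepare and execute the statement
    SET @sql = CONCAT('SELECT * FROM tablename_', p_identifier,
                      ' WHERE id = ?');
    PREPARE stmt FROM @sql;
    SET @k = p_key;
    EXECUTE stmt USING @k;
    DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
```

So every single lookup pays for building, preparing, and deallocating a statement, which is where I suspect the slowdown comes from.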
Currently, when I hardcode tablename_1 in my SQL query and execute it, I get around 1500 queries per second. When I use the CONCAT method, throughput drops drastically. Is there any way to overcome this obstacle? If the DB schema needs to be changed (which is under consideration), what would be a good way to store around 20 million records? Keep in mind, it would need to be queried at a minimum of 800 lookups per second.
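One schema change I have been considering, assuming the records really can be consolidated, is collapsing the per-identifier tables into a single table where the identifier becomes a column. A rough sketch (column names hypothetical):

```sql
-- Hypothetical consolidated table: the table-name suffix becomes a column.
-- Since the data is never updated after insert, partitioning by identifier
-- is one option to keep lookups fast.
CREATE TABLE tablename (
    identifier INT NOT NULL,  -- was the _1, _2, ... suffix
    id         INT NOT NULL,  -- placeholder for the real lookup key
    -- ...remaining columns from the original tables...
    PRIMARY KEY (identifier, id)
) ENGINE = MyISAM
PARTITION BY KEY (identifier) PARTITIONS 16;
```

That would let me use a single hardcoded statement (`SELECT ... FROM tablename WHERE identifier = ? AND id = ?`) and avoid dynamic SQL entirely, but I would appreciate input on whether 20 million rows in one table can sustain the required rate.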
Thanks for your time & all help! Please let me know if any other details are needed. I was trying to keep the post short and to the point.