Database design for huge amounts of data, flat or relational?

Hi Everyone,

I have a question that I am hoping a MySQL expert might be able to help me out with.

Current Design:

Currently I have four tables that are relevant to this question; call them T, R, RR, and RRF.

For every entry in table T, there can be up to 1 million entries in table R, 4 million in table RR, and 40 million in table RRF.

Ideal Usage:
I must be able to query against primary keys (ints) in each table and fetch results in sub-second time (preferred).

I would also like to avoid degradation in insert times on any of the tables.
Question: What is the best way to handle this?


  1. Flatten the RRF table into the RR table? I could perhaps store the RRF data as XML. I would still have the problem of the rapidly growing RR table, though it grows far less rapidly than RRF. I would also lose query capability against the RRF data, which is not good.
  2. Horizontally partition the RR and RRF tables into chunks of x million. I’m not too familiar with this approach. What kind of performance hit would I take with a 1-billion-row table partitioned into chunks of 10 million?
  3. Archive the data at certain time-periods.
  4. Other?
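For what it's worth, option 2 maps onto MySQL's built-in RANGE partitioning. A minimal sketch of what that might look like for the RR table, assuming an integer primary key (the column names and partition boundaries here are made up for illustration):

```sql
-- Hypothetical RR table, partitioned into 10-million-row ranges of its
-- integer primary key. MySQL prunes to the single matching partition
-- when the WHERE clause constrains rr_id.
CREATE TABLE RR (
  rr_id   BIGINT NOT NULL,
  t_id    BIGINT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (rr_id)
)
PARTITION BY RANGE (rr_id) (
  PARTITION p0   VALUES LESS THAN (10000000),
  PARTITION p1   VALUES LESS THAN (20000000),
  PARTITION p2   VALUES LESS THAN (30000000),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- A primary-key lookup like this only has to touch one partition:
SELECT * FROM RR WHERE rr_id = 12345678;
```

With partition pruning, a point lookup on the partitioning key searches one partition's index rather than the whole billion-row table, and old ranges can be dropped cheaply with ALTER TABLE ... DROP PARTITION, which also covers option 3 (archiving).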

I appreciate your interest and advice.

Thank You,

What evidence do you have that proper indexes on those tables are inadequate for the queries you are running?

Or are you trying to optimize a problem that doesn’t exist yet?


This. I’m trying to start off with a good design. I know these tables will grow huge with my current design, and I’m looking for insight into how experts have handled similar situations, or whether this is even an issue.