I have some data that I need to access via a SELECT query. However, to get at it I would need to do about four inner joins on an id.
It would certainly be simpler to just plug in one or two fields at the point of data entry (they are readily accessible there in the form of a PHP array), even though those fields are then repeated elsewhere in the database.
However, I worry that this means the same fields (data) will be replicated in different tables. On the other hand, a simple SELECT against one table seems a lot easier than a deep join across four tables on an id.
Is that a big no-no? Is the simple SELECT more efficient than a highly complex join?
As @benanamen says, get the concept of database normalization down before you get too far down a rabbit hole. It will save you pain later.
This is a nice straightforward explanation of database normalization, with links covering everything up to third normal form, which for MOST cases is sufficient. There are times you need to go further, but those are very limited use cases.
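To make the trade-off concrete, here is a minimal sketch of what normalization means for the situation described above. The table and column names (customers, orders, etc.) are made up for illustration, since we haven't seen your actual schema:

```sql
-- Denormalized: customer name and email are repeated on every order row,
-- so a change to one customer means updating many rows (and risking drift).
-- CREATE TABLE orders (
--     id             INT PRIMARY KEY,
--     customer_name  VARCHAR(100),
--     customer_email VARCHAR(100),
--     total          DECIMAL(10,2)
-- );

-- Normalized (3NF): customer data lives in exactly one place,
-- and orders reference it by id.
CREATE TABLE customers (
    id    INT PRIMARY KEY,
    name  VARCHAR(100),
    email VARCHAR(100)
);

CREATE TABLE orders (
    id          INT PRIMARY KEY,
    customer_id INT,
    total       DECIMAL(10,2),
    FOREIGN KEY (customer_id) REFERENCES customers (id)
);
```

The join you need at read time is the price you pay for never having to hunt down and update the same fact in four different tables.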
I’m not sure I’m understanding the question. Maybe it’s something like “Is assigning PHP values from multiple simple SELECT queries, and then using PHP to filter out the excess values, more or less efficient than assigning them from a single more complex SELECT query that leaves PHP with less work to do?”
Writing complex queries that are efficient use of the database is something that comes with practice and experience.
For example, it may be easy enough to have code for “get me these 100 things from the database, then have PHP return these 10 things from those results”. But when the number of results gets large, there will be more wasted PHP resource use. At some point it will be obviously more efficient to “get me these 10 things from the database, then have PHP return those 10 things”.
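As a sketch of that difference, with hypothetical table and column names (orders, status, created_at are assumptions, not from your schema):

```sql
-- Wasteful: pull every row back and let PHP throw most of them away.
SELECT * FROM orders;

-- Better: let the database do the filtering and sorting,
-- and return only the rows and columns you actually need.
SELECT id, total
FROM orders
WHERE status = 'shipped'
ORDER BY created_at DESC
LIMIT 10;
```

With a handful of rows the difference is invisible; with a million rows the first version transfers and loops over a million results in PHP while the second hands back ten.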
Without seeing examples of your queries or the table schemas there can be only guesses, but if the columns you join and filter on are indexed, your complex SELECT queries should run far more efficiently than if they are not. Full table scans can be slow.
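For illustration, here is roughly what an indexed four-table join looks like. The table and column names are invented for the example; substitute your own. In MySQL you can prefix any SELECT with EXPLAIN to check whether the indexes are actually being used:

```sql
-- Index the columns that appear in JOIN ... ON and WHERE clauses.
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_items_order     ON order_items (order_id);

-- A four-table join on indexed ids is normally fast,
-- because each ON clause becomes an index lookup, not a table scan.
SELECT c.name, o.id, i.qty, p.title
FROM customers   c
JOIN orders      o ON o.customer_id = c.id
JOIN order_items i ON i.order_id    = o.id
JOIN products    p ON p.id          = i.product_id;
```

If EXPLAIN shows a full scan on one of the tables, that is usually the join column missing an index rather than the join itself being too complex.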