How can I verify that chassis_row and chassis_col are not the same in any of the indexes?
If indexes 0 and 1 both have the same values for chassis_row and chassis_col, would this be the right line of thinking…
Not sure what the $asset+1 stuff is about. Maybe you are a C/C++ dev? You could loop through and record each chassis_row/chassis_col pair in an "already seen" list, then check each asset against that list to see if its pair has been found already. This assumes, of course, that the pair chassis_row 1 and chassis_col 1 could show up again at index 7, not necessarily right after the original pair.
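A minimal sketch of that "already seen" approach, using hypothetical sample data (the real array and its other fields come from the poster's source):

```php
<?php
// Hypothetical sample data; the real $assetArray comes from the poster's source.
$assetArray = [
    ['chassis_row' => 1, 'chassis_col' => 1],
    ['chassis_row' => 2, 'chassis_col' => 3],
    ['chassis_row' => 1, 'chassis_col' => 1], // same pair seen again later
];

$seen = [];   // "row:col" pair => index where it was first seen
$dupes = [];  // indexes whose pair was already recorded
foreach ($assetArray as $i => $asset) {
    $pair = $asset['chassis_row'] . ':' . $asset['chassis_col'];
    if (isset($seen[$pair])) {
        $dupes[] = $i;
    } else {
        $seen[$pair] = $i;
    }
}

print_r($dupes); // index 2 collides with the pair first seen at index 0
```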
You are basically restructuring the array to use chassis_row as the primary key, chassis_col as the secondary key, and an open [] index to hold the array values. The restructured array will look more like this.
You will notice that in the second double-foreach section, as we loop through the chassis_row and chassis_col keys, we count the array under those keys looking for a count greater than 1, if(count($newdata[$k][$k2]) > 1): (as shown in the array above), and IF one is found we place those arrays into a $collision array, shown below.
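The restructure-then-count approach described above can be sketched like this; the sample data and field values are hypothetical, only the chassis_row/chassis_col keys come from the question:

```php
<?php
// Hypothetical sample data; the actual $assetArray comes from elsewhere.
$assetArray = [
    ['id' => 'A1', 'chassis_row' => 1, 'chassis_col' => 1],
    ['id' => 'A2', 'chassis_row' => 1, 'chassis_col' => 1], // duplicate pair
    ['id' => 'A3', 'chassis_row' => 2, 'chassis_col' => 3],
];

// First pass: restructure with chassis_row as the primary key,
// chassis_col as the secondary key, and [] appending each asset.
$newdata = [];
foreach ($assetArray as $asset) {
    $newdata[$asset['chassis_row']][$asset['chassis_col']][] = $asset;
}

// Second pass: any row/col bucket holding more than one asset is a collision.
$collision = [];
foreach ($newdata as $k => $cols) {
    foreach ($cols as $k2 => $assets) {
        if (count($assets) > 1) {
            $collision[$k][$k2] = $assets;
        }
    }
}

print_r($collision);
```

Because every asset with a duplicated pair ends up in the same bucket, $collision automatically carries all colliding items, not just the second one found.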
Why loop twice? You’ve already got the mechanism to detect a collision inside the first loop (if $newdata[$assetArray[$k]['chassis_row']][$assetArray[$k]['chassis_col']] already exists, there’s a collision…)
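A sketch of that single-pass check, again with hypothetical data: when the pair already exists in $newdata, the previously stored asset is moved into $collision along with the current one, so later matches for the same pair fall through the same branch:

```php
<?php
// Hypothetical sample data for illustration.
$assetArray = [
    ['id' => 'A1', 'chassis_row' => 1, 'chassis_col' => 1],
    ['id' => 'A2', 'chassis_row' => 1, 'chassis_col' => 1],
    ['id' => 'A3', 'chassis_row' => 1, 'chassis_col' => 1], // a third match later
];

$newdata = [];
$collision = [];
foreach ($assetArray as $asset) {
    $row = $asset['chassis_row'];
    $col = $asset['chassis_col'];
    if (isset($newdata[$row][$col])) {
        // First collision for this pair: seed with the asset already stored.
        if (!isset($collision[$row][$col])) {
            $collision[$row][$col] = [$newdata[$row][$col]];
        }
        $collision[$row][$col][] = $asset;
    } else {
        $newdata[$row][$col] = $asset;
    }
}
```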
I agree with your logic, but how would you grab the values of all colliding items? I guess you could say "the one I already have is an item; THIS current item is another", but would I add it to the $newdata array? And what if there is another match found later? It would need to be handled differently, as you can't say "the one I already have is an item." There would be too many IF conditions, where a structured sort and a count can give you all items.
Where is this data coming from? If there can only be one entry per row/column combination, you should prevent the duplicate from being inserted into wherever it is stored (array, database table) in the first place.
If this data is being stored in a database table, your database design must enforce uniqueness by defining the two columns as a composite unique index. You would then just attempt to insert/update the data and detect whether the query produced a duplicate index error.
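A sketch of that unique-index approach, using an in-memory SQLite database purely for illustration; the table and column names are assumptions. (On MySQL you would declare the index with something like ALTER TABLE assets ADD UNIQUE KEY (chassis_row, chassis_col) and detect the duplicate-key error the same way.)

```php
<?php
// In-memory SQLite stands in for the real database here.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Composite unique index on the row/col pair enforces uniqueness at the
// database level, instead of checking in application code.
$pdo->exec('CREATE TABLE assets (
    id INTEGER PRIMARY KEY,
    chassis_row INTEGER NOT NULL,
    chassis_col INTEGER NOT NULL,
    UNIQUE (chassis_row, chassis_col)
)');

$insert = $pdo->prepare(
    'INSERT INTO assets (chassis_row, chassis_col) VALUES (?, ?)'
);
$insert->execute([1, 1]); // first pair: accepted

$duplicateRejected = false;
try {
    $insert->execute([1, 1]); // same pair again: the index rejects it
} catch (PDOException $e) {
    $duplicateRejected = true;
}
```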