It depends on whether you have the resources to keep the larger array in memory. Basically: do you need the array to contain only unique values while you build it (to keep it from bloating during the loop), or do you just need the final result to be an array of unique values?
For all examples, I assume you are getting the values to enter into the big array from some external source, like a MySQL query.
To prevent duplicates from being entered into the master array:
You could maintain two arrays: one holding each row imploded into a string (used only for the duplicate check), and one holding the actual rows.
while ($row = $results->fetch_assoc()) {
    $value_string = implode(",", $row);
    if (!in_array($value_string, $check_array)) {
        $check_array[] = $value_string;
        $master_array[] = $row;
    }
}
In the above, each row is flattened into a string and compared against the strings of the rows already iterated through; the row is only appended if its string version has not been seen yet. You carry the overhead of two arrays, but neither ever holds a duplicate.
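If the check array grows large, in_array() does a linear scan on every iteration. A common variation is to use the row's string form as an array key, so the duplicate check becomes a hash lookup instead; here is a minimal sketch, assuming the same $results object ($seen is just an illustrative name):

while ($row = $results->fetch_assoc()) {
    // serialize() avoids false collisions when the values themselves contain commas
    $key = serialize($row);
    // isset() on a key is a hash lookup, unlike in_array()'s linear scan
    if (!isset($seen[$key])) {
        $seen[$key] = true;
        $master_array[] = $row;
    }
}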
Or, as I'm sure has already been mentioned, there is array_unique(), which runs after all the data has been collected. Modifying the above example, you get:
while ($row = $results->fetch_assoc()) {
    $master_array[] = $row;
}
// SORT_REGULAR makes array_unique() compare the rows as arrays,
// not as strings (every sub-array would otherwise stringify to "Array")
$master_array = array_unique($master_array, SORT_REGULAR);
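Keep in mind the trade-off: with array_unique() the master array holds every duplicate until the loop finishes, so peak memory is higher, which brings you back to the resource question at the top.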