I've been going deeper into C++ recently and my bugs seem to be getting more complex.
I have a vector of objects, and each object contains a vector of floats. I decided I needed to build an additional flat array containing all the float values of all the objects in one place. It's a little more complex than that, but the gist of the problem is that as I loop through my objects extracting the float values, at some point my vector of objects is changed, or corrupted in some strange way. (My read operations are all const functions.)
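Roughly, the shape of the data is something like this (a heavily simplified sketch; the real classes have other members and different names), with the flat array meant to hold every object's floats back to back:

#include <vector>

// Heavily simplified sketch of my setup; real class names and members differ.
class Genome {
public:
    float GetFloatGene(unsigned int i) const { return genes[i]; }   // const read
    unsigned int GetLength() const { return static_cast<unsigned int>(genes.size()); }
private:
    std::vector<float> genes;
};

class Population {
public:
    const Genome &GetGenome(int i) const { return genomes[i]; }     // const read
    int GetPopSize() const { return static_cast<int>(genomes.size()); }
private:
    std::vector<Genome> genomes;
};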
Another example was with MPI. I was just getting started, so I wanted to run the exact same code on two different nodes, each with its own memory and with no data transfer happening; all very simple. To my surprise I got segmentation faults, and after hours of tracking it down I found that an assignment to one variable was setting an entirely different variable to NULL.
So I am curious: how is it possible that read operations can affect my data structures? Similarly, how can a seemingly unrelated operation affect another? I can't expect solutions to my problems from such brief descriptions, but any advice would be greatly appreciated.
Update: Here's a segment of the code. I didn't post it originally because I'm not sure how much can be gleaned from it without understanding the whole system.
One thing I just found out, though, is that when I stopped assigning the value to my flat array and just cout'ed it instead, the seg faults disappeared. So perhaps I am declaring my array wrong, but even if I am, I'm not sure how that would affect the object vector.
void xlMasterSlaveGpuEA::FillFlatGenes() {
    int stringLength = pop->GetGenome(0).GetLength();
    for (int i = 0; i < pop->GetPopSize(); i++)
        for (int j = 0; j < stringLength; j++)
            flatGenes[(i * stringLength) + j] = pop->GetGenome(i).GetFloatGene(j);
}
float xlVectorGenome::GetFloatGene(unsigned int i) const {
    return GetGene(i);
}
My flat array is a member variable:
float * flatFitness;
initialised in the constructor like so:
flatFitness = new float(popSize);
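Writing this out makes me suspicious of that new expression: as far as I understand, parentheses and square brackets mean very different things there. A minimal sketch of what I mean (not my actual code, names made up):

#include <iostream>

int main() {
    int popSize = 8;

    float *single = new float(popSize);   // ONE float, initialised to 8.0f
    float *array  = new float[popSize];   // popSize floats, uninitialised

    array[3] = 1.0f;                       // fine, within bounds
    // single[3] = 1.0f;                   // out of bounds: undefined behaviour,
    //                                     // can silently trample neighbouring
    //                                     // heap objects (e.g. a vector)

    std::cout << *single << std::endl;     // prints 8

    delete single;
    delete[] array;
    return 0;
}

If that's what's happening, it would at least explain how writing into the flat array could corrupt an apparently unrelated object living next to it on the heap.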
Update 2:
Update 2: I just want to point out that the two examples above are not related; the first one is not multi-threaded. The second MPI example technically is, but MPI uses distributed memory, and I deliberately attempted the simplest implementation I could think of: both machines running the code independently. There is, however, one extra detail; I put in a conditional saying
if node 1 then do bottom half of loop
if node 2 then do top half
Again, the memory should be isolated; they should be working as if they know nothing about each other. But removing this conditional and making both nodes loop over all the cubes eliminates the error.
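For concreteness, the conditional is roughly of this shape (a simplified sketch; the real loop body works on my cube data, and I'm using ranks 0 and 1 here just for illustration):

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which node am I?

    const int numCubes = 100;               // made-up size
    int begin = 0;
    int end = numCubes;

    if (rank == 0)
        end = numCubes / 2;                 // one node: bottom half of the loop
    else if (rank == 1)
        begin = numCubes / 2;               // other node: top half of the loop

    for (int i = begin; i < end; i++) {
        // ... work on cube i; no data is exchanged between nodes ...
    }

    MPI_Finalize();
    return 0;
}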