I have an app that iterates to create points on a graph over time. While gathering data for each point along the x-axis I also have to execute a recursive lookup, which effectively means I have a loop inside another loop. This is not scaling well. I don't see many examples of applying a "divide and conquer" approach to iterations like this. I was thinking of using Java's Executor concurrency framework to run each inner loop in its own thread, await the answers, gather the results, and return them. The initial test results I'm getting don't seem much faster. I know I should show some code, but what I want to know first is whether this approach has merit compared to better methods I may not be familiar with. Thanks in advance!
Adding some Groovy-ish/Java-ish pseudocode to assist thinking about this:
class Car {
    String id
    String model
    String make
    BigDecimal weight
}
// running totals, declared outside the loops so they accumulate across iterations
int totalCarCount = 0;
BigDecimal totalWeightOfAllCars = BigDecimal.ZERO;
Set<String> countedManufacturers = new HashSet<>(); // a ghetto cache to prevent double counting

for (number in listOfImportantCarIDs) {
    Car car = carsMap.get(number);  // find the car we care about
    String maker = car.make;        // get its 'parent'
    if (countedManufacturers.contains(maker)) {
        continue;                   // this maker's cars were already counted
    }
    countedManufacturers.add(maker);
    // tally all related cars (same maker) across the whole map
    Iterator<Car> allCars = carsMap.values().iterator();
    while (allCars.hasNext()) {
        Car aCar = allCars.next();
        if (maker.equals(aCar.make)) {
            totalCarCount++;                                                   // increment total related cars
            totalWeightOfAllCars = totalWeightOfAllCars.add(aCar.getWeight()); // add weight to the running total
        }
    }
}
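And here is roughly how I'm trying to parallelize it with an ExecutorService. This is a minimal sketch, not my real code: the names (ParallelCarStats, MakerStats, statsForMaker, gatherStats) are made up for this post, I'm assuming a fixed thread pool sized to the number of cores, and I've inlined a bare-bones Car so the snippet stands on its own. The idea is one task per distinct maker, then await the futures and gather the results:

import java.math.BigDecimal;
import java.util.*;
import java.util.concurrent.*;

public class ParallelCarStats {

    // minimal Car, mirroring the pseudocode above, just to keep this self-contained
    static class Car {
        final String id;
        final String model;
        final String make;
        final BigDecimal weight;
        Car(String id, String model, String make, BigDecimal weight) {
            this.id = id; this.model = model; this.make = make; this.weight = weight;
        }
        String getMake() { return make; }
        BigDecimal getWeight() { return weight; }
    }

    // per-maker result produced by each task (made-up holder class)
    static class MakerStats {
        final String maker;
        final int carCount;
        final BigDecimal totalWeight;
        MakerStats(String maker, int carCount, BigDecimal totalWeight) {
            this.maker = maker;
            this.carCount = carCount;
            this.totalWeight = totalWeight;
        }
    }

    // the old inner loop: count cars and sum weights for one maker
    static MakerStats statsForMaker(String maker, Map<String, Car> carsMap) {
        int count = 0;
        BigDecimal weight = BigDecimal.ZERO;
        for (Car aCar : carsMap.values()) {
            if (maker.equals(aCar.getMake())) {
                count++;
                weight = weight.add(aCar.getWeight());
            }
        }
        return new MakerStats(maker, count, weight);
    }

    static List<MakerStats> gatherStats(List<String> importantCarIds,
                                        Map<String, Car> carsMap) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        try {
            // one task per distinct maker so the same maker is never counted twice
            Set<String> makers = new LinkedHashSet<>();
            for (String id : importantCarIds) {
                makers.add(carsMap.get(id).getMake());
            }

            List<Future<MakerStats>> futures = new ArrayList<>();
            for (String maker : makers) {
                futures.add(pool.submit(() -> statsForMaker(maker, carsMap)));
            }

            // await the answers and gather the results
            List<MakerStats> results = new ArrayList<>();
            for (Future<MakerStats> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}

Each task still scans all of carsMap.values(), so the total work is the same as the sequential version; the pool only spreads it across cores, which I suspect is why I'm not seeing a big difference. That's part of what I'd like confirmed or refuted.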