I have a Perl script that parses a datafile and writes 5 output files, each filled with an 1100 x 1300 grid. The script works, but in my opinion it's clumsy and probably inefficient. It's also inherited code, which I have modified a little to make it more readable. Still, it's a mess.
At the moment, the script reads the datafile (~4 MB) into an array. It then loops through that array parsing its content and pushing values into another array, and finally prints them to a file in yet another for loop. If no value is found for a certain point, it prints 9999. Zero is an acceptable value.
The datafile contains 5 different parameters, and each one is written to its own file.
Example of data:
data for the param: 2
5559
// (x,y) count values
280 40 3 0 0 0
280 41 4 0 0 0 0
280 42 5 0 0 0 0 0
281 43 4 0 0 10 10
281 44 4 0 0 10 10
281 45 4 0 0 0 10
281 46 4 0 0 10 0
281 47 4 0 0 10 0
281 48 3 10 10 0
281 49 2 0 0
41 50 3 0 0 0
45 50 3 0 0 0
280 50 2 0 0
40 51 8 0 0 0 0 0 0 0 0
...
data for the param: 3
3356
// (x,y) count values
5559 is the number of data lines for the current parameter. A data line goes: x, y, the number of consecutive x-values for that particular point, and finally the values themselves. There is an empty line between parameters.
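For illustration only (this is not from the original script): a data line of the format described above maps directly onto a single list-assignment split in Perl.

```perl
use strict;
use warnings;

# The special ' ' pattern splits on runs of whitespace and skips
# leading whitespace, which matches this line format exactly.
my ($x, $y, $n, @values) = split ' ', '281 43 4 0 0 10 10';
# now $x == 281, $y == 43, $n == 4, @values is (0, 0, 10, 10)
```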
As I said earlier, the script works, but I feel this could be done much more easily and efficiently. I just don't know how. So here's a chance for self-improvement.
What would be a better approach to this problem than a complicated combo of arrays and for-loops?
EDIT:
Should've been more clear on this, sorry.
Output is an 1100 x 1300 grid filled with values read from the data file. Each parameter is written to a different file. More than one value on a data line means that the line has data for points x(+n), y.
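One possible simplification, sketched here under assumptions rather than as a definitive answer: parse each line straight into a hash of 2-D arrays keyed by parameter number, and fill in the 9999 sentinel only at print time. The sample input below is embedded for illustration; the real script would read the ~4 MB datafile line by line, and the grid would be 1100 x 1300 as stated above.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $MISSING = 9999;                 # sentinel for points with no data

# A tiny excerpt of the data format from the question, inlined for demo.
my $sample = <<'END';
data for the param: 2
3
// (x,y) count values
280 40 3 0 0 0
281 43 4 0 0 10 10
281 48 3 10 10 0
END

my $param;                          # parameter number of the current block
my %grid;                           # $grid{$param}[$x][$y] = value

for my $line (split /\n/, $sample) {
    next if $line =~ /^\s*$/ or $line =~ m{^//};     # blanks and comments
    if ($line =~ /^data for the param:\s*(\d+)/) {
        $param = $1;
        next;
    }
    next if $line =~ /^\d+\s*$/;    # the per-parameter line count
    my ($x, $y, $n, @values) = split ' ', $line;
    # one data line covers $n consecutive x positions at this y
    $grid{$param}[$x + $_][$y] = $values[$_] for 0 .. $n - 1;
}

print $grid{2}[283][43] // $MISSING, "\n";   # a stored value: 10
print $grid{2}[300][40] // $MISSING, "\n";   # no data here:   9999
```

Writing each parameter out is then one nested loop per file, e.g. `join ' ', map { $grid{$p}[$_][$y] // $MISSING } 0 .. 1099` per row, so no array has to be pre-initialized with 9999.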
UPDATE:
I tested the solution and, to my surprise, it was slower than the original script (by ~3 seconds). However, the new script is ~50% smaller, which makes it much easier to understand what it actually does. In this case that's more important than a 3-second speed gain.
Here is some of the code from the older script. Hopefully you'll get the basic idea from it. Why is it faster?
for my $i (0..$#indata) {                 # the data file has been read into @indata
    ...
    if ($indata[$i] =~ /^data for the param:/) {
        push @block, $i;                  # block borders, i.e. lines where each block starts
    }
    ...
}
# Then handle the data blocks
for my $k (0..4) {                        # 5 parameters
    ...
    if ($k == 4) {                        # last parameter runs to the end of the data
        $enddata = $#indata;
    }
    else {
        $enddata = $block[$k + 1];
    }
    ...
    for my $p ($block[$k]..$enddata) {    # from current block to next block
        ...
        # fill the data array; values cover $n consecutive x positions
        for my $m (0 .. $n - 1) {
            $data[$x + $m][$y] = $values[$m];
        }
    }
    print2file();
}