I have a Perl script which parses a data file and writes five output files, each filled with a 1100 x 1300 grid. The script works, but in my opinion it's clumsy and probably inefficient. It is also inherited code, which I have modified a little to make it more readable. Still, it's a mess.

At the moment, the script reads the data file (~4 MB) into an array. It then loops through that array, parsing its content and pushing values into another array, and finally prints them to a file in another for loop. If no value is found for a given point, it prints 9999. Zero is an acceptable value.

The data file contains 5 different parameters, and each of them is written to its own file.

Example of data:

data for the param: 2
5559
// (x,y) count values
280 40 3  0 0 0 
280 41 4  0 0 0 0 
280 42 5  0 0 0 0 0 
281 43 4  0 0 10 10 
281 44 4  0 0 10 10 
281 45 4  0 0 0 10 
281 46 4  0 0 10 0 
281 47 4  0 0 10 0 
281 48 3  10 10 0 
281 49 2  0 0 
41 50 3  0 0 0 
45 50 3  0 0 0 
280 50 2  0 0 
40 51 8  0 0 0 0 0 0 0 0
...

data for the param: 3
3356
// (x,y) count values

5559 is the number of data lines for the current parameter. A data line consists of x, y, the number of consecutive x-values for that particular point, and finally the values themselves. There is an empty line between parameters.

As I said earlier, the script works, but I feel this could be done much more easily and efficiently. I just don't know how. So here's a chance for self-improvement.

What would be a better approach to this problem than a complicated combination of arrays and for loops?

EDIT:

I should have been clearer about this, sorry.

The output is a 1100 x 1300 grid filled with the values read from the data file. Each parameter is written to a different file. More than one value on a data line means that the line holds data for the points (x, y), (x+1, y), ..., up to the given count.
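
For example, the line "281 43 4  0 0 10 10" from the data above fills four consecutive x cells at y = 43; with a hypothetical @grid array (the name is just for illustration):

 $grid[281][43] = 0;
 $grid[282][43] = 0;
 $grid[283][43] = 10;
 $grid[284][43] = 10;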

UPDATE:

I tested the solution, and to my surprise it was slower than the original script (by ~3 seconds). However, the script is ~50% smaller, which makes it a lot easier to understand what it actually does. In this case that's more important than a 3-second speed gain.

Here is some of the code from the older script. I hope you get the basic idea from it. Why is it faster?

for my $i (0..$#indata) {              # data file is read into @indata
  ...
  if ($indata[$i] =~ /^data for the param:/) {
    push @block, $i;                   # data borders, i.e. the lines where blocks start and end
  }
  ...
}

# Then handle the data blocks
for my $k (0..4) {                     # 5 parameters
  ...
  if ($k eq '4') {                     # last parameter
    $enddata = $#indata;
  }
  else {
    $enddata = $block[$k+1];
  }
  ...
  for my $p ($block[$k]..$enddata) {   # from current block to next block
    ...
    # fill the data array
    for (my $m = 0; $m < $n; $m++) {
      $data[$x][$y] = $values[$m];
    }
  }
  print2file();
}

A: 

Perl supports multidimensional arrays if you use references.

my $matrix = [];
$matrix->[0]->[0] = $valueAt0x0;

So you could read the entire thing in one go:

my $matrix = [];
while (my $ln = <INPUT>) {
  chomp $ln;
  my @row = split ' ', $ln;   # assuming input is separated by whitespace
  push @$matrix, \@row;
}
# here you have read the matrix; let's print it
foreach my $row (@$matrix) {
  print join(",", @{$row}) . "\n";
}
# now you have printed your matrix with "," as a separator
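
If you want the same reference-based structure, but indexed by the x and y coordinates from the data lines in the question, a minimal sketch could look like this (it assumes the "x y count values" format described above and an already-opened INPUT handle; untested):

my $matrix = [];
while (my $ln = <INPUT>) {
  my ($x, $y, $n, @vals) = split ' ', $ln;
  next unless defined $n and @vals == $n;              # keep only real data lines
  $matrix->[$x + $_][$y] = $vals[$_] for 0 .. $n - 1;  # n consecutive x cells
}

When writing the grid out, you could then print 9999 for any cell that is still undef.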

Hope this helps.

krico
A: 

Since you don't describe your desired output, it's impossible to know what to write to the files, but this does the reading part in a fairly flexible way. You could probably micro-optimize the number of regular expressions or drop the use of the implicit topic variable $_ for improved legibility. If you are willing to commit to a certain output format for each cell of the matrix before calling flush_output (such as "all values joined by commas"), then you can do away with the innermost layer of arrays and just do $matrix[$x][$y] .= ($matrix[$x][$y] ? ',' : '') . join(',', @data); or something similar and less obscure (a sketch of that variant follows the code and sample data below).

use strict;
use warnings;

my $cur_param;
my @matrix;
while (<DATA>) {
  chomp;
  s/\/\/.*$//;
  next if /^\s*$/;

  if (/^data for the param: (\d+)/) {
    flush_output($cur_param, \@matrix) if defined $cur_param;
    $cur_param = $1;
    @matrix = (); # reset
    # skip the line with number of rows, we're smarter than that
    my $tmp = <DATA>;
    next;
  }

  (my $x, my $y, undef, my @data) = split /\s+/, $_;
  $matrix[$x][$y] ||= [];
  push @{$matrix[$x][$y]}, @data;
}
flush_output($cur_param, \@matrix) if defined $cur_param; # don't forget the last block

sub flush_output {
  my $cur_param = shift;
  my $matrix = shift;
  # in reality: open file and dump
  # ... while dumping, do an ||= [9999] for the default...

  # here: simple debug output:
  use Data::Dumper;
  print "\nPARAM $cur_param\n";
  print Dumper $matrix;
}

__DATA__
data for the param: 2
5559
// (x,y) count values
280 40 3  0 0 0 
280 41 4  0 0 0 0 
280 42 5  0 0 0 0 0 
281 43 4  0 0 10 10 
281 44 4  0 0 10 10 
281 45 4  0 0 0 10 
281 46 4  0 0 10 0 
281 47 4  0 0 10 0 
281 48 3  10 10 0 
281 49 2  0 0 
41 50 3  0 0 0 
45 50 3  0 0 0 
280 50 2  0 0 
40 51 8  0 0 0 0 0 0 0 0

data for the param: 3
3356
// (x,y) count values
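
For illustration, here is a rough sketch of the flattened-string variant mentioned in the first paragraph: instead of pushing onto an inner array, each cell accumulates a comma-joined string. It would replace the three lines that build @matrix inside the while loop (untested):

  my ($x, $y, undef, @data) = split /\s+/, $_;
  if (defined $matrix[$x][$y]) {
      $matrix[$x][$y] .= ',' . join(',', @data);
  }
  else {
      $matrix[$x][$y] = join(',', @data);
  }

flush_output can then print the strings directly, substituting 9999 for cells that are still undef.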
tsee
A: 

The following fills in a sparse matrix stored in a hash of hashes. When printing, cells with undefined values get 9999. I changed the code to build each row as a string to reduce the memory footprint.

#!/usr/bin/perl

use strict; use warnings;
use YAML;

use constant GRID_X => 1100 - 1;
use constant GRID_Y => 1300 - 1;

while (my $data = <DATA> ) {
    if ( $data =~ /^data for the param: (\d)/ ) {
        process_param($1, \*DATA);
    }
}

sub process_param {
    my ($param, $fh) = @_;
    my $lines_to_read = <$fh>;
    my $lines_read = 0;

    $lines_to_read += 0;

    my %data;

    while ( my $data = <$fh> ) {
        next if $data =~ m{^//};
        last unless $data =~ /\S/;
        $lines_read += 1;

        my ($x, $y, $n, @vals) = split ' ', $data;

        for my $i ( 0 .. ($n - 1) ) {
            $data{$x + $i}{$y} = 0 + $vals[$i];
        }
    }
    if ( $lines_read != $lines_to_read ) {
        warn "read $lines_read lines, expected $lines_to_read\n";
    }

    # this is where you would open a $param specific output file
    # and write out the full matrix, instead of printing to STDOUT
    # as I have done. As an improvement, you should probably factor
    # this out to another sub (see the sketch after the sample data below).

    for my $x (0 .. GRID_X) {
        my $row;
        for my $y (0 .. GRID_Y) {
            my $v = 9999;
            if ( exists($data{$x})
                    and exists($data{$x}{$y})
                    and defined($data{$x}{$y}) ) {
                $v = $data{$x}{$y};
            }
            $row .= "$v\t";
        }
        $row =~ s/\t\z/\n/;
        print $row;
    }

    return;
}


__DATA__
data for the param: 2
5559
// (x,y) count values
280 40 3  0 0 0 
280 41 4  0 0 0 0 
280 42 5  0 0 0 0 0 
281 43 4  0 0 10 10 
281 44 4  0 0 10 10 
281 45 4  0 0 0 10 
281 46 4  0 0 10 0 
281 47 4  0 0 10 0 
281 48 3  10 10 0 
281 49 2  0 0 
41 50 3  0 0 0 
45 50 3  0 0 0 
280 50 2  0 0 
40 51 8  0 0 0 0 0 0 0 0
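
As the comment in process_param suggests, the grid output could be factored out into its own sub that writes to a parameter-specific file. A rough sketch (the param_N.dat file name is just an example):

sub write_grid {
    my ($param, $data) = @_;

    open my $out, '>', "param_$param.dat"
        or die "Cannot open param_$param.dat: $!";

    for my $x (0 .. GRID_X) {
        my @row;
        for my $y (0 .. GRID_Y) {
            my $v = $data->{$x}{$y};
            push @row, defined $v ? $v : 9999;   # 9999 marks missing cells
        }
        print $out join("\t", @row), "\n";
    }

    close $out or die "Cannot close param_$param.dat: $!";
    return;
}

process_param would then end with write_grid($param, \%data); instead of the inline print loop.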
Sinan Ünür
Thanks, this is a nice and simple solution. The present script loops through the array finding the data boundaries for each parameter, and the grid is then filled for each parameter in a for loop based on those boundaries. That's a lot of looping! The writing to file is almost the same as in your solution.
Veivi