views:

607

answers:

5

Hi all,

I have queries residing in multiple methods, each of which can contain multiple parameters. I am trying to reduce file size and line count to make the code more maintainable. Below is one such occurrence:

$sql_update = qq { UPDATE database.table
                    SET column = 'UPDATE!'
                   WHERE id = ?
                };

$sth_update = $dbh->prepare($sql_update);

if ($dbh->err) {
    my $error = "Could not prepare statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
    print "$error\n";
    die;
}

$sth_update->execute($parameter);

if ($dbh->err) {
    my $error = "Could not execute statement. Error: ". $dbh->errstr ." Exiting at line " . __LINE__;
    print "$error\n";
    die;
}

This is just one example; there are various other SELECT statements that take just one parameter, and some that take two or more. I guess I am just wondering whether it would be possible to encapsulate this all into a function/method and pass in an array of parameters. How would the parameters be populated into the execute() call?

If this were possible, I could write a method that you simply pass the SQL query and parameters to, and get back a reference to the fetched records. Does this sound safe at all?

All opinions and thoughts are much appreciated.

Regards, BorisTheBulletDodgingDodger

---------------------------- [RESPONSE TO BUBAKER] ----------------------------------- Sorry I mis-interpreted what the function was doing. Everything is okay now!

+5  A: 

If line count and maintainable code are your only goals, your best bet would be to use any one of the several fine ORM frameworks/libraries available. Class::DBI and DBIx::Class are two fine starting points. In case you are worried about spending additional time learning these modules, don't be: it took me just one afternoon to get started and become productive. Using Class::DBI, for example, your update becomes a single line:

Table->retrieve(id => $parameter)->column('UPDATE!')->update;

The only downside (if that) of these frameworks is that very complicated SQL statements require writing custom methods, and learning to do that may take you some additional time (not too much).

Gurunandan
This sounds very beneficial. Does the Class::DBI module come with core Perl? (for Unix)
No. You will have to install it. Typing "install Class::DBI" and responding with a 'y' to all dependencies inside a CPAN shell will install it for you.
Gurunandan
+2  A: 

The "execute" function does accept an array containing all your parameters.

You just have to find a way to indicate which statement handle you want to execute and you're done ...

It would be much better to keep your statement handles somewhere, because if you create and prepare a new one each time, you don't really reap the benefits of prepare() ...

About returning all rows: you can do that (something like a while loop around fetchrow_hashref, pushing onto an array), but beware of large result sets that could eat all your memory!
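To sketch those two points without a live database (all names here are hypothetical, and the cache stands in for real statement handles): execute() takes a flat list, so any number of placeholders is just a longer argument list, and keeping prepared handles in a hash keyed by SQL text avoids re-preparing.

```perl
use strict;
use warnings;

my %sth_cache;      # SQL text => cached "statement handle"
my $prepares = 0;   # counts how often we actually "prepare"

# Stand-in for $dbh->prepare($sql) with caching; a real version would
# store the handle returned by prepare().
sub cached_handle {
    my ($sql) = @_;
    unless (exists $sth_cache{$sql}) {
        $prepares++;
        $sth_cache{$sql} = { sql => $sql };
    }
    return $sth_cache{$sql};
}

# Stand-in for $sth->execute(@params): the list flattens into @_,
# one value per ? placeholder, however many there are.
sub run {
    my ($sth, @params) = @_;
    return scalar @params;
}

my $sth = cached_handle('SELECT name FROM t WHERE id = ?');
run($sth, 42);                                              # one parameter
cached_handle('SELECT name FROM t WHERE id = ?');           # cache hit, no re-prepare
run(cached_handle('UPDATE t SET c = ? WHERE id = ?'), 'x', 7);   # two parameters
```

The same shape works with real DBI: key the hash by a query name, store the result of prepare(), and call execute(@params) on the cached handle.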

siukurnin
I see what you're saying - I will have to explore this a small bit. All my statements mimic the code I have specified above; the only things that vary are the parameters and the statement itself.
+2  A: 

Here's a simple approach using closures/anonymous subs stored in a hash by keyword name (compiles, but not otherwise tested), edited to include use of RaiseError:

use strict;
use warnings;
use DBI;

# define cached SQL in hash, to access by keyword
#
sub genCachedSQL {
  my $dbh = shift;
  my $sqls = shift;  # hashref for keyword => sql query
  my %SQL_CACHE;
  while (my($name,$sql) = each %$sqls) {
     my $sth = $dbh->prepare($sql);
     $SQL_CACHE{$name}->{sth} = $sth;

     $SQL_CACHE{$name}->{exec} = sub {  # closure for execute(s)
        my @parameters = @_;
        $SQL_CACHE{$name}->{sth}->execute(@parameters);

        return sub {  # closure for resultset iterator  - check for undef
           my $row; eval { $row = $SQL_CACHE{$name}->{sth}->fetchrow_arrayref(); };
           return $row;
        }  # end resultset closure

     };  # end exec closure

  }  # end while each %$sqls

  return \%SQL_CACHE;

}  # end genCachedSQL


my $dbh = DBI->connect('dbi:...', undef, undef, { RaiseError => 1 });  # attribute hash is the 4th argument

# initialize cached SQL statements
#
my $sqlrun = genCachedSQL($dbh,
 {'insert_table1' => qq{ INSERT INTO database.table1 (id, column) VALUES (?,?) },
  'update_table1' => qq{ UPDATE database.table1 SET column = 'UPDATE!' WHERE id = ? },
  'select_table1' => qq{ SELECT column FROM database.table1 WHERE id = ? }});

# use cached SQL
#
my $colid1 = 1;
$sqlrun->{'insert_table1'}->{exec}->($colid1,"ORIGINAL");
$sqlrun->{'update_table1'}->{exec}->($colid1);
my $result = $sqlrun->{'select_table1'}->{exec}->($colid1);
while (my $row = $result->()) { print join("\t", @$row), "\n" }

my $colid2 = 2;
$sqlrun->{'insert_table1'}->{exec}->($colid2,"ORIGINAL");

# ...
bubaker
This is excellent. I will try implementing this solution a little later on and let you know how I get on.
Glad to contribute - definitely make sure to test how the eval behaves for bad execute test cases
bubaker
Quick question, Bubaker: this query executed a fetchall_arrayref(), which results in fetching ALL the records. That is problematic, as my query returns in excess of 300,000 records with an average of around 9 columns. Is there a way to make this utilize fetchrow_arrayref() or fetchrow_hashref() to iterate through the individual records?
Got it sorted: I simply passed back a reference to the sth (my $data; eval { $data = $SQL_CACHE{$name}->{sth} };) and everything works fine!
Boris - you already have the sth_ref in $sqlrun->{'select_table1'}->{sth}, but you're right that all results rows get returned. For maintainability, I'd rewrite the exec closure to return a results iterator (another closure).
bubaker
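The iterator idea mentioned above can be sketched without DBI at all (names made up for illustration): the caller gets back a sub that yields one row per call and undef when exhausted, mirroring repeated fetchrow_arrayref() calls. In real use the closure would fetch from the statement handle, so only one row sits in memory at a time rather than 300,000.

```perl
use strict;
use warnings;

# Returns a closure over the remaining rows; each call yields the next
# row (an arrayref), then undef once the rows run out.
sub make_iterator {
    my @rows = @_;               # stand-in for rows still on the server
    return sub { shift @rows };  # in real DBI: sub { $sth->fetchrow_arrayref }
}

my $next_row = make_iterator([1, 'alpha'], [2, 'beta']);
my @seen;
while (my $row = $next_row->()) {
    push @seen, $row->[1];       # process one row at a time
}
# @seen now holds ('alpha', 'beta')
```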
+1  A: 

I'm very impressed with bubaker's example of using a closure for this.

Just the same, if the original goal was to make the code base smaller and more maintainable, I can't help thinking there's a lot of noise begging to be removed from the original code before anyone embarks on a conversion to CDBI or DBIC etc. (notwithstanding the great libraries they both are).

If the $dbh had been instantiated with RaiseError set in the attributes, most of that code goes away:

$sql_update = qq { UPDATE database.table
                   SET column = 'UPDATE!'
                   WHERE id = ?
              };
$sth_update = $dbh->prepare($sql_update);
$sth_update->execute($parameter);

I can't see that the error handling in the original code is adding much that you wouldn't get from the vanilla die produced by RaiseError, but if it's important, have a look at the HandleError attribute in the DBI manpage.
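As a rough, DBI-free illustration of what a HandleError-style callback buys you (all names here are hypothetical): one callback sees every failure, replacing the if ($dbh->err) block after each call.

```perl
use strict;
use warnings;

my @error_log;

# Central error hook: log the message, then return false so the error
# would still be raised (DBI's HandleError works the same way: a false
# return lets RaiseError/PrintError proceed).
my $handle_error = sub {
    my ($msg) = @_;
    push @error_log, $msg;
    return 0;
};

# Stand-in for a DBI call that fails and routes through the handler.
sub failing_execute {
    my ($handler) = @_;
    $handler->('Could not execute statement: duplicate key');
}

failing_execute($handle_error);
```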

Furthermore, if such statements aren't being reused (which is often the main purpose of preparing them, to cache how they're optimised; the other reason is to guard against SQL injection by using placeholders), then why not use do()?

$dbh->do($sql_update, \%attrs, @parameters);
RET
+4  A: 

No sense in checking for errors after every single database call. How tedious!

Instead, when you connect to the database, set the RaiseError option to true. Then if a database error occurs, an exception will be thrown. If you do not catch it (in an eval{} block), your program will die with a message, similar to what you were doing manually above.
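The control flow RaiseError enables can be shown without DBI (the die message is made up): a failed call throws, and an eval {} block is where you choose to catch it.

```perl
use strict;
use warnings;

# Stand-in for a DBI call under RaiseError: failure die()s instead of
# setting an error flag you must remember to check.
sub execute_or_die {
    my ($should_fail) = @_;
    die "execute failed: duplicate key\n" if $should_fail;
    return 1;
}

my $caught = '';
eval {
    execute_or_die(1);   # this throws...
    1;
} or $caught = $@;       # ...and $@ carries the message

# $caught now contains "execute failed: duplicate key"
```

Uncaught, the die simply kills the program with that message, which is what the manual if ($dbh->err) { ... die } blocks were reimplementing by hand.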