views: 165
answers: 2

Hi all,

Please excuse my ignorance of Perl, as I'm a .NET developer (I can hear you all spitting and cursing already ;¬) )

We have a requirement to insert a BLOB (+ some other data) into an MS-SQL DB using a Perl script (our backend systems run on AIX using Perl scripts to go about their daily business).

Our Perl guy apparently ran into a problem: the largest BLOB he could insert into the MS-SQL server (the one us pesky .NETers use) was 64 KB, but we need the Perl scripts to insert BLOBs of 2 MB - 6 MB.

If it is possible to insert BLOBs over 64 KB, could anyone suggest an avenue of investigation that our Perl guy could pursue in order to achieve this?

Sorry for asking a question 'via proxy' but our Perl guru's on holiday.

Thanks in advance,

JD

+2  A: 

If this is DBI, you can set the maximum buffer of your database handle to your desired limit:

$dbh->{LongReadLen} = 10485760;  # 10 MB

From the DBI FAQ:

How do I handle BLOB data with DBI?

Handling BLOB data with the DBI is very straight-forward. BLOB columns are specified in a SELECT statement as per normal columns. However, you also need to specify a maximum BLOB size that the database handle can fetch using the LongReadLen attribute.

For example:

### $dbh is a connected database handle
$sth = $dbh->prepare( "SELECT blob_column FROM blobby_table" );
$sth->execute;

would fail, whereas:

### $dbh is a connected database handle
### Set the maximum BLOB size...
$dbh->{LongReadLen} = 16384;        ### 16KB...Not much of a BLOB!
$sth = $dbh->prepare( "..." );

would succeed provided no column values were larger than the specified value.

If the BLOB data is longer than the value of LongReadLen, then an error will occur. However, the DBI provides an additional piece of functionality that will automatically truncate the fetched BLOB to the size of LongReadLen if it is longer. This does not cause an error to occur, but may make your fetched BLOB data useless.

This behaviour is regulated by the LongTruncOk attribute, which defaults to a false value (thus making overlong BLOB fetches fail).

### Set BLOB handling such that it's 16Kb and can be truncated
$dbh->{LongReadLen} = 16384;
$dbh->{LongTruncOk} = 1;

Truncation of BLOB data may not be a big deal in cases where the BLOB contains run-length encoded data, but data containing checksums at the end, for example, a ZIP file, would be rendered useless.
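Putting the attributes above together, a minimal fetch-side sketch might look like this. The DSN, credentials, table, and column names are made up for illustration; adjust the driver (e.g. DBD::ODBC or DBD::Sybase/FreeTDS) to match your environment:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN and credentials -- substitute your own.
my $dbh = DBI->connect( 'dbi:ODBC:MyMSSQL', 'user', 'password',
    { RaiseError => 1, AutoCommit => 1 } );

# Allow fetches of BLOBs up to 10 MB, and fail loudly (rather than
# silently truncate) if a value is larger than that.
$dbh->{LongReadLen} = 10 * 1024 * 1024;
$dbh->{LongTruncOk} = 0;

my $sth = $dbh->prepare('SELECT blob_column FROM blobby_table');
$sth->execute;
while ( my ($blob) = $sth->fetchrow_array ) {
    printf "fetched %d bytes\n", length $blob;
}
$dbh->disconnect;
```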

Zaid
Thanks Zaid. Does the same process (setting LongReadLen) also affect the maximum size of a BLOB for an insert?
jdoig
@jdoig : Good point. `LongReadLen` won't affect insertions. Only selections. To quote the documentation : `...there are often limits on the maximum size of an INSERT statement and the quote method generally can't cope with binary data.` I'd say your best bet is to manually change the database settings if possible.
Zaid
Thanks Zaid, the DB can handle the BLOB as we read and write them to it from our .NET code. The problem may be down to the connection method (I have no idea about the DBI API) or just some error in the script.
jdoig
@jdoig : See if you can increase the size of your `INSERT` buffer through the DB settings.
Zaid
A: 

To insert a BLOB, you usually need to use placeholders. Prepare your statement with a placeholder, then bind the value you want to insert as follows:

$statement->bind_param(??, $myvalue, DBI::SQL_BLOB);

Where ?? is the position of the placeholder, starting at 1. Then you execute the statement as usual, e.g.:

$statement->execute();

As always, error handling is a good plan. We use this routinely with SQLite, but I'm pretty sure this is also correct for MSSQL and MySQL.
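For the original question (inserting 2-6 MB BLOBs into MS-SQL), a hedged sketch along these lines may help. With some MS-SQL drivers, binding the parameter with an explicit long-binary type such as `SQL_LONGVARBINARY` is what steers the driver into long-data handling; and if the connection goes through FreeTDS, the `text size` setting in freetds.conf (which defaults to roughly 64 KB) may also need raising. The table, column, and file names here are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI qw(:sql_types);

# Hypothetical DSN and credentials -- substitute your own.
my $dbh = DBI->connect( 'dbi:ODBC:MyMSSQL', 'user', 'password',
    { RaiseError => 1 } );

# Read the file to insert in binary mode (no newline translation).
open my $fh, '<:raw', 'payload.zip' or die "open: $!";
my $blob = do { local $/; <$fh> };
close $fh;

my $sth = $dbh->prepare(
    'INSERT INTO blobby_table (name, blob_column) VALUES (?, ?)' );
$sth->bind_param( 1, 'payload.zip' );
# An explicit long-binary type hints the driver to stream the value
# rather than squeeze it into a fixed-size parameter buffer.
$sth->bind_param( 2, $blob, SQL_LONGVARBINARY );
$sth->execute;
$dbh->disconnect;
```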

morungos