
Hello,

The shell script needs to read a batch of lines (let's say the first 150) and store those values in a variable. After the SQL executes, it should then read lines 151 to 300. In this way it will keep reading lines in batches of 150 and storing them in a variable until all 32000 lines have been read.

The script looks like this:

    #!/bin/bash
    t2val=$(cat /home/trnid | tr '\n' ',' | sed 's/,$//')
    sqlplus -s <username>/<passwd>@dbname > /home/file << EOF
    set echo on
    set pagesize 0
    set verify off
    set lines 32000
    set trimspool on
    set feedback off
    SELECT *
      FROM <dbname>.<tablename1> tr
      LEFT JOIN <tablename2> t2 ON t2.id2 = tr.id1
      LEFT JOIN <tablename3> t3 ON t3.id2 = tr.id1
      LEFT JOIN <tablename4> t4 ON t4.id2 = tr.id1
     WHERE tr.TIMESTAMP > SYSDATE - 75 / 1440
       AND tr.TIMESTAMP <= SYSDATE - 15 / 1440
       AND t2.value IN ($t2val)
     ORDER BY timestamp;
    exit;
    EOF

trnid has 32000 lines, each containing an 11-digit number, so the variable t2val ends up holding 32000 comma-separated values. The script doesn't run with a value that large, so I need to keep t2val under 2499 characters, which works out to around 150 entries from the trnid file at a time. I want to pick them in batches of 150, put them into t2val (comma-separated), and repeat until I reach the 32000th line. If you have a better idea for keeping t2val under 2499 characters, please advise, or help me with the approach I described (executing the SQL with batches of 150 entries).

+1  A: 

Option 1)

Use split to create a set of temporary files from /home/trnid, then iterate over those temporary files. (You don't need cat at all; you can simply redirect each file into tr.) A sketch follows.
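
A minimal sketch of that approach; the scratch directory and chunk prefix are made up for illustration, and the heredoc is abbreviated since it would carry the same settings and SELECT as the question's script:

    #!/bin/bash
    # split the 32000-line file into 150-line chunks: chunk_aa, chunk_ab, ...
    mkdir -p /tmp/trnid.d
    split -l 150 /home/trnid /tmp/trnid.d/chunk_

    > /home/file    # truncate the output file once, then append per batch
    for chunk in /tmp/trnid.d/chunk_*; do
        # no cat needed: redirect the chunk file straight into tr
        t2val=$(tr '\n' ',' < "$chunk" | sed 's/,$//')
        sqlplus -s <username>/<passwd>@dbname >> /home/file << EOF
    -- same settings and query as in the question, ending with:
    and t2.value in ($t2val)
    order by timestamp;
    exit;
    EOF
    done
    rm -r /tmp/trnid.d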

Option 2)

Use head and tail to pull successive sections of lines from the file in turn, instead of using cat. For example:
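
A sketch of that variant; the sqlplus call is elided since it would be the same heredoc as in the question:

    #!/bin/bash
    total=$(wc -l < /home/trnid)    # 32000 lines in the question
    batch=150
    for (( start = 1; start <= total; start += batch )); do
        # lines start .. start+batch-1, i.e. 1-150, then 151-300, ...
        t2val=$(head -n $(( start + batch - 1 )) /home/trnid \
                | tail -n +"$start" | tr '\n' ',' | sed 's/,$//')
        # run the same sqlplus heredoc as in the question with $t2val
    done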

Option 3)

Rewrite in a proper language to avoid the overhead of all those extra processes.

Douglas Leeder
+2  A: 

It looks like you're storing those values in an Oracle DB. Why not use the bulk loader?
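
One way to apply that idea: bulk-load the IDs into a one-column staging table with SQL*Loader, then replace the giant IN list with a subquery. The table name trnid_stage and the control-file path are assumptions for illustration:

    #!/bin/bash
    # minimal SQL*Loader control file for a staging table created beforehand:
    #   CREATE TABLE trnid_stage (id NUMBER(11));
    cat > /tmp/trnid.ctl << 'EOF'
    LOAD DATA
    INFILE '/home/trnid'
    TRUNCATE
    INTO TABLE trnid_stage
    FIELDS TERMINATED BY ','
    (id)
    EOF
    sqlldr userid=<username>/<passwd>@dbname control=/tmp/trnid.ctl

    # the query then needs no literal list at all:
    #   and t2.value in (select id from trnid_stage)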

Another solution would be to use Perl.

Sketch, without much error handling:

open(my $fh, '<', 'filename') or die "cannot open: $!";
my $count  = 0;
my @buffer = ();
while (<$fh>) {
    chomp;
    push @buffer, $_;            # collect one ID per line
    $count++;
    if ($count % 150 == 0) {
        saveToDB(\@buffer);      # flush a full batch of 150
        @buffer = ();
    }
}
saveToDB(\@buffer) if @buffer;   # flush the leftover partial batch
close($fh);

where saveToDB can easily store the values in the DB, e.g. via DBI.

ovm