Hi there,

I have a csv DictReader object (using Python 3.1), but I would like to know the number of lines/rows contained in the reader before I iterate through it. Something like the following...

myreader = csv.DictReader(open('myFile.csv', newline=''))

totalrows = ?

rowcount = 0
for row in myreader:
    rowcount += 1
    print("Row %d/%d" % (rowcount,totalrows))

I know I could get the total by iterating through the reader, but then I couldn't run the 'for' loop. I could iterate through a copy of the reader, but I cannot find how to copy an iterator.

I could also use

totalrows = len(open('myFile.csv').readlines())

but that seems an unnecessary re-opening of the file. I would rather get the count from the DictReader if possible.

Any help would be appreciated.

Alan

+5  A: 
rows = list(myreader)
totalrows = len(rows)
for i, row in enumerate(rows):
    print("Row %d/%d" % (i+1, totalrows))
J.F. Sebastian
Nice solution - I'm quite new to the idea of iterators, so I hadn't really appreciated enumerate() until now. Regards.
Alan Harris-Reid
Just be careful with your data set size here. Turning your reader into a list could take GOBS of memory.
Nick Bastin
+2  A: 

I cannot find how to copy an iterator.

The closest thing is itertools.tee, but simply making a list, as @J.F.Sebastian suggests, is best here, as the itertools.tee docs explain:

This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator uses most or all of the data before another iterator starts, it is faster to use list() instead of tee().
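For completeness, here is a rough sketch of what the tee approach would look like; io.StringIO stands in for the real file, and since the first tee'd iterator is exhausted before the second starts, tee ends up buffering every row, which is exactly the case the docs say list() handles better:

```python
import csv
import io
import itertools

# io.StringIO stands in for open('myFile.csv', newline=''); a real file works the same.
f = io.StringIO("name,age\nAlice,30\nBob,25\n")

counter, myreader = itertools.tee(csv.DictReader(f))
totalrows = sum(1 for _ in counter)  # exhausts counter; tee buffers every row meanwhile
for i, row in enumerate(myreader, 1):
    print("Row %d/%d" % (i, totalrows))
```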

Alex Martelli
You still have the potentially massive resource consumption with either method.
Nick Bastin
Thanks Alex - list it is then.
Alan Harris-Reid
+4  A: 

You only need to open the file once:

import csv

f = open('myFile.csv', newline='')  # Python 3: text mode with newline='' for csv

countrdr = csv.DictReader(f)
totalrows = 0
for row in countrdr:
    totalrows += 1

f.seek(0)  # rewind; DictReader won't do this for you

myreader = csv.DictReader(f)
for row in myreader:
    pass  # do_work: process each row here

No matter what you do, you have to make two passes (well, if your records were a fixed length - which is unlikely - you could just get the file size and divide, but let's presume that isn't the case). Opening the file again really doesn't cost you much, but you can avoid it as illustrated here. Converting to a list just to use len() is potentially going to waste tons of memory, and won't be any faster.

Note: The 'Pythonic' way is to use enumerate instead of +=, but the UNPACK_TUPLE opcode is so expensive that it makes enumerate slower than incrementing a local. That being said, it's likely an unnecessary micro-optimization that you should probably avoid.

More Notes: If you really just want to generate some kind of progress indicator, it doesn't necessarily have to be record based. You can tell() on the file object in the loop and just report what % of the data you're through. It'll be a little uneven, but chances are on any file that's large enough to warrant a progress bar the deviation on record length will be lost in the noise.
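One wrinkle with that idea: in Python 3, calling tell() on a text-mode file while iterating it raises OSError ("telling position disabled by next() call"), so a byte-counting wrapper around the line iterator is one way to get the same effect. A rough sketch, with io.StringIO standing in for the real file (with an actual file you would get the total from os.path.getsize):

```python
import csv
import io

# Stand-in for a real file; with a file, use total = os.path.getsize('myFile.csv').
data = "name,age\nAlice,30\nBob,25\n"
total = len(data.encode('utf-8'))
seen = [0]  # bytes consumed so far (a list so the generator can update it)

def tracked(f):
    # Yield lines unchanged, tallying approximate bytes consumed as we go.
    for line in f:
        seen[0] += len(line.encode('utf-8'))
        yield line

for row in csv.DictReader(tracked(io.StringIO(data))):
    print("%.0f%% of the file read" % (100 * seen[0] / total))
```

As the answer notes, the percentage moves a record at a time rather than perfectly smoothly, but for any file big enough to need a progress bar that unevenness disappears into the noise.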

Nick Bastin
Nick - thanks for the reply. Looks like my avoidance of re-opening the file is not worth the extra code involved (readability comes above performance in this case). Thanks for the tip regarding enumerate() speed. Tell() is also new to me - I'll look into it further. Regards.
Alan Harris-Reid