I am writing a Python script that does some processing on text files. As part of that process, I need to import each line of a tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do the insert itself. However, I have a question about the best way to execute it.
I will be looping through the file, calling cursor.execute(myInsertSQL) for each line. Does anyone see any problem with waiting to commit until all records have been processed (i.e., calling commit() once after the loop rather than after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to "save them up" and committing all 5000 at once would cause problems.
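Here is roughly what I have in mind (a minimal sketch; the connection string, file name, table, and column names are placeholders for my real ones):

    import pyodbc

    # Placeholder connection details -- my real script uses its own connection string.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    with open("data.txt", "r") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            # One parameterized INSERT per line of the file.
            cursor.execute(
                "INSERT INTO my_table (col1, col2, col3) VALUES (?, ?, ?)",
                fields[0], fields[1], fields[2],
            )

    # Single commit after the loop -- this is the part I'm unsure about.
    conn.commit()
    conn.close()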
I am fairly new to Python, so I don't know all of these issues yet.
Thanks.