Say the source data comes in Excel format; here is how I currently import it:
1. Convert the file to CSV format via MS Excel.
2. Roughly find bad rows/columns by inspecting the CSV.
3. Back up the table that needs to be updated, in SQL Query Analyzer.
4. Truncate the table (I may need to drop a foreign key constraint as well).
5. Import the data from the revised CSV file in SQL Server Enterprise Manager (roughly the sequence in the sketch after this list).
6. If there is an error such as duplicate columns, I have to check the original CSV, remove them, and start over.
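For steps 3 to 5, this is roughly what I picture running in Query Analyzer instead of clicking through Enterprise Manager. It is only a sketch: `dbo.MyTable`, the backup and constraint names, and the file path are placeholders for my real objects.

```sql
-- Back up the current table before touching it (SELECT ... INTO creates the copy).
SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable;

-- TRUNCATE fails while another table references this one, so the foreign key
-- may have to be dropped first (constraint and child table names are made up).
-- ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_MyTable;

TRUNCATE TABLE dbo.MyTable;

-- Load the revised CSV directly, without going through Enterprise Manager.
BULK INSERT dbo.MyTable
FROM 'C:\data\revised.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2          -- skip the header row
);

-- Re-create the foreign key afterwards if it was dropped.
-- ALTER TABLE dbo.ChildTable ADD CONSTRAINT FK_ChildTable_MyTable
--     FOREIGN KEY (MyTableID) REFERENCES dbo.MyTable (MyTableID);
```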
I was wondering how to make this procedure more efficient at every step. I have some ideas (like the T-SQL sketch above), but they are not complete. For steps 2 and 6, could a script check the file automatically and print out all the bad row/column data, so that every error can be removed in one pass? A rough sketch of that idea follows below. For steps 3 and 5, is there a way to update the table automatically, without manually going through the import steps?
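This is the kind of check I have in mind for the bad-row/duplicate problem: load the raw CSV into an all-VARCHAR staging table first, then report every offending row in one query instead of fixing them one import failure at a time. The staging table, the column names, and the ISNUMERIC/NULL checks are placeholders for whatever rules the real data actually needs.

```sql
-- Staging table: everything as VARCHAR so bad values cannot abort the load.
CREATE TABLE dbo.MyTable_staging (
    Col1 VARCHAR(255),
    Col2 VARCHAR(255),
    Col3 VARCHAR(255)
);

BULK INSERT dbo.MyTable_staging
FROM 'C:\data\revised.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Print every duplicate key value in one pass.
SELECT Col1, COUNT(*) AS duplicate_count
FROM dbo.MyTable_staging
GROUP BY Col1
HAVING COUNT(*) > 1;

-- Print rows that would fail conversion or violate NOT NULL on the real table.
SELECT *
FROM dbo.MyTable_staging
WHERE ISNUMERIC(Col2) = 0
   OR Col3 IS NULL;

-- Once the report is clean, move the data into the real table.
-- INSERT INTO dbo.MyTable (Col1, Col2, Col3)
-- SELECT Col1, CAST(Col2 AS INT), Col3
-- FROM dbo.MyTable_staging;
```

If this staging approach makes sense, it would presumably also replace the manual removal in step 6, since the bad rows are reported before the real table is ever touched.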
Could the community advise, please? Thanks.