I have a DataSet populated from an Excel sheet. I want to use SqlBulkCopy to insert the records into the Lead_Hdr table, where LeadId is the primary key.

I get the following error while executing the code below: "The given ColumnMapping does not match up with any column in the source or destination."

 
    string ConStr = ConfigurationManager.ConnectionStrings["ConStr"].ToString();
    using (SqlBulkCopy s = new SqlBulkCopy(ConStr, SqlBulkCopyOptions.KeepIdentity))
    {
        if (MySql.State == ConnectionState.Closed)
        {
            MySql.Open();
        }

        s.DestinationTableName = "PCRM_Lead_Hdr";
        s.NotifyAfter = 10000;

        s.ColumnMappings.Clear();
        #region ColumnMapping
        s.ColumnMappings.Add("ClientID", "ClientID");
        s.ColumnMappings.Add("LeadID", "LeadID");
        s.ColumnMappings.Add("Company_Name", "Company_Name");
        s.ColumnMappings.Add("Website", "Website");
        s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
        s.ColumnMappings.Add("Revenue", "Revenue");
        s.ColumnMappings.Add("Address", "Address");
        s.ColumnMappings.Add("City", "City");
        s.ColumnMappings.Add("State", "State");
        s.ColumnMappings.Add("ZipCode", "ZipCode");
        s.ColumnMappings.Add("CountryId", "CountryId");
        s.ColumnMappings.Add("Phone", "Phone");
        s.ColumnMappings.Add("Fax", "Fax");
        s.ColumnMappings.Add("TimeZone", "TimeZone");
        s.ColumnMappings.Add("SicNo", "SicNo");
        s.ColumnMappings.Add("SicDesc", "SicDesc");
        s.ColumnMappings.Add("SourceID", "SourceID");
        s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
        s.ColumnMappings.Add("BasketID", "BasketID");
        s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");
        s.ColumnMappings.Add("SurveyId", "SurveyId");
        s.ColumnMappings.Add("NextCallDate", "NextCallDate");
        s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
        s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
        s.ColumnMappings.Add("AssignedDate", "AssignedDate");
        s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
        s.ColumnMappings.Add("Remove", "Remove");
        s.ColumnMappings.Add("Release", "Release");
        s.ColumnMappings.Add("Insert_Date", "Insert_Date");
        s.ColumnMappings.Add("Insert_By", "Insert_By");
        s.ColumnMappings.Add("Updated_Date", "Updated_Date");
        s.ColumnMappings.Add("Updated_By", "Updated_By");
        #endregion

        s.WriteToServer(sourceTable);
        s.Close();
    }

    MySql.Close();

+1  A: 

Well, is it right? Do the column names exist on both sides?

To be honest, I've never bothered with mappings. I like to keep things simple: I tend to have a staging table on the server that looks like the input, SqlBulkCopy into the staging table, and then run a stored procedure to move the data from the staging table into the actual table. Advantages:

  • no issues with live data corruption if the import fails at any point
  • I can put a transaction just around the SPROC
  • I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
  • it is simple ;-p (no messing with mappings)
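A minimal sketch of that pattern in C#, assuming a staging table and stored procedure (`Staging_Lead_Hdr` and `usp_MergeLeadHdr` are hypothetical names; your schema will differ):

```csharp
using System.Data;
using System.Data.SqlClient;

static void ImportViaStaging(string conStr, DataTable sourceTable)
{
    using (var conn = new SqlConnection(conStr))
    {
        conn.Open();

        // 1) Clear the staging table so stale rows from a previous
        //    run can't leak into this import.
        using (var truncate = new SqlCommand(
            "TRUNCATE TABLE Staging_Lead_Hdr", conn))
        {
            truncate.ExecuteNonQuery();
        }

        // 2) Bulk copy into the staging table, which is shaped exactly
        //    like the input, so no column mappings are needed.
        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "Staging_Lead_Hdr";
            bulk.WriteToServer(sourceTable);
        }

        // 3) Move staging -> live inside the stored procedure, which can
        //    wrap its own transaction; if the bulk copy fails earlier,
        //    the live table is never touched.
        using (var merge = new SqlCommand("usp_MergeLeadHdr", conn))
        {
            merge.CommandType = CommandType.StoredProcedure;
            merge.ExecuteNonQuery();
        }
    }
}
```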

As a final thought: if you are dealing with bulk data, you can get better throughput using an IDataReader (since it is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for a SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.
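A sketch of that hook-up, assuming the LumenWorks CsvReader package (which implements IDataReader); the file path and table name are illustrative:

```csharp
using System.Data.SqlClient;
using System.IO;
using LumenWorks.Framework.IO.Csv; // the CsvReader mentioned above (assumed package)

static void BulkLoadCsv(string conStr, string csvPath)
{
    // CsvReader implements IDataReader, so rows stream straight through
    // SqlBulkCopy instead of being buffered in a DataTable first.
    using (var csv = new CsvReader(new StreamReader(csvPath), true)) // true = file has headers
    using (var bulk = new SqlBulkCopy(conStr))
    {
        bulk.DestinationTableName = "PCRM_Lead_Hdr"; // or a staging table
        bulk.WriteToServer(csv);
    }
}
```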

Marc Gravell
Well, I'm new to this "staging" term. Can you please explain it a bit?
A staging table is a table that you dump your data into. Often, it doesn't have an index so that writing is faster. Dump into a staging table, manipulate data, and write from staging to your production tables using a stored procedure (wrap it in a transaction to protect from corruption).
Michael Meadows
Remember that staging table data is transient, so clear it out before you put data in, and don't use it to retrieve data except to transform it for output to another set of tables. Also, remember that staged data tends to be less normalized than your "real" data.
Michael Meadows
@Michael - thanks for answering ;-p
Marc Gravell
A: 

The answer by Marc (using a staging table) would be my recommendation. This ensures that if your source doesn't change, you'll have fewer issues importing in the future.

However, in my experience, you can check for the following issues:

  • the column names match in the source and the destination table
  • the column types match

If you think you have done this and still have no success, you can try the following:

  1. Allow nulls in all columns in your table.
  2. Comment out all of the column mappings.
  3. Rerun, adding one column at a time, until you find where your issue is.

That should bring out the bug.
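Step 3 can also be automated by comparing the source column names against the destination's before calling WriteToServer. A minimal self-contained sketch (the destination list is hard-coded here for illustration; you could instead read it from INFORMATION_SCHEMA.COLUMNS):

```csharp
using System;
using System.Data;
using System.Linq;

class MappingCheck
{
    // Return source columns that have no exact-name match in the destination.
    static string[] FindUnmatched(DataTable source, string[] destColumns)
    {
        return source.Columns.Cast<DataColumn>()
                     .Select(c => c.ColumnName)
                     .Where(n => !destColumns.Contains(n, StringComparer.Ordinal))
                     .ToArray();
    }

    static void Main()
    {
        var source = new DataTable();
        source.Columns.Add("LeadID");
        source.Columns.Add("Company_name"); // lowercase 'n': will not match exactly

        var dest = new[] { "LeadID", "Company_Name" };

        foreach (var col in FindUnmatched(source, dest))
            Console.WriteLine("No exact match for source column: " + col);
        // prints: No exact match for source column: Company_name
    }
}
```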

Tony Basallo
A: 

What I have found is that the columns in the table and the columns in the input must at least match. You can have more columns in the table and the input will still load. If you have fewer, you'll receive the error.

Rich
+1  A: 

I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the databases' collation sensitivity.
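Given that, one way to catch case-only mismatches up front is to compare the names ordinally and then case-insensitively. A small self-contained sketch (column names are illustrative):

```csharp
using System;
using System.Linq;

class CaseCheck
{
    // Columns that fail an exact (case-sensitive) match but would match
    // if case were ignored, i.e. mappings broken only by casing.
    static string[] CaseOnlyMismatches(string[] srcNames, string[] destNames)
    {
        return srcNames
            .Where(s => !destNames.Contains(s, StringComparer.Ordinal)
                     &&  destNames.Contains(s, StringComparer.OrdinalIgnoreCase))
            .ToArray();
    }

    static void Main()
    {
        var src  = new[] { "LeadID", "company_name", "Website" };
        var dest = new[] { "LeadID", "Company_Name", "Website" };

        foreach (var name in CaseOnlyMismatches(src, dest))
            Console.WriteLine("Case-only mismatch: " + name);
        // prints: Case-only mismatch: company_name
    }
}
```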

Tareq