views: 303
answers: 3

I am using a TableAdapter to insert records into a table within a loop.

foreach(....)
{
  ....
  ....
  _teamsTableAdapter.Insert(_teamid, _teamname);
  ....
}

TeamID is the primary key in the table, and _teamid supplies its value. I am extracting the data from an XML file in which every team has a unique teamId.

After the first run of this loop, Insert throws a "Duplicate Primary Key" exception. To handle it, I have done this:

foreach(....)
{
  ....
  ....

  try
  {
     _teamsTableAdapter.Insert(_teamid, _teamname);
  }
  catch (System.Data.SqlClient.SqlException e)
  {
     // 2627 = violation of a PRIMARY KEY or UNIQUE constraint; ignore duplicates
     if (e.Number != 2627)
        MessageBox.Show(e.Message);
  }
  ....
  ....
}

But using a try/catch statement is costly. How can I avoid this exception? I am working in VS2010, and INSERT ... ON DUPLICATE KEY UPDATE does not work (it is MySQL syntax, not SQL Server).

I want to handle this without using try/catch statements.

A: 

Does the table you're using have a primary key? If not, you should create one, as it will prevent duplicate records and might make it easier to access keys from other parts of your program.

Usually this is done with an identity column or something similar. (It looks like you might already have one in TeamID, in which case you only need to mark it as the primary key in either SQL Server Management Studio or VS2010.)

Edit: To designate a primary key as an identity column (teamID in your example) using Visual Studio:

1. Go to the Server Explorer.
2. Navigate to the relevant table.
3. Right-click the table and choose "Open Table Definition".
4. Click on the primary key column.
5. Scroll the Properties window until you reach "Identity Specification".
6. Change this to "Yes" (you can set the increment/seed to whatever you wish; usually 1,1 is fine).

Now all you have to do is insert a team name into the table, and the TeamID is generated automatically.

Raven Dreamer
Yes, teamId is the primary key.
LifeH2O
Is teamID set up as an identity column? (I.e., self-generating, auto-labeling?)
Raven Dreamer
teamId is unique; I am extracting teamId from XML files, which contain a unique teamId for each team. teamId is unique in itself.
LifeH2O
In that case, you can either add a third ID column, as outlined above in my answer, and delete duplicate teams after you've finished inserting (via another method), OR simply check your data before you add it to the table so that you know you're not adding duplicates (a sketch of that check follows below).
Raven Dreamer
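
A minimal sketch of the "check your data before you add it" approach from the comment above, assuming the generated TableAdapter exposes the usual GetData() method and that the DataTable column is named TeamID; the teamsFromXml collection and its TeamId/TeamName properties are illustrative stand-ins for however the XML is parsed:

using System.Collections.Generic;
using System.Data;

// Load the IDs that are already in the table once, up front.
var existingIds = new HashSet<int>();
foreach (DataRow row in _teamsTableAdapter.GetData().Rows)
    existingIds.Add((int)row["TeamID"]);

// Insert only teams whose ID has not been seen yet
// (this also skips duplicates that occur inside the XML itself).
foreach (var team in teamsFromXml)
{
    if (existingIds.Add(team.TeamId))   // Add returns false when the ID is already present
        _teamsTableAdapter.Insert(team.TeamId, team.TeamName);
}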
+1  A: 

Based on your comments to the other answers, I would suggest that TeamID be changed from the primary key (if possible) and a new Idx column set up as the primary key instead. You can then set a trigger on your DB so that, when a new record is inserted with a duplicate TeamID, it updates the original record and deletes the new one.

If that is not possible, I would modify the stored procedure that inserts the record so that, instead of just inserting, it first checks for a duplicate TeamID. If there isn't a duplicate id, the record is inserted; else it just does 'Select 0'.

pseudo-code example:

Declare @Count int
Set @Count = (Select Count(TeamId) From [Table] Where TeamId = @TeamId)

If (@Count > 0)
  Begin
    Select 0
  End
Else
  Begin
    --Insert logic here
  End

Then your Insert method in code can use ExecuteScalar() instead of ExecuteNonQuery(). Your code would handle it this way:

if (_teamsTableAdapter.Insert(_teamId, _teamName) == 0)
{
    _teamsTableAdapter.Update(_teamId, _teamName);
}

Alternatively, if you just wanted to handle it all in SQL (so your C# code doesn't have to change) you could do something like this:

Declare @Count int
Set @Count = (Select Count(TeamId) From [Table] Where TeamId = @TeamId)

If (@Count > 0)
  Begin
    --Update logic here
  End
Else
  Begin
    --Insert logic here
  End

But, again, I'd just modify the table if that's an option.

AllenG
A: 

There are clearly duplicates in your data. Either you need to eliminate them first or use some type of merge statement to do an insert if the record is new or an update if it is not.
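
For example, a hedged sketch of that insert-if-new / update-if-existing idea using a plain SqlCommand and a T-SQL MERGE (this bypasses the TableAdapter; the dbo.Teams table and its column names are assumptions, and MERGE needs SQL Server 2008 or later):

using System.Data;
using System.Data.SqlClient;

static void UpsertTeam(string connectionString, int teamId, string teamName)
{
    // Insert the row if the TeamId is new, otherwise update the existing row.
    const string sql = @"
MERGE dbo.Teams AS target
USING (SELECT @TeamId AS TeamId, @TeamName AS TeamName) AS source
    ON target.TeamId = source.TeamId
WHEN MATCHED THEN
    UPDATE SET TeamName = source.TeamName
WHEN NOT MATCHED THEN
    INSERT (TeamId, TeamName) VALUES (source.TeamId, source.TeamName);";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.Add("@TeamId", SqlDbType.Int).Value = teamId;
        command.Parameters.Add("@TeamName", SqlDbType.NVarChar, 100).Value = teamName;
        connection.Open();
        command.ExecuteNonQuery();   // no exception on duplicates; the row is updated instead
    }
}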

To see which data is causing the problem, run SQL Server Profiler while you run the loop from your application and see what statements are actually being sent. That should point you towards which record(s) are duplicated.

If this is a large file, a bulk insert (after cleaning out the duplicates) will be faster than row-by-row processing.
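
For instance, a minimal sketch that pushes all the de-duplicated rows to the server in one operation with SqlBulkCopy instead of calling Insert per row (the dbo.Teams destination table and the column names are assumptions):

using System.Data;
using System.Data.SqlClient;

static void BulkInsertTeams(string connectionString, DataTable teams)
{
    // 'teams' holds the rows parsed from the XML, already de-duplicated.
    using (var bulkCopy = new SqlBulkCopy(connectionString))
    {
        bulkCopy.DestinationTableName = "dbo.Teams";
        bulkCopy.ColumnMappings.Add("TeamId", "TeamId");
        bulkCopy.ColumnMappings.Add("TeamName", "TeamName");
        bulkCopy.WriteToServer(teams);   // one round trip instead of one insert per row
    }
}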

HLGEM