I've got a project where we'll need to generate a lot of fixed-length random codes (read: millions) from a restricted character set (e.g. 12-character alphanumeric, or 9-character lowercase alphanumeric with the letter l excluded). We'll then store these codes in an MSSQL database (SQL Server 2008). The language we're using is C#.
We also need to be able to generate more codes later and add them to an existing set, with the new codes unique both among themselves and against the existing ones. The number of codes generated per batch will likely vary from millions down to merely hundreds.
The two obvious approaches that come to mind are either to generate codes and just throw them at the database, catching unique-constraint exceptions, or to pull the existing codes down into a local hash table, generate the new codes against it, and insert them into the database once the whole batch is ready.
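The second (local hash table) approach could be sketched roughly like this — a minimal sketch, assuming a 35-character alphabet (lowercase minus l, plus digits) and a cryptographic RNG for unpredictability; the class and method names here are made up for illustration, and the rejection loop avoids modulo bias:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

static class CodeGenerator
{
    // Lowercase alphanumerics with the ambiguous 'l' removed (35 chars).
    const string Alphabet = "abcdefghijkmnopqrstuvwxyz0123456789";

    // Generates `count` codes of `length` chars, unique against `existing`.
    // New codes are also added to `existing` so later batches stay unique.
    public static List<string> Generate(HashSet<string> existing, int count, int length)
    {
        var fresh = new List<string>(count);
        int limit = 256 - (256 % Alphabet.Length);  // rejection threshold
        using (var rng = new RNGCryptoServiceProvider())
        {
            var buf = new byte[1];
            var chars = new char[length];
            while (fresh.Count < count)
            {
                for (int i = 0; i < length; i++)
                {
                    // Rejection sampling: discard bytes that would bias the
                    // distribution when reduced mod Alphabet.Length.
                    do { rng.GetBytes(buf); } while (buf[0] >= limit);
                    chars[i] = Alphabet[buf[0] % Alphabet.Length];
                }
                string code = new string(chars);
                if (existing.Add(code))   // Add returns false on a duplicate
                    fresh.Add(code);      // collision => just loop and retry
            }
        }
        return fresh;
    }
}
```

Collisions are rare while the issued codes are a tiny fraction of the total code space, so the retry loop almost never fires in practice.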
Does anyone have an idea which of the above solutions would perform better, or, better still, a more efficient solution I haven't thought of?
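For the insert side, `SqlBulkCopy` is usually much faster than row-at-a-time INSERTs when loading millions of rows into SQL Server. A sketch, assuming a hypothetical `dbo.Codes` table with a single `Code` column (both names made up):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class CodeLoader
{
    // Bulk-loads generated codes into dbo.Codes (table/column names assumed).
    public static void BulkInsert(string connectionString, IEnumerable<string> codes)
    {
        var table = new DataTable();
        table.Columns.Add("Code", typeof(string));
        foreach (var code in codes)
            table.Rows.Add(code);

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.Codes";
            bulk.WriteToServer(table);
        }
    }
}
```

With a unique index on `Code`, the bulk load would still fail atomically on a duplicate, which is another reason to deduplicate locally before inserting.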
Clarifications
The codes generated have to be non-predictable, and there'll be multiple batches, each unique only within itself (e.g. we'd have code set A with 100,000 unique codes and code set B with 100,000 unique codes, but there'd be no requirement that A intersect B is empty). They also have to be easy for a human to use, hence the short length and the potentially restricted character sets that avoid ambiguous characters.
The codes will be sent to users via various methods (email, SMS, printed on paper, etc.) and are later redeemed in a one-use manner (so if someone guesses someone else's code, that'd be bad).
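To put rough numbers on the guessing risk: with a 35-character alphabet and 9 positions there are 35^9 ≈ 7.9×10^13 possible codes, so even with millions of codes live at once, a single random guess succeeds only very rarely. The figures below (10 million live codes) are an assumed example, not from the question:

```csharp
double space = Math.Pow(35, 9);    // ≈ 7.9e13 possible 9-char codes
double issued = 1e7;               // assume 10 million codes live at once
double guessOdds = issued / space; // chance one random guess hits a live code
```

That works out to roughly 1 in 8 million per guess, but only if the codes are generated with a cryptographic RNG; a seeded `System.Random` would make them predictable regardless of the code space size.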