OF COURSE a binary(16) will be MUCH faster - just do a quick back-of-the-envelope calculation:
- a SQL Server page is always 8K
- with 16 bytes per key, you can store roughly 500 entries on a page
- with 4,000 bytes per entry (NVARCHAR stores 2 bytes per character, so a fully populated NVARCHAR(2000) takes 4,000 bytes), you'll end up with just 2 entries per page in the worst case
If you have a table with 100,000 entries, you'll need around 200 pages for the index with a binary(16) key, but 50,000 pages for the same index with an NVARCHAR(2000) key.
Even just the added I/O to read and scan all those pages will kill whatever performance you might have had.
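The arithmetic above can be sketched in a few lines. This is an illustration only: it assumes a round 8,000 usable bytes per 8 KB page and ignores row headers, page headers, and fill factor, so the numbers are ballpark figures, not an exact storage calculation.

```python
import math

USABLE_BYTES_PER_PAGE = 8_000  # rounded; real usable space on an 8 KB page is a bit over 8,000 bytes

def leaf_pages(row_count: int, key_bytes: int) -> int:
    """Rough count of leaf-level index pages needed for row_count keys."""
    rows_per_page = USABLE_BYTES_PER_PAGE // key_bytes
    return math.ceil(row_count / rows_per_page)

print(leaf_pages(100_000, 16))     # binary(16) key: about 200 pages
print(leaf_pages(100_000, 4_000))  # fully populated NVARCHAR(2000): 50,000 pages
```

That's a factor of 250 more pages to read and cache for the wide key - which is where the I/O penalty comes from.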
Marc
UPDATE:
For my usual indexes, I try to avoid compound indexes as much as I can - referencing a compound key from other tables just gets rather messy (JOINs and WHERE clauses with several equality comparisons).
Also, regularly check and maintain your indexes: if fragmentation is above 30%, rebuild; if it's between 5% and 30%, reorganize. Check out an automatic, well-tested DB index maintenance script at http://sqlfool.com/2009/06/index-defrag-script-v30/
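Those thresholds boil down to a simple decision rule. A minimal sketch (in a real system you'd feed it the `avg_fragmentation_in_percent` value that SQL Server reports per index):

```python
def maintenance_action(fragmentation_pct: float) -> str:
    """Map a fragmentation percentage to the usual maintenance action:
    above 30% -> rebuild, 5-30% -> reorganize, below 5% -> leave it alone."""
    if fragmentation_pct > 30:
        return "REBUILD"
    if fragmentation_pct >= 5:
        return "REORGANIZE"
    return "NONE"

print(maintenance_action(42.0))  # REBUILD
print(maintenance_action(12.0))  # REORGANIZE
print(maintenance_action(1.5))   # NONE
```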
For the clustered key on a SQL Server table, try to avoid GUIDs: they're random in nature, so they cause potentially massive index fragmentation and therefore hurt performance. Also, while it's not a hard requirement, try to make sure your clustered key is unique - if it's not, SQL Server will add a four-byte uniqueifier to it. And since the clustered key gets added to each and every entry in each and every non-clustered index, it's extremely important to have a small, unique, stable (non-changing) column in the clustered key. Optimally it's ever-increasing, too - that gives you the best characteristics and performance, so an INT IDENTITY is perfect.
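Why random GUIDs fragment a clustered index can be illustrated outside SQL Server entirely: a clustered index keeps rows in key order, and any insert that lands in the *middle* of that order can force a page split. The sketch below (an analogy, not SQL Server internals - it compares raw byte order, not SQL Server's GUID sort order) counts how many inserts land somewhere other than the end:

```python
import bisect
import uuid

def mid_inserts(keys) -> int:
    """Count inserts that land in the middle of the sorted key order
    (the kind that would force page splits), rather than appending at the end."""
    sorted_keys = []
    count = 0
    for k in keys:
        pos = bisect.bisect(sorted_keys, k)
        if pos != len(sorted_keys):
            count += 1  # landed mid-sequence, not at the end
        sorted_keys.insert(pos, k)
    return count

print(mid_inserts(range(1_000)))                                  # ever-increasing key: 0
print(mid_inserts(uuid.uuid4().bytes for _ in range(1_000)))      # random GUIDs: almost all of them
```

An ever-increasing key (like INT IDENTITY) only ever appends, so pages fill up neatly; random GUIDs scatter inserts across the whole key range.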