First of all, look at the query plan to see what the query is actually doing — it will tell you whether the index is being used. One second for a single-row select or insert is far too slow; for 350k rows, that is long enough to do a full table scan over a cached table.
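For example, in SQL Server you can ask for the plan instead of running the statement (the table and column names here are hypothetical stand-ins for yours):

```sql
-- Show the execution plan instead of executing the statement (SQL Server)
SET SHOWPLAN_TEXT ON;
GO
SELECT *
FROM MyTable
WHERE UniqueKeyCol = @value;
GO
SET SHOWPLAN_TEXT OFF;
GO
```

In the output, you want to see an "Index Seek" on your unique index, not a "Table Scan" or "Clustered Index Scan".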
Second, look at the physical layout of your server. Do you have something like the logs and the data sharing the same disk?
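On SQL Server you can check where each database's data and log files actually live with a quick catalog query (a sketch, assuming you have permission to read `sys.master_files`):

```sql
-- List data (ROWS) and log (LOG) file locations per database,
-- to see whether they are competing for the same disk
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;
```

If the `physical_name` paths for ROWS and LOG files point at the same drive, every insert is mixing sequential log writes with random data I/O on one spindle.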
Thirdly, check that the index columns on your unique key are in the same order as the predicates in the SELECT query's WHERE clause. A difference in order may confuse the query optimizer.
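As a hypothetical illustration (your table and column names will differ), if the query filters on customer then date, declare the index columns in that same order:

```sql
-- The query's predicate order...
SELECT Amount
FROM Orders
WHERE CustomerId = @cust
  AND OrderDate  = @date;

-- ...should match the column order of the unique index:
CREATE UNIQUE INDEX UX_Orders_CustomerId_OrderDate
    ON Orders (CustomerId, OrderDate);
```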
Fourthly, consider a clustered index on the unique key. If this is your main way of looking up a row, it will reduce disk accesses, since the table data is physically stored in clustered-index order. See This for a blurb about clustered indexes. Set the table up with a generous fill factor.
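A minimal sketch of that, again with hypothetical names and assuming SQL Server: build the unique key as the clustered index and leave free space on each page so inserts don't immediately cause page splits.

```sql
-- Make the unique key the clustered index, with a generous fill factor
-- (80 here is an illustrative value; tune it to your insert pattern)
CREATE UNIQUE CLUSTERED INDEX CX_Orders_CustomerId_OrderDate
    ON Orders (CustomerId, OrderDate)
    WITH (FILLFACTOR = 80);
```

Note a table can have only one clustered index, so this replaces whatever currently defines the table's physical order.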
Unless you have blob columns, 350k rows is way below the threshold where partitioning should make a difference. This size table should fit entirely in the cache.