You should use NVARCHAR any time you have to store multiple languages. I believe you have to use it for Asian languages, but don't quote me on that.
Here's the problem: if you take Russian, for example, and store it in a varchar, you will be fine as long as you define the correct code page. But let's say you're using a default English SQL install; then the Russian characters will not be handled correctly. If you were using NVARCHAR they would be handled properly.
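Here is a rough sketch of what I mean (assuming the varchar column's collation is SQL_Latin1_General_CP1_CI_AS, i.e. code page 1252, which is what a default English install gives you; the table and column names are made up):

    -- varchar is bound to a code page, nvarchar is not
    CREATE TABLE #LangTest (
        v  VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS,  -- code page 1252
        nv NVARCHAR(50)                                        -- Unicode
    );

    INSERT INTO #LangTest (v, nv)
    VALUES (N'Привет', N'Привет');   -- Russian "hello"

    SELECT v, nv FROM #LangTest;
    -- v  comes back as '??????' because Cyrillic is not in code page 1252
    -- nv comes back as 'Привет'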
Edit
OK, let me quote MSDN. Maybe I was too specific, but you don't want to store more than one code page in a varchar column; while you can, you shouldn't:
When you deal with text data that is stored in the char, varchar, varchar(max), or text data type, the most important limitation to consider is that only information from a single code page can be validated by the system. (You can store data from multiple code pages, but this is not recommended.) The exact code page used to validate and store the data depends on the collation of the column. If a column-level collation has not been defined, the collation of the database is used. To determine the code page that is used for a given column, you can use the COLLATIONPROPERTY function, as shown in the following code examples:
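MSDN's actual code examples didn't make it into the quote, but the lookup it describes is roughly this (MyTable and MyColumn are placeholders):

    -- Code page behind a specific collation (1251 = Windows Cyrillic)
    SELECT COLLATIONPROPERTY('Cyrillic_General_CI_AS', 'CodePage');

    -- Code page used by a particular column, via its collation
    SELECT COLLATION_NAME,
           COLLATIONPROPERTY(COLLATION_NAME, 'CodePage') AS CodePage
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'MyColumn';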
Here's some more:
This example illustrates the fact that many locales, such as Georgian and Hindi, do not have code pages, as they are Unicode-only collations. Those collations are not appropriate for columns that use the char, varchar, or text data type.
So Georgian and Hindi really need to be stored as nvarchar. Arabic is also a problem:
Another problem you might encounter is the inability to store data when not all of the characters you wish to support are contained in the code page. In many cases, Windows considers a particular code page to be a "best fit" code page, which means there is no guarantee that you can rely on the code page to handle all text; it is merely the best one available. An example of this is the Arabic script: it supports a wide array of languages, including Baluchi, Berber, Farsi, Kashmiri, Kazakh, Kirghiz, Pashto, Sindhi, Uighur, Urdu, and more. All of these languages have additional characters beyond those in the Arabic language as defined in Windows code page 1256. If you attempt to store these extra characters in a non-Unicode column that has the Arabic collation, the characters are converted into question marks.
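You can see that last behaviour for yourself with something like this (the particular character is just an illustration; as far as I know it is outside code page 1256):

    CREATE TABLE #ArabicTest (
        v  VARCHAR(10)  COLLATE Arabic_CI_AS,   -- code page 1256
        nv NVARCHAR(10) COLLATE Arabic_CI_AS
    );

    -- ښ is a Pashto letter that, as far as I know, is not in code page 1256
    INSERT INTO #ArabicTest (v, nv) VALUES (N'ښ', N'ښ');

    SELECT v, nv FROM #ArabicTest;  -- v is '?', nv keeps the character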
Something to keep in mind when you are using Unicode: although you can store different languages in a single column, you can only sort using a single collation. There are some languages that use Latin characters but do not sort like other Latin languages. Accents are a good example of this; I can't remember the exact case, but there was an Eastern European language whose Y didn't sort like the English Y. Then there is the Spanish "ch", which Spanish users traditionally expect to be sorted as its own letter, after the other "c" words.
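For example, you can see the Spanish "ch" behaviour by sorting the same data under two different collations (a sketch; the table and words are made up):

    CREATE TABLE #Words (w NVARCHAR(20));
    INSERT INTO #Words (w) VALUES (N'cello'), (N'chico'), (N'culpa');

    -- Traditional Spanish treats 'ch' as its own letter, after the other 'c' words
    SELECT w FROM #Words ORDER BY w COLLATE Traditional_Spanish_CI_AS;
    -- cello, culpa, chico

    -- A generic Latin collation sorts character by character
    SELECT w FROM #Words ORDER BY w COLLATE Latin1_General_CI_AS;
    -- cello, chico, culpa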
All in all, given all the issues you have to deal with when handling internationalization, it is my opinion that it is easier to just use Unicode characters from the start, avoid the extra conversions, and take the space hit. Hence my statement earlier.