You should have some kind of justification for every data type you use.
nvarchar(255) (in SQL Server) stores up to 255 Unicode characters (UCS-2 code units), so up to 510 bytes of data plus overhead; being variable-length, shorter values use less.
It's certainly possible to store UTF-8 encoded Unicode data in varchar columns, one varchar byte per UTF-8 byte (multi-byte sequences for wide characters simply occupy several varchar positions). Ordinary ASCII data then uses only 1 byte per character, so you avoid the doubled storage. The approach has plenty of drawbacks, not least that the database can no longer help much with collations, comparisons, and other character manipulation, since what it sees is encoded bytes rather than characters. But, like I said, it's possible.
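As a rough illustration of the size difference (a T-SQL sketch; `DATALENGTH` returns storage bytes, `LEN` returns characters):

```sql
-- ASCII data: varchar stores 1 byte per character, nvarchar 2 bytes.
DECLARE @v varchar(50)  = 'ABC-12345';
DECLARE @n nvarchar(50) = N'ABC-12345';

SELECT DATALENGTH(@v) AS varchar_bytes,  -- 9
       DATALENGTH(@n) AS nvarchar_bytes; -- 18
```

The same 9 characters cost twice the bytes as nvarchar, and that factor carries through to every index that includes the column.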
I recommend char or varchar columns of appropriate lengths for things like account numbers (where a numeric type won't do because zero-padding matters), license numbers, invoice numbers (with letters), postal codes, phone numbers, and so on. These columns never contain wide characters; they're usually restricted to Roman letters and digits (sometimes not even punctuation) and are often heavily indexed. There is simply no need to carry an extra zero high byte for every character through the tables, the indexes, and the database engine's working set.
I recommend nvarchar for things like names and addresses, where wide characters are possible, even when there is no foreseeable need for them in the near term.
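To make the split concrete, a hypothetical table (the names, lengths, and columns are illustrative, not prescriptive) might look like:

```sql
CREATE TABLE dbo.Customer (
    -- Code-like columns: never wide characters, often indexed -> narrow types.
    AccountNumber char(10)      NOT NULL,  -- zero-padded, fixed width
    InvoiceNumber varchar(20)   NOT NULL,  -- letters and digits
    PostalCode    varchar(10)   NOT NULL,
    Phone         varchar(15)   NULL,
    -- Human-entered text: wide characters possible -> nvarchar.
    FullName      nvarchar(100) NOT NULL,
    Address       nvarchar(200) NULL
);
```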
I typically never use nchar: the short, fixed-width codes for which I'd choose char have never needed wide characters.
In all cases, think the length (or max) through properly. I would definitely not use max for names or addresses, and the overhead shows up clearly in benchmarks: I have seen casting to varchar(length) in intermediate stages of queries drastically improve performance.
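For example (a hypothetical query; it assumes a Comments column declared much wider than its data warrants), narrowing the type in an intermediate step can shrink the memory the engine reserves for sorts and hash joins:

```sql
SELECT o.OrderID,
       -- Narrow the column before the sort so the memory grant is
       -- sized for 100 bytes per row, not the declared max width.
       CAST(o.Comments AS varchar(100)) AS Comments
FROM dbo.Orders AS o
ORDER BY Comments;
```

Note the usual caveat: casting nvarchar data to varchar will mangle any wide characters, so this trick only applies where you know the data is plain ASCII.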