Having a Unicode (multi-byte character set) Oracle database with NLS_LENGTH_SEMANTICS=BYTE seems like a disaster waiting to happen. Field validation in most applications only checks that the number of characters is within bounds, not the length of the byte sequence in the database's default character encoding! If you've got a Unicode database, is there ever a good reason to use NLS_LENGTH_SEMANTICS=BYTE rather than CHAR?
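To illustrate the mismatch (a sketch assuming an AL32UTF8 database; table and column names are made up):

```sql
-- With BYTE semantics the column holds 10 *bytes*, not 10 characters.
CREATE TABLE t_byte (name VARCHAR2(10 BYTE));

-- 'é' is 2 bytes in UTF-8, so 9 characters = 18 bytes:
-- this passes a 10-character application check but fails with
-- ORA-12899: value too large for column.
INSERT INTO t_byte VALUES ('ééééééééé');

-- With CHAR semantics the limit is counted in characters,
-- so the same insert succeeds.
CREATE TABLE t_char (name VARCHAR2(10 CHAR));
INSERT INTO t_char VALUES ('ééééééééé');
```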

+2  A: 

It's just legacy, I think. There are plenty of old applications that have always worked on bytes and may get confused if this changes. Anything where byte strings and indexes go off to an external app/language that works in bytes is going to go wrong in weird and unpredictable ways if the indexes are redefined underneath it.

I would not use byte semantics for a new application and I agree that it's not a good default. But then hopefully you're using NVARCHAR, which avoids the issue (since it's always character-based).
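For comparison (a sketch; the column name is made up), NVARCHAR2 lengths are defined in characters regardless of the NLS_LENGTH_SEMANTICS setting, because the national character set is always Unicode:

```sql
-- Always holds up to 10 characters, whatever NLS_LENGTH_SEMANTICS is.
CREATE TABLE t_nchar (name NVARCHAR2(10));
```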

bobince
Specifically, I know that Crystal Management Server on Oracle (for its configuration database) requires NLS_LENGTH_SEMANTICS=BYTE. It throws a fit until you get to SP2. The workaround is a logon trigger applied to the Crystal user.
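Such a logon trigger might look like this (a hypothetical sketch; the trigger name and the CRYSTAL user name are assumptions, not taken from the actual product docs):

```sql
-- Force BYTE semantics only for the Crystal user's sessions,
-- leaving the database default untouched.
CREATE OR REPLACE TRIGGER crystal_byte_semantics
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'CRYSTAL' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_LENGTH_SEMANTICS = BYTE';
  END IF;
END;
/
```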
REW