I'm working on a data migration from an old IBM UniVerse-based system to a new enterprise-level information management system, and I'm learning about database design in the process.
I took a look at the back-end database structure of the new system (an MS SQL database with about 100 tables) and found some things pretty odd. But I don't know whether that's just my inexperience and this is standard practice, or whether these oddities really are bad database/application design.
For example:
- Some date fields are varchar(20)
- Fields that store measurements are varchar(50), rather than something like a decimal for the value plus a separate column (or look-up) for the unit of measurement
- ISBN-10 and ISBN-13 fields are varchar(50), even though an ISBN-13 is at most 13 characters (17 with hyphens)
- Some look-up ID foreign keys are varchar(100), even though the referenced look-up table's primary key is an int
- Some fields are varchar(0)
- Separate fields for storing month and year, each of which is varchar(250). I don't know what kind of design decision calls for a maximum of 250 characters for the year, unless they went seriously overboard on Y2K compliance, or decided to store datetimes as seconds since the beginning of the universe
And plenty of others. More than half the columns in the DB appear to be varchar (a sketch of the pattern follows).
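To make it concrete, here's roughly what one of these tables looks like, contrasted with what I would have expected. The table and column names are invented, but the types are the ones I keep running into:

```sql
-- Hypothetical reconstruction of a typical table (invented names, observed types)
CREATE TABLE dbo.BookRecord (
    BookRecordID int IDENTITY(1,1) PRIMARY KEY,
    PublishDate  nvarchar(20),   -- a date, stored as text
    ShelfWidth   nvarchar(50),   -- value and unit in one string, e.g. N'12.5 cm'
    ISBN13       nvarchar(50),   -- an ISBN-13 is at most 13 characters
    PublisherID  nvarchar(100),  -- FK to a look-up table whose PK is an int
    PublishMonth nvarchar(250),
    PublishYear  nvarchar(250)   -- 250 characters for a year
);

-- versus something along the lines of what I expected:
CREATE TABLE dbo.Publisher (
    PublisherID int IDENTITY(1,1) PRIMARY KEY,
    Name        nvarchar(200) NOT NULL
);

CREATE TABLE dbo.BookRecordExpected (
    BookRecordID   int IDENTITY(1,1) PRIMARY KEY,
    PublishDate    date,
    ShelfWidth     decimal(9,2),
    ShelfWidthUnit char(2),       -- or an FK to a units look-up table
    ISBN13         char(13),
    PublisherID    int NOT NULL REFERENCES dbo.Publisher (PublisherID)
);
```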
I should also mention that all the varchar fields in the DB are actually nvarchar, so everything is Unicode, even the fields that only ever store numbers.
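The storage cost of that is easy to demonstrate: nvarchar stores two bytes per character, so even digit-only data takes twice the space of plain varchar:

```sql
-- N'...' literals are nvarchar (2 bytes/char); plain '...' literals are varchar
SELECT DATALENGTH('9780306406157')  AS varchar_bytes,   -- 13
       DATALENGTH(N'9780306406157') AS nvarchar_bytes;  -- 26
```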
Is there any legitimate argument that using this many varchar fields could be the best option in some circumstances? (Flexibility... maybe?)
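The closest thing I can picture is an entity-attribute-value (EAV) style design, where a single value column genuinely has to hold anything. A hypothetical sketch:

```sql
-- EAV-style property bag: the value column is type-less by design
CREATE TABLE dbo.ItemProperty (
    ItemID        int           NOT NULL,
    PropertyName  nvarchar(100) NOT NULL,
    PropertyValue nvarchar(250) NULL,  -- might hold a date, a number, a unit...
    CONSTRAINT PK_ItemProperty PRIMARY KEY (ItemID, PropertyName)
);
```

But that doesn't seem to be what's going on here: these are ordinary named columns (dates, measurements, foreign keys), each with one obvious natural type.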