For having a wide table:
- Quick to report on, as it's presumably denormalized, so no joins are needed.
- Easy for end-consumers to understand, as they don't need to hold a data model in their heads.
Against having a wide table:
- You'll probably need multiple composite indexes to get good query performance (one is shown in the sketch after this list).
- It's more difficult to maintain data consistency: if the same data appears on multiple rows, you need to update all of them when it changes.
- Because updates touch multiple rows and have to maintain multiple indexes, concurrent update performance can become an issue as locks escalate.
- You can end up with records full of NULLs in columns that aren't relevant to the entity on that row, which makes handling results awkward.
- If lazy developers do a `SELECT *` from the table, you end up dragging loads of data across the network, so you generally have to maintain suitable subset views.
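
By way of illustration, here's a minimal T-SQL sketch of a wide table with one of those composite indexes and a subset view. All the table and column names are hypothetical, not taken from your schema:

```sql
-- Hypothetical wide table; names are illustrative only.
CREATE TABLE dbo.UserProfileWide (
    UserId        INT           NOT NULL PRIMARY KEY,
    DisplayName   NVARCHAR(100) NOT NULL,
    Email         NVARCHAR(256) NULL,
    City          NVARCHAR(100) NULL,
    Country       NVARCHAR(100) NULL,
    LastLoginDate DATETIME2     NULL
    -- ...plus dozens more mostly-nullable attribute columns
);

-- One of the several composite indexes you might need for common query shapes.
CREATE INDEX IX_UserProfileWide_Country_City
    ON dbo.UserProfileWide (Country, City)
    INCLUDE (DisplayName);
GO

-- A subset view so consumers don't have to SELECT * across the wire.
CREATE VIEW dbo.UserContact AS
SELECT UserId, DisplayName, Email
FROM dbo.UserProfileWide;
```

The `INCLUDE` clause keeps that index covering for its target query without widening the index key.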
So it all really depends on what you're doing. If the main purpose of the table is OLAP reporting, and updates are infrequent and affect few rows, then perhaps a wide, denormalized table is the right thing to have. In an OLTP environment it probably isn't, and you should prefer narrower tables. (I generally design in 3NF and then denormalize for query performance as I go along.)
You could always take the approach of normalizing the data and providing a wide view for readers, if that's the shape they want to see.
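
A minimal sketch of that approach, again with made-up names: narrow, normalized base tables for writers, with a joining view giving readers the wide shape:

```sql
-- Hypothetical normalized base tables; names are illustrative.
CREATE TABLE dbo.[User] (
    UserId      INT           NOT NULL PRIMARY KEY,
    DisplayName NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.UserAddress (
    UserId  INT NOT NULL PRIMARY KEY
        REFERENCES dbo.[User] (UserId),
    City    NVARCHAR(100) NULL,
    Country NVARCHAR(100) NULL
);
GO

-- Readers who want a wide shape query the view; writers update the narrow tables.
CREATE VIEW dbo.UserWide AS
SELECT u.UserId, u.DisplayName, a.City, a.Country
FROM dbo.[User] AS u
LEFT JOIN dbo.UserAddress AS a
    ON a.UserId = u.UserId;
```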
Without knowing more about the situation it's not really possible to say more about the pros and cons in your particular circumstance.
Edit:
Given what you've said in your comments, have you considered just having a long, skinny name=value pair table, so you'd just have UserId, PropertyName and PropertyValue columns? You might want to add some other meta-attributes too: a timestamp, a version, or whatever. SQL Server is quite efficient at handling these sorts of tables, so don't discount a simple solution like this out of hand.
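
A minimal sketch of that kind of table, with hypothetical names and a timestamp column thrown in, plus the usual conditional-aggregation trick to pivot the pairs back into a wide shape when a reader wants one:

```sql
-- Hypothetical name=value pair table; the meta-attribute column is illustrative.
CREATE TABLE dbo.UserProperty (
    UserId        INT            NOT NULL,
    PropertyName  NVARCHAR(100)  NOT NULL,
    PropertyValue NVARCHAR(4000) NULL,
    UpdatedAt     DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_UserProperty PRIMARY KEY (UserId, PropertyName)
);

-- Pivot the pairs back into a wide shape for consumers who want one.
SELECT UserId,
       MAX(CASE WHEN PropertyName = 'Email' THEN PropertyValue END) AS Email,
       MAX(CASE WHEN PropertyName = 'City'  THEN PropertyValue END) AS City
FROM dbo.UserProperty
GROUP BY UserId;
```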