Making columns larger than they need to be can be actively harmful to your database design. From BOL:
A table can contain a maximum of 8,060 bytes per row. In SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns. ... Surpassing the 8,060-byte row-size limit might affect performance because SQL Server still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened based on update operations. Update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins on large records that contain row-overflow data slows processing time, because these records are processed synchronously instead of asynchronously.
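If you want to see whether a table has actually tipped into row overflow, one way is to check the allocation unit types through the catalog views. This is just a sketch; YourWideTable is a placeholder name:

    -- Page counts per allocation unit type for one table.
    -- A ROW_OVERFLOW_DATA row with used_pages > 0 means some rows
    -- have been pushed off the main page as described above.
    SELECT o.name       AS table_name,
           au.type_desc AS allocation_unit,  -- IN_ROW_DATA, ROW_OVERFLOW_DATA, LOB_DATA
           au.total_pages,
           au.used_pages
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p
        ON au.container_id = CASE WHEN au.type IN (1, 3)  -- in-row and row-overflow
                                  THEN p.hobt_id
                                  ELSE p.partition_id END -- LOB data
    JOIN sys.objects AS o
        ON o.object_id = p.object_id
    WHERE o.name = N'YourWideTable';  -- placeholder table name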
I've found that if you give them the extra size, sooner or later they will use it. Further, if you set something as varchar(64) when you only really need 10 characters max, you make it more likely that someone will use the field for something other than its intended purpose, and you will find bad data in those fields (like a phone number field containing notes about the office secretary to contact, to pick a not-so-random example).
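Sizing the column to the data, and constraining what it can hold, closes that door. A minimal sketch, with hypothetical names:

    -- The column is only as wide as the data actually needs, and the CHECK
    -- constraint rejects anything that isn't all digits, so nobody can stash
    -- free-text notes in the phone number field.
    CREATE TABLE dbo.OfficeContact
    (
        ContactId   int IDENTITY(1,1) PRIMARY KEY,
        PhoneNumber varchar(10) NOT NULL,  -- 10 characters max, not varchar(64)
        CONSTRAINT CK_OfficeContact_PhoneDigits
            CHECK (PhoneNumber NOT LIKE '%[^0-9]%')  -- digits only
    );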
However, at least this design is far better than making everything nvarchar(max).
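One concrete cost of nvarchar(max) everywhere, beyond the row-overflow behavior above: such columns cannot be used as index key columns at all. A sketch, with hypothetical names:

    CREATE TABLE dbo.Customer
    (
        CustomerId int IDENTITY(1,1) PRIMARY KEY,
        LastName   nvarchar(max) NOT NULL  -- should have been something like nvarchar(50)
    );

    -- Fails: (n)varchar(max) is invalid as an index key column,
    -- so you can't index the very column you'd want to search on.
    CREATE INDEX IX_Customer_LastName ON dbo.Customer (LastName);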