I've never seen it make a difference in table storage, joins, or basic operations.
But I have seen it make a difference where string processing is involved.
In SQL Server 2005, the only cases I've seen where varchar size makes a significant difference are when values are cast to varchar(MAX) or passed through UDFs. There does appear to be a difference there. For instance, in a system/process I'm porting, a number of key columns have to be concatenated together into another pseudo key field (until I'm able to refactor out this abomination), and the query performed significantly better when the result was cast to varchar(45) as early as possible in the intermediate CTEs or nested queries.
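Roughly, the shape of it was something like this (the table and column names below are made up for illustration; the point is only the early CAST down to varchar(45)):

    WITH SourceKeys AS
    (
        SELECT
            -- Cast the concatenated pseudo key down to varchar(45) as early
            -- as possible, rather than letting a wide/varchar(MAX) value
            -- flow through the rest of the query
            CAST(o.RegionCode + '|' + o.CustomerCode + '|' + o.OrderCode
                 AS varchar(45)) AS PseudoKey,
            o.OrderTotal
        FROM dbo.Orders AS o
    )
    SELECT sk.PseudoKey, SUM(sk.OrderTotal) AS Total
    FROM SourceKeys AS sk
    GROUP BY sk.PseudoKey;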
I've seen cases where a UDF taking and returning varchar(MAX) performs significantly worse than one taking and returning, say, varchar(50). For instance, a trimming or padding function that someone (perhaps me!) was trying to make future proof. varchar(MAX) has its place, but in my experience it can be dangerous to performance.
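As a rough sketch of what I mean (the function names and the padding logic here are hypothetical, not the actual code from that system), compare:

    -- "Future proof" version: takes and returns varchar(MAX)
    CREATE FUNCTION dbo.ufn_PadRight_Max (@s varchar(max), @len int)
    RETURNS varchar(max)
    AS
    BEGIN
        RETURN LEFT(@s + REPLICATE(' ', @len), @len);
    END;
    GO

    -- Same logic, declared with a narrow type
    CREATE FUNCTION dbo.ufn_PadRight_50 (@s varchar(50), @len int)
    RETURNS varchar(50)
    AS
    BEGIN
        RETURN LEFT(@s + REPLICATE(' ', @len), @len);
    END;
    GO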
I don't think I've ever profiled a difference between varchar(50) and varchar(1000), though.