It goes far beyond just disk space. Column sizes affect how your data maps onto memory pages and how often the system has to swap, and those costs add up quickly.
Let's work through an example on Windows. A typical workstation build of Windows uses 4 KB memory pages. If you use varchar(255) to store a phone number, then (assuming every value occupies its full declared width, as a worst case) a single page holds 16 phone numbers (4096 / 255), with 16 bytes of overhang. With varchar(50), you're at 81 numbers with 46 bytes of overhang (4096 / 50). And if 16 bytes is enough for a phone number (reasonable even with markup, such as (123) 456-7890, which is 14 characters), you can store a whopping 256 phone numbers in a single page of memory.
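If you want to check the arithmetic yourself, here's a quick T-SQL sketch. The 4096-byte page size comes from the example above, and the full-declared-width assumption is a pathological upper bound (varchar actually stores only the bytes used):

-- Worst-case rows per 4096-byte page, assuming every value
-- occupies its full declared width:
SELECT 4096 / 255 AS PerPage255, 4096 % 255 AS Overhang255, -- 16 rows, 16 bytes
       4096 / 50  AS PerPage50,  4096 % 50  AS Overhang50,  -- 81 rows, 46 bytes
       4096 / 16  AS PerPage16,  4096 % 16  AS Overhang16;  -- 256 rows, 0 bytes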
Now consider a different problem: zip codes. If you store zip codes as varchar(255), you're still capped at 16 values per page. But let's say I have this table:
CREATE TABLE Addresses
(
    AddressID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    AddressLine1 varchar(40) NOT NULL,
    AddressLine2 varchar(40) NOT NULL DEFAULT(''),
    City varchar(25) NOT NULL,
    State varchar(6) NOT NULL,
    ZipCode varchar(10) NOT NULL
);
Now, I want to query the database for all of the users in zip code 85282. Suppose there are a million rows: at 16 rows per memory page, reading every zip code could touch up to 62,500 pages (1,000,000 / 16), each a potential page fault. With my suggested field size of 10 (which accommodates ZIP+4 in the US), 409 zip codes fit per page, for at most 2,445 page faults - better than a 25x reduction!
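To make that concrete, here's a sketch of the lookup against the table above, plus the same worst-case page arithmetic using ceiling division (the one-million-row count and full-width assumption are carried over from the example):

-- The query in question:
SELECT AddressID, AddressLine1, AddressLine2, City, State
FROM Addresses
WHERE ZipCode = '85282';

-- Worst-case pages touched scanning 1,000,000 zip values,
-- via ceiling division: (rows + perPage - 1) / perPage
SELECT (1000000 + 16 - 1) / 16   AS PagesAtVarchar255, -- 62,500
       (1000000 + 409 - 1) / 409 AS PagesAtVarchar10;  -- 2,445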
At the scale systems run today, and with the need to architect for performance, paging - even though we typically leave it to the OS - remains a major factor, because disk access is orders of magnitude slower than physical memory. The answer isn't to simply throw more memory at the problem - it's to be careful about how we build the system in the first place.