Why would one choose tinyint over integer? If a given column only ever holds, say, 15 discrete values, then tinyint is the obvious choice.
The same applies to varchar: it tells SQL Server what values/lengths are expected in this column, and SQL Server throws an error if the data would be truncated.
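A minimal sketch of both points (the table and column names are hypothetical, and the exact error text varies by SQL Server version):

```sql
-- Hypothetical table: the range of tinyint and the length of varchar
-- are both constraints the engine enforces for you.
CREATE TABLE dbo.Person (
    StatusCode tinyint      NOT NULL,  -- valid range is 0-255
    Name       varchar(100) NOT NULL   -- at most 100 bytes
);

INSERT dbo.Person (StatusCode, Name) VALUES (300, 'Alice');
-- Fails: arithmetic overflow converting to tinyint (300 is out of range)

INSERT dbo.Person (StatusCode, Name) VALUES (1, REPLICATE('x', 500));
-- Fails: string or binary data would be truncated
```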
You could apply the same argument to NULL/NOT NULL, foreign keys, CHECK constraints, etc.: they are all there to keep your data correct. See "Declarative referential integrity".
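The same idea expressed as declarative constraints (names here are illustrative, not from any real schema):

```sql
CREATE TABLE dbo.Currency (
    CurrencyCode char(3) NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Payment (
    PaymentId    int           NOT NULL PRIMARY KEY,
    CurrencyCode char(3)       NOT NULL
        REFERENCES dbo.Currency (CurrencyCode),  -- foreign key: code must exist
    Amount       decimal(19,4) NOT NULL
        CHECK (Amount > 0)                       -- CHECK: no zero/negative payments
);
```

Each of these rejects bad data at write time, so every reader can trust what is already stored.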
For example, I'd want to stop someone storing 500 KB of XML in my 100-byte plain-text name column just because they can. If someone did succeed, what do you think would happen to the other clients that expect 100 bytes maximum?
It also matters for storage efficiency. It's OK to declare "string" for a C# object that is instantiated, used, discarded, and stays in memory (mostly) for its short life. Persisting a billion rows of unbounded "string" is unnecessary overhead.
One may go further and ask: why use varchar at all? Why not nvarchar everywhere? Again, I have a table storing currency codes that is approaching a billion rows. nchar(3) vs char(3) would cost me an extra 3 GB of storage (plus indexes, plus longer rows).
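The arithmetic is easy to verify yourself, since DATALENGTH reports bytes stored:

```sql
-- char(3) stores 3 bytes; nchar(3) stores 6 (UTF-16 uses 2 bytes per character).
SELECT DATALENGTH(CAST('USD' AS char(3)))  AS char_bytes,   -- 3
       DATALENGTH(CAST('USD' AS nchar(3))) AS nchar_bytes;  -- 6
```

Three extra bytes per row over a billion rows is roughly 3 GB before you count the same widening in every index that includes the column.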
Summary: it's a constraint on your data.