In the past, computers had little memory, and that was the prime reason languages offered a range of data types: if a variable only needed to hold small numbers, you could use an 8-bit char instead of a 32-bit long. Memory is cheap today, so this reason is far less compelling, but the convention has stuck anyway.
Bear in mind, however, that every processor has a natural word size, typically 32 or 64 bits, at which it operates most efficiently. If you use an 8-bit char, the value may need to be sign- or zero-extended to the register width for computation and truncated back again when stored, which can actually slow down your algorithm slightly.
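As a minimal sketch of this effect, consider how C itself bakes the widening in: the standard's "integer promotions" require operands narrower than int to be promoted to int before arithmetic, so char values are computed at full int width and only truncated on assignment back to a char. The example below (the variable names are illustrative) makes the truncation visible:

    #include <stdio.h>

    int main(void)
    {
        char a = 100, b = 3;

        /* a and b are promoted to int before the multiply,
           so a * b is computed at int width as 300. */
        char narrow = a * b;  /* truncated back to char: on a
                                 typical 8-bit char, 300 wraps
                                 to 44 */
        int  wide   = a * b;  /* kept at int width: 300 */

        printf("sizeof(char) = %zu, sizeof(int) = %zu\n",
               sizeof(char), sizeof(int));
        printf("narrow = %d, wide = %d\n", narrow, wide);
        return 0;
    }

The hardware mirrors this: the multiply happens in a full-width register either way, so the narrow type saves memory but not computation, and the extra extend-and-truncate steps are where the slight slowdown comes from.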