In a word: efficiency. If a data structure is guaranteed, at compile time, to have a fixed size then, as one of the earlier respondents noted, all sorts of useful optimisations for access can be applied. For those of us who play linear algebra on computers this behaviour is ideal. Of course, the actual size may be allocated at run time, but the compiler knows that the size won't change while the array remains allocated, so it can still apply many optimisations.
Many of these optimisations arise because the program can compute a dope vector for the array once and thereafter read or write any element with a single address calculation. If you allow variable sizes you must either:
-- allocate more space for the resized array and copy the data across from the old array to the new one; this is very expensive in time; OR
-- start following pointers to find the next element, or jump over deleted elements; this can be expensive in space and somewhat expensive in time too.
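The dope-vector idea above can be sketched in a few lines. This is an illustrative model only (the names `Dope2`, `flat_index` and so on are mine, not from any particular language): the extents and strides are fixed at allocation time, so any element is reached with one multiply and one add over a single contiguous block.

```python
# A minimal sketch of a "dope vector" for a 2D array: one contiguous
# block of storage plus extents and strides fixed at allocation time.
class Dope2:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.row_stride = cols              # elements to skip per row
        self.data = [0.0] * (rows * cols)   # one contiguous block

    def flat_index(self, i, j):
        # One multiply and one add reach any element -- this is the
        # single address calculation the compiler can rely on.
        return i * self.row_stride + j

    def get(self, i, j):
        return self.data[self.flat_index(i, j)]

    def set(self, i, j, v):
        self.data[self.flat_index(i, j)] = v

a = Dope2(3, 4)
a.set(2, 1, 42.0)
print(a.flat_index(2, 1))  # flat offset 9 == 2*4 + 1
print(a.get(2, 1))
```

If the array could be resized, `row_stride` could change under the compiler's feet and this one-shot calculation would no longer be safe to hoist or vectorise.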
Other optimisations come from the compiler knowing that if you want the next, say, 24 array elements, it only has to move the next 24 × b bytes from memory to cache, where b is the size in bytes of each element. Since memory bandwidth is one of the key bottlenecks in high-performance computing, this sort of behaviour is very, very desirable.
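That "next 24 × b bytes" point can be demonstrated with the standard-library `array` module (the concrete numbers here, 24 elements starting at offset 10, are just for illustration): because the elements are contiguous, a run of 24 of them is exactly one block of 24 × b consecutive bytes, retrievable with a single slice rather than 24 separate lookups.

```python
import array

a = array.array('d', range(100))   # 'd' -> C doubles, contiguous storage
b = a.itemsize                     # bytes per element (8 for doubles)
start = 10

# The next 24 elements occupy exactly 24 * b consecutive bytes,
# so they can be transferred as one block.
raw = a.tobytes()
chunk = raw[start * b : (start + 24) * b]

recovered = array.array('d')
recovered.frombytes(chunk)
print(len(chunk))            # 24 * b == 192 bytes
print(list(recovered[:3]))   # [10.0, 11.0, 12.0]
```

With a linked or resizable structure there is no such guarantee of contiguity, so the hardware cannot stream the data as one transfer.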
I'll stick my neck out and claim that if it isn't fixed in size it isn't an array, it's something else: a linked list, a stack, a set, a what-have-you, but not an array. When you get to 2D arrays (and I mean genuine 2D arrays, not 1D arrays of 1D arrays), removing or adding single elements is conceptually a difficult problem too.
So, to answer your question: some languages implement the Array ADT correctly, that is, they don't provide resizing operations, because Arrays don't have resizing operations.
Regards
Mark