This is in reference to Stack Overflow Podcast #65. Assume a typical 1960s or 1970s server computer with, say, 256 KB of main memory. How large a (compiled) COBOL program could such a machine run, at maximum? How severely would this limit the complexity and capabilities of COBOL programs, assuming the programs are not deliberately made more complex than necessary?
How would you like to measure this? I remember one program, which I'm pretty sure we ran on a 256 KB or 512 KB system, whose printed source listing was about two inches thick. We didn't have to use overlays, either.
Fairly large COBOL programs could run in 256 KB of RAM on a 1970s mainframe. (256K of memory in an IBM 370 was 256K 32-bit words, not bytes.) IBM introduced virtual memory in the early 1970s. This paged program and data to disk, allowing a program to use most of the 24-bit address space, with some limitations. Just like Windows!
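For a sense of scale (this bit of arithmetic is mine, not the original answerer's), a 24-bit address space covers

$$2^{24} = 16{,}777{,}216 \text{ bytes} = 16\ \text{MB},$$

about 64 times the 256 KB of real storage the question assumes, so the practical ceiling on program size was set by virtual storage and the operating system's region limits rather than by physical memory alone.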
IBM mainframe operating systems supported virtual storage back then - although you could buy a condo on the beach today for what the yearly IBM lease cost! I don't remember any insurmountable program-size issues.
One thing to consider is that back then almost everything was run in "batch" mode, which limited how complex any one program needed to be. One program would pre-process the data and store it on disk. The next might sort it and add some calculated results. The next might update a database. The last one in the batch might print a report. So complexity (and size) was broken up over several programs running in sequence.
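To make that concrete, here is a minimal sketch, not taken from any of the original answers, of what the final "print a report" step in such a batch chain might look like. The program name RPTSTEP, the file names MASTER.DAT and REPORT.TXT, and the record layout are all hypothetical, and the LINE SEQUENTIAL clause is a modern-compiler convenience (e.g. GnuCOBOL) rather than anything a period mainframe would have used for file assignment.

```
       IDENTIFICATION DIVISION.
       PROGRAM-ID. RPTSTEP.
      * Final step of a hypothetical batch chain: read the file
      * left on disk by the earlier steps and print one report
      * line per record.  Names and record layout are made up.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT MASTER-IN  ASSIGN TO "MASTER.DAT"
               ORGANIZATION IS LINE SEQUENTIAL.
           SELECT REPORT-OUT ASSIGN TO "REPORT.TXT"
               ORGANIZATION IS LINE SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  MASTER-IN.
       01  MASTER-REC.
           05  M-PART-NO         PIC X(8).
           05  M-QTY             PIC 9(5).
       FD  REPORT-OUT.
       01  REPORT-LINE           PIC X(80).
       WORKING-STORAGE SECTION.
       01  WS-EOF                PIC X VALUE "N".
       01  WS-DETAIL.
           05  FILLER            PIC X(6)  VALUE "PART: ".
           05  WS-PART           PIC X(8).
           05  FILLER            PIC X(7)  VALUE "  QTY: ".
           05  WS-QTY            PIC ZZZZ9.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT MASTER-IN
                OUTPUT REPORT-OUT
           PERFORM UNTIL WS-EOF = "Y"
               READ MASTER-IN
                   AT END
                       MOVE "Y" TO WS-EOF
                   NOT AT END
                       MOVE M-PART-NO TO WS-PART
                       MOVE M-QTY     TO WS-QTY
                       MOVE WS-DETAIL TO REPORT-LINE
                       WRITE REPORT-LINE
               END-READ
           END-PERFORM
           CLOSE MASTER-IN REPORT-OUT
           STOP RUN.
```

Each step in the chain would be a small, self-contained program along these lines, reading whatever the previous step left on disk, so no single program ever had to hold the whole job in memory at once.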
I was an administrator of a Unisys System 1100 that had 1 MB of main storage. We supported about 150 users of a fairly complex munitions inventory system. The application was written in COBOL.