There are two possible questions here: what to do to allow the binary to run, and what to do to allow the source to compile and run.
There isn't that much you can do to make the binary future-proof. Go strictly by the published API, and avoid using anything undocumented. It will run if the future system supports it, and the future system is far more likely to support the standard API than anything undocumented. This was the problem with many early Macintosh programs: instead of using the API (which was clumsy for some things early on), they used shortcuts that worked in OS 5 or whatever, and didn't in OS 7.
This advice is mostly for C and C++, as languages like Java define things much better. Any pure Java program should run fine on any later JVM. (Yes, this has its own costs.)
Abstract out all the architecture-dependent stuff you can. In C and C++, use types like `size_t` and `ptrdiff_t` rather than plain integer types.
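For instance, a minimal C sketch (the function name `sum` is just for illustration) that uses `size_t` for counts and indexing and `ptrdiff_t` for pointer differences instead of assuming `int` is wide enough:

```c
#include <stddef.h>
#include <stdio.h>

/* Sum an array, using size_t for the element count and index. */
static long sum(const int *values, size_t count)
{
    long total = 0;
    for (size_t i = 0; i < count; i++)
        total += values[i];
    return total;
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };
    const int *first = data;
    const int *last  = data + 4;
    ptrdiff_t span = last - first;   /* pointer difference, not int */

    printf("span = %td, sum = %ld\n", span, sum(data, (size_t)span));
    return 0;
}
```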
When you need a type of a particular bit size, don't hard-code it as `int` or `long`. Use typedefs. The C99 header `<stdint.h>` provides fixed-width typedefs such as `int32_t` for the purpose, but even without it you can write something like `typedef int int32_t;` and later change the `int` as needed in one obvious place, rather than in hard-to-find places scattered around the program.
Try to encapsulate OS calls, since those could change in a future architecture. If you must do anything with an undocumented OS feature, document it very noticeably.
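One way to do that encapsulation is a small wrapper module, so a future port only has to touch one file. The file and function names here (`os_compat.h`, `os_page_size`) are made up for illustration, and the POSIX `sysconf` call is just one possible backend:

```c
/* os_compat.h -- every direct OS call goes through one small function. */
#ifndef OS_COMPAT_H
#define OS_COMPAT_H
#include <stddef.h>

size_t os_page_size(void);   /* returns the system memory page size */

#endif

/* os_compat_posix.c -- POSIX implementation of the wrapper above. */
#include "os_compat.h"
#include <unistd.h>

size_t os_page_size(void)
{
    long n = sysconf(_SC_PAGESIZE);    /* documented POSIX call */
    return (n > 0) ? (size_t)n : 4096; /* conservative fallback */
}
```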
If your program has anything to do with networking, assume nothing about the byte order. Network byte order is unlikely to change, but your program might wind up on a chip with a different architecture (cf. the Macintosh, which has used three different architectures in its time).
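For example, writing a 32-bit field with `htonl`/`ntohl` (standard POSIX calls) keeps the wire format identical regardless of the host's endianness:

```c
#include <arpa/inet.h>   /* htonl/ntohl on POSIX systems */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Serialize a 32-bit length field in network byte order instead of
   copying the host representation, so the bytes on the wire are the
   same on little- and big-endian machines alike. */
int main(void)
{
    uint32_t length = 0x12345678;
    unsigned char wire[4];

    uint32_t be = htonl(length);        /* host -> network (big-endian) */
    memcpy(wire, &be, sizeof be);

    uint32_t back;
    memcpy(&back, wire, sizeof back);
    printf("decoded: 0x%08X\n", (unsigned)ntohl(back));  /* network -> host */
    return 0;
}
```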
In general, assume as little as you can get away with. Use types specifically designated for machine-dependent things, and use them consistently. Do everything that lies outside the program itself in the most formal, standard, and documented way possible.