There's no question that we should code our applications to protect themselves against malicious, curious, and/or careless users, but what about protecting them from current and/or future colleagues?
For example, I'm writing a web-based API that accepts parameters from the user. Some of these parameters map to sections in a configuration file. If the user messes with the URL and provides an invalid value for a parameter, my application would error out when it tried to read from a section of the configuration file that doesn't exist. So of course, I scrub params before trying to read from the config file.
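To make the setup concrete, here's a minimal sketch of what I mean. Python and configparser are just stand-ins for illustration, and the names (ALLOWED_REPORTS, reports.ini, get_report_settings) are hypothetical:

```python
import configparser

# Hypothetical allow-list of parameter values the API accepts.
ALLOWED_REPORTS = {"daily", "weekly", "monthly"}

config = configparser.ConfigParser()
config.read("reports.ini")  # each allowed value has a matching [section]

def get_report_settings(report_type: str) -> dict:
    # Scrub the user-supplied parameter: reject anything not on the allow-list.
    if report_type not in ALLOWED_REPORTS:
        raise ValueError(f"Unknown report type: {report_type!r}")
    # From here on I assume the section exists, because I added both the
    # allow-list entry and the config section myself. If a future value is
    # added to the allow-list without a matching section, this line raises.
    return dict(config[report_type])
```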
Now, what if, down the road, another developer works on this application and adds another valid value for this parameter, one that passes the scrubbing process, but doesn't add the corresponding section to the configuration file? Remember, I'm only protecting the app from bad users, not bad coders. My application would fail.
On one hand, I know that all changes should be tested before moving to production, and something like this would undoubtedly come up in a decent testing session. On the other hand, I try to build my applications to resist failure as well as possible. I just don't know whether it's "right" to include modification of my code by colleagues in the list of potential points of failure.
For this project, I opted not to check whether the relevant section of the config file existed. As the current developer, I wouldn't allow the user to specify a parameter value that would cause a failure, so I would expect a future developer not to introduce behavior into a production environment that could cause a failure... or at least to catch such a case during testing.
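For reference, the defensive check I skipped would have looked something like the sketch below (same hypothetical names and assumptions as above):

```python
def get_report_settings_defensive(report_type: str) -> dict:
    if report_type not in ALLOWED_REPORTS:
        raise ValueError(f"Unknown report type: {report_type!r}")
    # The extra guard: fail with a clear message if a colleague adds an
    # allowed value but forgets the matching config section.
    if not config.has_section(report_type):
        raise RuntimeError(f"Config section [{report_type}] is missing")
    return dict(config[report_type])
```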
What do you think?
Lazy... or philosophically sound?