views: 256
answers: 6
There's no question that we should code our applications to protect themselves against malicious, curious, and careless users, but what about protecting them from current and future colleagues?

For example, I'm writing a web-based API that accepts parameters from the user. Some of these parameters map to sections in a configuration file. If the user messes with the URL and provides an invalid value for a parameter, my application would error out when trying to read from a section of the configuration file that doesn't exist. So of course, I scrub params before trying to read from the config file.

Now, what if, down the road, another developer working on this application adds another valid value for this parameter, one that would pass the scrubbing process, but doesn't add the corresponding section to the configuration file? Remember, I'm only protecting the app from bad users, not bad coders. My application would fail.
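To make the scenario concrete, here's a rough sketch of the kind of code involved (the names, values, and config layout are hypothetical, just to show where it falls over):

    import configparser

    config = configparser.ConfigParser()
    config.read("app.ini")

    # Values the scrubber accepts. A future developer might add "pdf" here...
    ALLOWED_FORMATS = {"json", "xml"}

    def handle_request(fmt):
        if fmt not in ALLOWED_FORMATS:
            raise ValueError("unsupported format")  # protects against bad users
        # ...but if no [pdf] section was added to app.ini, this lookup raises
        # KeyError -- the colleague-shaped point of failure.
        return config[fmt]["template"]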

On one hand, I know that all changes should be tested before moving to production, and something like this would undoubtedly come up in a decent testing session; but on the other hand, I try to build my applications to resist failure as best I can. I just don't know if it's "right" to include modification of my code by colleagues in the list of potential points of failure.

For this project, I opted not to check if the relevant section of the config file existed. As the current developer, I wouldn't allow the user to specify a parameter value that would cause failure, so I would expect a future developer to not introduce behavior into a production environment that could cause failure... or at least eliminate such a case during testing.

What do you think?

Lazy... or philosophically sound?

A: 

I think it is the future developer's responsibility to ensure that she does not introduce bugs or failure points into your code. When you sign off from the project (if ever?!), part of the sign-off process should at least be that the code has been presented as bug-free as possible; that would then limit the liability you hold for future problems.

If your code is kept in a version control system, it would be trivial to create a tag marking the point at which you handed over your code. That lets you compare the current codebase to your original should a bug arise that someone tries to pin on you (if that's the angle you're coming from!), and so prove that it was the changes made to your original implementation that caused the bug. (Assuming, of course, that those changes do cause unexpected behaviour and don't fix your "bug-free" code, grin.)

One method I have used in the past to ensure data integrity (and it isn't foolproof) is to check an input at specific offsets for specific values, to make sure the input wasn't tainted.
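A rough Python sketch of that kind of offset check (the record layout, offsets, and expected values here are hypothetical):

    def looks_untainted(record: bytes) -> bool:
        """Sanity-check a fixed-width record by probing known positions."""
        # Hypothetical layout: byte 0 is a record-type marker, bytes 4-6
        # hold the literal "URN", and the record is exactly 64 bytes long.
        return (
            len(record) == 64
            and record[0:1] == b"R"
            and record[4:7] == b"URN"
        )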

Hope that helps.

Gav
Just to expand on input sanity checking a little: think CSV files. If strings are not quoted and contain commas, the format breaks. Your application can either try to work around this (the million-monkey/better-idiot-born-each-day problem) or just refuse to import it, as sketched below (snarky, but effective at ensuring incorrect values don't corrupt everybody else's work!).
Gav
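To illustrate the "just refuse to import it" approach Gav describes, a Python sketch (the expected field count is hypothetical):

    import csv

    def import_rows(path, expected_fields=3):
        with open(path, newline="") as fh:
            for lineno, row in enumerate(csv.reader(fh), start=1):
                if len(row) != expected_fields:
                    # Unquoted commas shift the columns; refuse rather than guess.
                    raise ValueError(
                        f"line {lineno}: expected {expected_fields} fields, got {len(row)}"
                    )
                yield row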
Assigning blame is not productive. Any process that concentrates on assigning/passing/deflecting blame for mistakes is a waste of time that could be spent on something useful... like preventing the mistakes from being made in the first place.
Michael Borgwardt
Agreed. However, having version control can help one pinpoint bugs and how they were introduced into a system, whether by the original developer or a new one. Having a history log of changes to an application can only help everybody understand its evolution.
Gav
+1  A: 

It sounds like you're taking reasonable steps to protect against incompetence by scrubbing the input. I don't believe you're responsible for protecting against every possible misuse of your code or every bad input. I'd go further than that and say that as long as your code explicitly documents what is and isn't an acceptable input, you've done enough, particularly if the added "idiot error checking" code would be bloated or slow.

A procedure that documents exactly what inputs are acceptable is reasonable for an internal API. That said, I often code (over-)defensively, but that's mostly due to the environment I'm in and my level of trust in the rest of the code.
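For instance, the contract can live right on the procedure itself (a Python sketch; the names and accepted formats are hypothetical):

    VALID_FORMATS = ("json", "xml")

    def render_report(fmt: str) -> str:
        """Render a report in the given format.

        fmt must be one of VALID_FORMATS; anything else raises ValueError.
        This is the whole input contract -- callers and future maintainers
        are expected to respect it.
        """
        if fmt not in VALID_FORMATS:
            raise ValueError(f"unsupported format: {fmt!r}")
        return f"<report format={fmt}>"  # placeholder body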

Steve B.
A: 

Both ways are fine, philosophically speaking. I would make the judgment based on how likely you think it is for this to happen. If you can be almost positive that somebody will break your code in that way, it might be polite for you to provide a check that will catch that when it happens.

If it doesn't seem particularly likely, then it's just part of their job to make sure they don't break the code.

In either event though, your technical notes (or other appropriate documentation) should clearly indicate that when the one change is made, the other change is also required.
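If you do add such a check, it can at least fail with a message that names both required changes instead of an opaque lookup error (a Python sketch, assuming an INI-style config; the file and section names are hypothetical):

    def get_section(config, name):
        # Fail fast, and say exactly which change was forgotten.
        if not config.has_section(name):
            raise RuntimeError(
                f"parameter value {name!r} passed validation but app.ini has no "
                f"[{name}] section -- add one (see the technical notes)"
            )
        return config[name]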

Adam Bellaire
+6  A: 

"I just don't know if it's "right" to include modification of my code by colleagues in the list of potential points of failure."

It isn't right to try to prevent your colleagues from breaking things.

You don't know what new purposes your software will be put to. You don't know how it will be modified in the future.

Instead, do this.

Write simple correct software, put it into production, and stop worrying about somebody "breaking" something.

If your software is actually simple, other people can maintain it without breaking it.

If you make it too complex, they will (a) break it anyway, in spite of everything you do and (b) hate you for making it complex.

So, make it as simple as possible to do the job.

S.Lott
I like this. The idea that your code should be self-sufficient and can be re-purposed without modification captures a certain Unix-style (TAOUP) vibe which is inspirational.
Gav
A: 

I would use an inline comment for future developers, or for a developer like me who tends to forget what was going on in every part of the application after not working on it for months.

Don't worry about actually coding to foil future coders; that's impossible. Just put all the information someone needs to support or extend what you're doing right there in the source code.

This is documentation, as Steve B. mentioned. I'd just make sure it's not external, as that has a tendency to get lost.

Beth
+1  A: 

Lazy... or philosophically sound?

Lazy... and arrogant. By coding in a way that makes mistakes show up quickly, you protect the app against your own mistakes just as much as against the mistakes of others. Everyone makes more mistakes than they think.

Of course, rather than adding more code to detect whether the config file and the parameter checking match, it would be much better if the parameter checking were based on the config file so that there's only one place where new values are added and an inconsistency is not possible.
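A minimal sketch of that single-source-of-truth idea, assuming a Python app and an INI-style file read with configparser (file and key names are hypothetical):

    import configparser

    config = configparser.ConfigParser()
    config.read("app.ini")

    # The set of valid parameter values *is* the set of config sections, so
    # adding a new section automatically makes the value acceptable -- there
    # is no second list to forget to update.
    VALID_FORMATS = set(config.sections())

    def handle_request(fmt):
        if fmt not in VALID_FORMATS:
            raise ValueError(f"unsupported format: {fmt!r}")
        return config[fmt]["template"]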

Michael Borgwardt
I agree. Sounds like the code is lacking some level of assertions in terms of what parameters are passed to it. Good unit testing, exception handling, and sound use of assertions would totally mitigate this risk. Also, I am sure that if you documented your code well enough, this issue could be easily averted.
Deep Kapadia
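As a concrete example of the kind of unit test Deep Kapadia mentions, a test can pin the scrubber's allowed values to the config file so a mismatch is caught long before production (a sketch using Python's unittest; the module and names are hypothetical):

    import configparser
    import unittest

    from myapp import ALLOWED_FORMATS  # hypothetical module and constant

    class ConfigConsistencyTest(unittest.TestCase):
        def test_every_allowed_value_has_a_config_section(self):
            config = configparser.ConfigParser()
            config.read("app.ini")
            for fmt in ALLOWED_FORMATS:
                self.assertIn(
                    fmt, config.sections(),
                    f"{fmt!r} passes scrubbing but has no config section",
                )

    if __name__ == "__main__":
        unittest.main()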