You're asking if your code is exploitable. Yes. All code is exploitable. You might not think so because you've covered every situation you can think of, but the other side typically finds a situation you haven't thought about.
Security is more than just the code. You have to consider the environment it runs in, what else the user was allowed to do before they ran your code, and so on.
If you're truly worried about what might happen with this code, create a risk matrix. Start with the part that you're worried about and list all of its assumptions. For instance, in your case you might start with:
- /home/username is the directory I think it is (i.e. not a mount point, symlink, fake user, etc)
- the supplied path is one I expect and is allowed to exist
- the path is a regular file (e.g. not a special device)
- the path has a certain owner, group, or mode
- I'm running the `perl` I think I am (no path attack in finding the executable)
- `PERL5LIB`, `PERL5OPT`, or `-I` did not front-load module search paths (no path attack in finding modules)
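Several of the filesystem assumptions above can be verified in code before you touch the file. Here's a minimal sketch; the helper name and the particular checks are illustrative, not a complete list:

```perl
use strict;
use warnings;

# check_path: return an error string when the path violates one of our
# assumptions, or undef when it passes. Extend with your own checks.
sub check_path {
    my ($path) = @_;

    return 'symlink'          if -l $path;       # lstat looks at the link itself
    return 'not a plain file' unless -f _;       # reuse the cached stat buffer
    my ($mode, $uid) = (stat _)[2, 4];
    return 'wrong owner'      unless $uid == $<; # must belong to the real uid
    return 'too permissive'   if $mode & 0022;   # group- or world-writable
    return undef;                                # all assumptions hold
}
```

Even then there's a gap between the check and the eventual open (a time-of-check/time-of-use race), which is itself another row for your matrix.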
And so on and so on. Once you develop all of your assumptions, you ensure that they are valid by locking down those cases. You also find all of their assumptions, and lock down those, and so on. Perl's taint checking will help with some of those (and I talk about it in more depth in Mastering Perl).
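Taint checking deserves a concrete illustration. Run perl with `-T` and everything arriving from outside the program (arguments, environment, file contents) is tainted; Perl refuses to use tainted data in dangerous operations until you launder it through a regular-expression capture. A sketch, where the pattern is deliberately strict and only an example of what you might choose to allow:

```perl
use strict;
use warnings;

# Under perl -T, data from @ARGV or %ENV is tainted. A regex capture is
# how you declare "I've checked this": the captured text is clean.
# untaint_filename allows only a bare filename: a word character first,
# then word characters, dots, or hyphens. No directory separators.
sub untaint_filename {
    my ($name) = @_;
    my ($clean) = ($name // '') =~ /\A(\w[\w.-]*)\z/;
    return $clean;    # undef when the name doesn't match the allowed pattern
}
```

The important habit is deciding what you allow rather than enumerating what you forbid, since the attacker's job is to think of the cases you didn't.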
Successful attacks are often indirect ones. For instance, I was part of a job to secure some data in a very rich and paranoid bank. We did all the computery stuff we could do, and one of my co-workers, in idle conversation, asked how they did the task before we installed the server. They said, "Oh, the data is in a binder on so-and-so's desk". Despite all of our work, their pay, and everyone's time and effort, anyone on the inside who wanted the data could quite literally walk off with it no matter what we did with the server.
Now that you have your risk matrix, you start developing your risk tolerance. Nothing is ever going to be perfect, and you could work to the heat death of the universe locking everything down. Instead of being perfect, you settle for how much risk you're willing to take on for each part of the code. You figure out what could happen if one part is compromised and how much that would cost you (in dollars, reputation, whatever) and figure out how much work that is worth to you (or your employers). You do just enough work to be below your risk tolerance.
The problem is that even the best people will miss something. Small cracks in security might not seem that important, but if you put enough together you can eventually bootstrap yourself into an exploitable situation. Security is holistic.