tags:

views:

419

answers:

4

When writing more than a trivial script in bash, I often wonder how to make the code testable.

It is typically hard to write tests for bash code, because it is low on functions that take a value and return a value, and high on functions that check and set some aspect of the environment, modify the file-system, invoke a program, and so on. Thus the setup and test code become much more complicated than the code under test.


For example, consider a simple function to test:

function add_to_file() {
  local f=$1
  cat >> "$f"
  sort -u "$f" -o "$f"
}

Test code for this function might consist of:

add_to_file.before:

foo
bar
baz

add_to_file.after:

bar
baz
foo
qux

And test code:

function test_add_to_file() {
   cp add_to_file.{before,tmp}
   echo qux | add_to_file add_to_file.tmp
   cmp add_to_file.{tmp,after} && echo pass || echo fail
   rm add_to_file.tmp
}

Here 5 lines of code are tested by 6 lines of code and 7 lines of data.
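A slightly more defensive variant of this test (a hypothetical sketch, not part of the original) builds its fixtures in a throwaway `mktemp` directory, so a failing run cannot leave stale files behind:

```shell
# Hypothetical variant of test_add_to_file: fixtures live in a mktemp
# directory, so nothing is left behind even when the comparison fails.
function add_to_file() {
  local f=$1
  cat >> "$f"
  sort -u "$f" -o "$f"
}

function test_add_to_file() {
  local dir
  dir=$(mktemp -d) || return 1
  printf '%s\n' foo bar baz     > "$dir/before"
  printf '%s\n' bar baz foo qux > "$dir/after"
  cp "$dir/before" "$dir/tmp"
  echo qux | add_to_file "$dir/tmp"
  cmp -s "$dir/tmp" "$dir/after" && echo pass || echo fail
  rm -rf "$dir"
}

test_add_to_file
```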


Now consider a slightly more complicated case:

function distribute() {
   local file=$1 ; shift
   local hosts=( "$@" )
   for host in "${hosts[@]}" ; do
     rsync -ae ssh "$file" "$host:$file"
   done
}

I can't even say how to start writing a test for that...
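One common technique (a sketch under assumptions, not from the original question) is to put a stub `rsync` earlier on `PATH`, so the test records what would have been run instead of actually copying anything over the network:

```shell
# Hypothetical sketch: test distribute() by shadowing rsync with a stub
# that logs its arguments to a file instead of touching the network.
function distribute() {
   local file=$1 ; shift
   local hosts=( "$@" )
   for host in "${hosts[@]}" ; do
     rsync -ae ssh "$file" "$host:$file"
   done
}

function test_distribute() {
  local stubdir log
  stubdir=$(mktemp -d) || return 1
  log=$stubdir/rsync.log
  cat > "$stubdir/rsync" <<EOF
#!/bin/sh
printf '%s\n' "\$*" >> $log
EOF
  chmod +x "$stubdir/rsync"

  # Run in a subshell so the PATH change does not leak out.
  ( PATH=$stubdir:$PATH ; distribute /etc/motd host1 host2 )

  # Expect one rsync invocation per host, with the right arguments.
  [ "$(wc -l < "$log")" -eq 2 ] &&
    grep -q 'host1:/etc/motd' "$log" &&
    grep -q 'host2:/etc/motd' "$log" &&
    echo pass || echo fail
  rm -rf "$stubdir"
}

test_distribute
```

This assumes the stub directory path contains no spaces (true for typical `mktemp -d` output).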


So, is there a good way to do TDD in bash scripts, or should I give up and put my efforts elsewhere?

+4  A: 

If you code a bash program large enough to require TDD, you are using the wrong language.

I suggest you read my previous post on best practices in bash programming; you will probably find something useful to make your bash program testable, but my statement above stands.

http://stackoverflow.com/questions/78497/design-patterns-or-best-practices-for-shell-scripts/739034#739034

Stefano Borini
No. 1, Bash is a capable language; it can be the right tool for many jobs. 2. Even if this is true, I might not be in a position to choose the tool. This answer simply avoids the question. Sorry.
Chen Levy
bash is the right tool for many jobs, but not for all of them. If a task is clearly too demanding for a bash script, you are using the wrong tool. As for point 2, I can agree with you, but be very wary of a manager forcing you to use a blatantly wrong tool for a complex task. If it screws up, it will be your fault.
Stefano Borini
You wrote yourself: "I had to use it due to human factors and legacy", it is not "a manager forcing [me] to use a blatantly wrong tool". Anyway thanks for the link to http://stackoverflow.com/questions/78497/design-patterns-or-best-practices-for-shell-scripts/.
Chen Levy
+5  A: 

If you are writing code at the same time as tests, try to make it high on functions that use nothing besides their parameters and don't modify the environment. That is, if your function might as well run in a subshell, then it will be easy to test. It takes some arguments and outputs something to stdout, or to a file, or maybe it does something on the system, but the caller does not feel side effects.

Yes, you will end up with a big chain of functions passing down some WORKING_DIR variable that might as well be global, but this is a minor inconvenience compared to the task of tracking what each function reads and modifies. Enabling unit tests is just a free bonus, too.

Try to minimize cases where you need output. A little subshell abuse will go a long way toward keeping things nicely separated (at the expense of performance).

Instead of a linear structure, where functions are called, some environment is set, then other functions are called, all pretty much on one level, try to go for a deep call tree with minimal data going back. Returning values in bash is inconvenient if you adopt self-imposed abstinence from global vars...
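As an illustration of this style (a hypothetical example, not from the answer): a function that depends only on its arguments and writes only to stdout needs no setup or teardown to test.

```shell
# Hypothetical example of a side-effect-free function: it reads only
# its arguments and writes only to stdout.
function join_by() {
  local sep=$1 ; shift
  printf '%s' "$1" ; shift
  if [ "$#" -gt 0 ] ; then
    printf "${sep//%/%%}%s" "$@"   # escape % so printf treats sep literally
  fi
  printf '\n'
}

# The test is a plain comparison of captured stdout.
[ "$(join_by , a b c)" = "a,b,c" ] && echo pass || echo fail
```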

Eugene
I sense a book coming on: "Functional programming in POSIX sh".
Jörg W Mittag
Yes! We need first class function pointers! Or very cheap evals!
Eugene
+4  A: 

From an implementation point of view, I suggest shUnit.

From a practical point of view, I suggest not to give up. I use TDD on bash scripts and I confirm that it is worth the effort.

Of course, I get about twice as many lines of test as of code, but with complex scripts, the effort spent on testing is a good investment. This is true in particular when your client changes their mind near the end of the project and modifies some requirements. Having a regression test suite is a big aid in changing complex bash code.
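To give a flavour of the approach, here is a minimal home-grown sketch of an xUnit-style driver in plain bash. shUnit itself supplies assertions and test discovery, so treat the names below as hypothetical, not as shUnit's API:

```shell
# Minimal sketch of an xUnit-style test driver in plain bash;
# shUnit provides a richer version of the same idea.
failures=0

function assert_equals() {
  local expected=$1 actual=$2 label=$3
  if [ "$expected" = "$actual" ] ; then
    echo "PASS: $label"
  else
    echo "FAIL: $label (expected '$expected', got '$actual')"
    failures=$((failures + 1))
  fi
}

function test_sort_dedup() {
  assert_equals "a b c" "$(printf 'b\nc\na\nb\n' | sort -u | xargs)" \
    "sort -u removes duplicates"
}

test_sort_dedup
echo "failures: $failures"
```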

mouviciel
+5  A: 

So here is what I learned:

  1. It is not so much that Bash is unsuitable for TDD (although some other languages come to mind that are a better fit), but that the typical tasks Bash is used for (installation, system configuration) are hard to write tests for, and in particular hard to set up tests for.

  2. The poor data-structure support in Bash makes it hard to separate logic from side effects, and indeed there is typically little logic in Bash scripts. This makes it mostly hard to break the scripts into testable chunks. There are some functions that can be tested, but that is the exception, not the rule.

  3. Functions are a good thing (tm), but they can only go so far.

  4. Nested functions can be even better, but they are also limited.

  5. At the end of the day, with major effort some coverage can be obtained, but it will test the less interesting parts of the code, and will leave the bulk of the testing as good (or bad) old manual testing.

Meta: I decided to answer (and accept) my own question, because I was unable to choose between Sinan Ünür's (voted up) and mouviciel's (voted up) answers, which were equally useful and insightful. I want to note Stefano Borini's answer, which did not impress me initially, but which I learned to appreciate over time. Also, his design patterns or best practices for shell scripts answer (voted up), referred to above, was useful.

Chen Levy