views: 71

answers: 4

Hi! I think this question is more about style: I have an algorithm with a very high cyclomatic complexity (and a lot of lines!). I want to reduce it, and that's easy since there are pieces of code that can be grouped together. The problem is that, doing things this way, I would end up with a "big" function calling "small" functions that are each only called once.

In my opinion, breaking a big function into small pieces is better for the legibility of the code (in this case), even though the functions are only called once.

What do you think? What do you do in similar cases?

+6  A: 

Breaking a big function into smaller, mostly separate chunks is a perfectly good idea. It makes the code more readable and the control flow clearer.

If the functions are static and called only once, the compiler will probably inline them for you without even being asked, so there's no need to worry about runtime cost.
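
For illustration, a minimal sketch (the helper name is invented, not from the question): a static function with a single call site, which GCC or Clang will usually inline at -O2, so the split costs nothing at runtime.

#include <stdio.h>

/* Small helper with exactly one caller. Because it is static and called
   only once, the compiler is free to inline it and emit no separate
   function at all. */
static int sum_of_squares(const int *values, int count)
{
    int total = 0;
    for (int i = 0; i < count; i++)
        total += values[i] * values[i];
    return total;
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };

    printf("%d\n", sum_of_squares(data, 4));
    return 0;
}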

Thomas
+1 for mentioning inlining of static functions
tristopia
On second thought, even if not static, the functions could still be inlined, of course, at the cost of some code duplication. And even if that does not happen, link-time code generation might still do the inlining.
Thomas
+3  A: 

There is nothing wrong with having functions that are only called once.
They keep your code tidy, and you lose nothing: the added function calls have no real performance impact for functions you only call once.

Arkaitz Jimenez
A: 

As already said, splitting a big function into several smaller ones has many advantages: readability (with proper naming), tighter grouping of local variables (temporaries used within a function live closer together, giving better cache behaviour), and sometimes one of these functions turns out to be reusable elsewhere, which was not visible before. A rough sketch of such a split follows.
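
A rough sketch of that kind of split (all names here are invented for illustration): each helper is called exactly once, but the top-level function now reads as a list of named steps, and a helper like average() might later be reused elsewhere.

#include <stdio.h>

static double average(const double *samples, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++)
        sum += samples[i];
    return count > 0 ? sum / count : 0.0;
}

static void print_report(double avg)
{
    printf("average: %f\n", avg);
}

/* The top-level function reads as a list of named steps,
   even though each helper has exactly one caller. */
void process_samples(const double *samples, int count)
{
    double avg = average(samples, count);
    print_report(avg);
}

int main(void)
{
    double samples[] = { 1.5, 2.5, 3.0 };

    process_samples(samples, 3);
    return 0;
}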

tristopia
+2  A: 

Going a step beyond the inlining question: there are a lot of functions that are only ever called once.

Let's say we have a structure like this:

typedef struct foo {
     char *foo;
     int bar;
     double foobar;
} foo_t;

And we write something to initialize / allocate it:

foo_t *foome(void)
{
    foo_t *ret;

    ret = (foo_t *) malloc(sizeof(struct foo));

    ...
    ...
}

But why did we go through all that trouble when foome() is called only once, in main()? Because we want the next person who has to deal with our program to be able to look at main() and immediately understand what we were trying to accomplish.
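
A sketch of what main() might then look like (this builds on the foo_t and foome() above; the exact cleanup depends on the parts elided with ...):

int main(void)
{
    foo_t *f;

    /* main() reads as an outline: create, use, destroy. */
    f = foome();
    if (f == NULL)
        return 1;

    /*
     * ... the actual algorithm, itself split into well-named,
     * once-called functions ...
     */

    free(f);  /* plus whatever cleanup foome()'s elided parts require */
    return 0;
}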

I'd rather see code with dozens of one-time functions if it means a complex algorithm reads like a book on a single screen (or close to it). I can't tell you how much my head hurts when I have to scroll up and down a few hundred lines while trying to keep my place.

This saves sanity and helps the environment, because now I don't need to print out all 60 pages of your algorithm and arrange them on a table just to be able to read it :)

Tim Post