I agree with the other answer (by wrang-wrang) "in theory".
In practice, Ackermann is not too useful, because the only algorithm complexities you tend to encounter involve 1, N, N^2, N^3, and each of those multiplied by log N. (And since log N never exceeds 64 for any input that fits in a 64-bit machine, it's effectively a constant term anyway.)
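To make the "log N is effectively a constant" claim concrete, here's a quick sketch (the particular values of N are just my own illustrative picks):

```python
# Even for the largest N a 64-bit machine can address, log2(N) tops out at 64.
import math

for n in (1_000, 1_000_000, 10**12, 2**64):
    print(f"N = {n:>26,}   log2(N) = {math.log2(n):6.1f}")
```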
The point being: in practice, unless two algorithms differ in complexity by a factor of N or more, you don't care about the asymptotic difference, because real-world factors will dominate. (A function that runs in O(inverse-Ackermann) time is theoretically better than one that runs in O(log N) time, but in practice you'd measure the two actual implementations against real-world data and pick whichever performs better; see the sketch below. In contrast, complexity theory does "matter in practice" for, e.g., N versus N^2, where the asymptotic difference really does overpower any real-world effects. I find that a factor of N is the smallest difference that matters in practice.)
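As a concrete illustration of the inverse-Ackermann vs. O(log N) comparison, here's a minimal benchmark sketch using disjoint-set union-find, the classic place inverse-Ackermann shows up: one variant uses union by rank only (O(log N) per find), the other adds path compression (O(inverse-Ackermann) amortized). The class names, workload, and problem size are my own assumptions for illustration; on real data, constant factors and cache behaviour usually decide which one wins, which is exactly the point.

```python
# Two union-find variants: O(log N) per op vs. O(inverse-Ackermann) amortized.
# The measured gap is dominated by constant factors, not by the asymptotics.
import random
import time

class UnionFindLogN:
    """Union by rank only: find is O(log N) worst case."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

class UnionFindAlpha(UnionFindLogN):
    """Union by rank + path compression: O(inverse-Ackermann) amortized."""
    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:       # compress the path we just walked
            self.parent[x], x = root, self.parent[x]
        return root

def benchmark(cls, n, ops):
    uf = cls(n)
    start = time.perf_counter()
    for a, b in ops:
        uf.union(a, b)
        uf.find(a)
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 1_000_000
    random.seed(0)
    ops = [(random.randrange(n), random.randrange(n)) for _ in range(n)]
    print("O(log N) variant:            %.3fs" % benchmark(UnionFindLogN, n, ops))
    print("O(inverse-Ackermann) variant: %.3fs" % benchmark(UnionFindAlpha, n, ops))
```

Run it on your own workload: whichever variant wins, the margin will come from the real-world details, not from the difference between log N and inverse-Ackermann.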