You check the length way too often (an O(n) operation) when you could just fix up each case afterwards, at the cost of a couple of reverses:
import Data.List.Grouping (splitEvery)

foo :: Integer -> [String]
foo = map (reverse . fixIt) . splitEvery 3 . reverse . show
  where
    -- pad the short, most-significant group with zeros
    -- (the padding lands up front after the final reverse)
    fixIt [a]   = a:'0':'0':[]
    fixIt [a,b] = a:b:'0':[]
    fixIt lst   = lst
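For a quick sanity check, here is what that looks like in GHCi (a usage sketch; it assumes splitEvery chunks the string left to right, as Data.List.Grouping does). The groups come out least-significant first, with the leading group zero-padded:

ghci> foo 1234567
["567","234","001"]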
Another way is to check the length of the list ONCE and add the padding up front. This avoids the extra reverse at the cost of a single list traversal (not that that's much of a savings). In the go helper I just assume the list's length is a multiple of 3 (because, after the padding, it is) and always take 3 at a time.
import Data.List.Grouping (splitEvery)

foo :: Integer -> [String]
foo x | r == 2    = go ('0':s)       -- pad up front to a multiple of 3 digits
      | r == 1    = go ('0':'0':s)
      | otherwise = go s
  where
    r  = l `rem` 3
    l  = length s
    s  = show x
    go = reverse . splitEvery 3      -- every chunk is now exactly 3 long
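If you want to convince yourself the two versions agree, a QuickCheck property does it. This is only a sketch: fooDoubleRev and fooSingleRev are hypothetical renamings of the two foo definitions above (so both can be in scope at once), and negative inputs are excluded because show would drag a '-' into the digits.

import Test.QuickCheck

-- fooDoubleRev / fooSingleRev: hypothetical renamings of the two
-- definitions above so both can live in one module.
prop_agree :: NonNegative Integer -> Bool
prop_agree (NonNegative n) = fooDoubleRev n == fooSingleRev n

-- run with: quickCheck prop_agree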
And not that the performance matters one bit here (other code will dominate), but I like to hit things with Criterion for fun:
benchmarking doubleRev -- my first one
mean: 14.98601 us, lb 14.97309 us, ub 15.00181 us, ci 0.950
benchmarking singleRev -- my second one
mean: 13.64535 us, lb 13.62470 us, ub 13.69482 us, ci 0.950
benchmarking simpleNumeric -- this is sepp2k
mean: 23.03267 us, lb 23.01467 us, ub 23.05799 us, ci 0.950
benchmarking jetxee -- jetxee beats all
mean: 10.55556 us, lb 10.54605 us, ub 10.56657 us, ci 0.950
benchmarking original
mean: 21.96451 us, lb 21.94825 us, ub 21.98329 us, ci 0.950
benchmarking luqui
mean: 17.21585 us, lb 17.19863 us, ub 17.25251 us, ci 0.950
-- benchmarked heinrich at a later time
-- His was ~ 20us
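For reference, a minimal Criterion harness along these lines produces that kind of report (a sketch, not the exact setup used; it reuses the hypothetical fooDoubleRev/fooSingleRev names from above, and the input value is arbitrary):

import Criterion.Main

main :: IO ()
main = defaultMain
  [ bench "doubleRev" $ nf fooDoubleRev 1234567890123456  -- nf forces the whole result list
  , bench "singleRev" $ nf fooSingleRev 1234567890123456
  ]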
Also, this is a good opportunity to point out that your best guess about what's fastest often doesn't pan out (at least, not for me). If you want to optimize, profile and benchmark; don't guess.