That really depends on what procInEachDb does. I can understand the use case where you have the same code in each database that you need to execute independently, because it relies on locally-scoped objects, manipulates data, pushes data elsewhere, etc. Not a lot can be done against multiple databases in a single operation, especially when the set of databases is dynamic (arguably, with a static set you could write UNIONs across databases and the like, but you still couldn't perform writes that way). But I'd really try to learn from the client what the procedure does and why it is broken out this way. It may very well be the best way to do what the client needs, and in that case there is no bad smell at all, except that maybe it could use better error handling. =)
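For illustration, with a static set of databases you can read across them in one statement (SalesDb1, SalesDb2, and the dbo.Orders table here are hypothetical names), but there is no analogous single statement that performs the write side in each database:

```sql
-- Read-only aggregation across a known, static set of databases.
-- This only works because each database name can be hard-coded.
SELECT N'SalesDb1' AS SourceDb, OrderID, OrderDate
FROM SalesDb1.dbo.Orders
UNION ALL
SELECT N'SalesDb2', OrderID, OrderDate
FROM SalesDb2.dbo.Orders;
```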
I manage one system where we have 500+ roughly identical databases on an instance, and we need to run the same set of regularly scheduled tasks in each one. I don't want to create 500 Agent jobs, and I don't want a single job with 500 steps, so instead I created a stored procedure that behaves like sp_MSForEachDB (which is essentially the same as the code you show above) with a few tweaks, along the lines of the sketch below. I re-use this procedure for a lot of commands I want to run against all of the databases, and it takes arguments that allow exclusions, etc.
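Here is a minimal sketch of that pattern, assuming a ?-token convention like sp_MSForEachDB's; the procedure name, parameters, and exclusion format are illustrative, not the actual code I run:

```sql
CREATE PROCEDURE dbo.usp_ForEachDB
    @Command     nvarchar(max),         -- command to run; ? is replaced with the database name
    @ExcludeList nvarchar(max) = NULL   -- comma-separated database names to skip
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE state_desc = N'ONLINE'
          AND database_id > 4  -- skip the system databases
          AND (@ExcludeList IS NULL
               OR CHARINDEX(N',' + name + N',',
                            N',' + REPLACE(@ExcludeList, N' ', N'') + N',') = 0);

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = REPLACE(@Command, N'?', QUOTENAME(@db));
        BEGIN TRY
            EXEC (@sql);
        END TRY
        BEGIN CATCH
            -- report the failing database and keep going instead of aborting the loop
            PRINT N'Failed in ' + @db + N': ' + ERROR_MESSAGE();
        END CATCH;
        FETCH NEXT FROM dbs INTO @db;
    END

    CLOSE dbs;
    DEALLOCATE dbs;
END
```

A single Agent job step can then call, for example, EXEC dbo.usp_ForEachDB @Command = N'EXEC ?.dbo.procInEachDb;', @ExcludeList = N'DBA_Utility'; (the exclusion name is hypothetical), and new databases get picked up automatically because the list is built from sys.databases at run time.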
Yes, I could have done something similar with Service Broker, but I find that for some things I need more visibility into, and control over, what is actually going on. And of course manual, one-time tasks are easy with tools like SQL Farms Combine or Red Gate's multi-script tool.