Personally, and maybe in part because I was using Unix long before vim existed (heck, the first version of Unix I used didn't have "vi" either - but that's another story), I would normally use a shell script (or, more likely, a Perl script) to do the transform. Converting CSV data to INSERT statements while handling quoted and unquoted fields and embedded commas in full generality is messy -- I'd use a Perl script with Text::CSV_XS to get the parsing right. I'd then run that script on the range of text that needed converting, using vim's `!` filter command.
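To give the flavour of it, here's a sketch of such a filter - not my actual script; the table name "mytable" and the simple-minded SQL quoting are placeholders, and real code would worry about column names, types, and the target database's quoting rules:

    #!/usr/bin/env perl
    # csv2insert: read CSV on standard input, write INSERT statements
    # on standard output. Sketch only - see the caveats above.
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1 })    # binary => 1 copes with embedded newlines
        or die "Cannot create Text::CSV_XS: " . Text::CSV_XS->error_diag();

    while (my $row = $csv->getline(\*STDIN)) {
        # Naive SQL quoting: double any embedded single quotes, then wrap.
        my @vals = map { (my $v = $_) =~ s/'/''/g; "'$v'" } @$row;
        print "INSERT INTO mytable VALUES (", join(", ", @vals), ");\n";
    }
    $csv->eof or $csv->error_diag();

From within vim, you select the lines to convert and filter them through the script with something like :'<,'>!csv2insert (assuming it's on your PATH under that name); the range is replaced by the command's output.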
One advantage of this is the focussed-tool approach: one tool does one job right. My private bin directory has 300 or more scripts and programs in it; the RCS directory under it has over 500 scripts.
This is not to say that scripting in vim is bad. I use (more or less) complex map commands for one-off manipulations, typically when I'm about to make the same change across a suite of files and don't think it's worth creating a script for the job. However, if I think I might need the change more than once, then I'll script it.

For example, GCC started to get uppity (circa 2005) about not embedding unused static strings in object files - which meant my version control information wasn't visible in the binaries. So, over a period of years, as I've edited the source files, I've converted those version strings from static (file-local) names to public names - reluctantly, but necessarily AFAIAC. I have a script that does that edit for me, so when I need to make the change in a file, I run the script. I have another script that updates the copyright information; I need that the first time I modify a file in a given year. Yeah, I could probably stash that away as something in vim -- but I grew up thinking that the separate script is better, not least because if I switch to any other editor, I can still use the script.
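The copyright updater can be small. This is a sketch rather than my actual script (the name "updcopyright" is invented, and it assumes notices of the form 'Copyright (C) 2001-2019' or 'Copyright (C) 2001' on a single line):

    #!/usr/bin/env perl
    # updcopyright: extend 'Copyright (C) YYYY[-YYYY]' notices to the
    # current year, editing the named files in place (keeping .bak copies).
    use strict;
    use warnings;

    my $year = (localtime)[5] + 1900;

    $^I = '.bak';       # enable in-place editing with backups
    while (<>) {
        # 'Copyright (C) 2001-2019' -> 'Copyright (C) 2001-2024'
        # (a lone current-year notice becomes 'YYYY-YYYY'; good enough for a sketch)
        s/(Copyright \(C\) \d{4})(?:-\d{4})?/$1-$year/g;
        print;
    }

Run as updcopyright *.c or on whichever files need it - and because it's an ordinary script, it works no matter which editor I happen to be using, which is rather the point.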