I'll leave aside the "slow" and "bug-ridden" parts, because they're basically cheap shots. Nothing prevents a deliberate, formally specified implementation of all of Common Lisp from being slow or bug-ridden either.
As for the rest, I think it tends to be true, but a lot of the reason it's true is that Common Lisp provides quite a bit, and C provides so very little. Stuff as basic as hash tables, linked lists, and expandable vectors is left up to the user to implement. All memory management is manual, though you can automate it with reference counting or even by bolting on a garbage collector. You can roll your own polymorphic OO by stashing function pointers in a struct somewhere and using it as a vtable (see the sketch below). You might add an interpreter for a small language so you can easily script your application, or to ease interaction and testing at runtime.
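To make that concrete, here's a minimal sketch of the kind of machinery that tends to accrete in a C codebase: a hand-rolled vtable (a struct of function pointers) for polymorphic dispatch, combined with intrusive reference counting to semi-automate memory management. All the names here are hypothetical, purely for illustration.

```c
/* Hypothetical sketch: a struct of function pointers as a vtable,
 * plus an embedded refcount to automate object lifetimes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Object Object;

/* The "class": a table of function pointers shared by all instances. */
typedef struct {
    void (*print)(const Object *self);
    void (*destroy)(Object *self);
} VTable;

struct Object {
    const VTable *vtable;   /* polymorphic dispatch goes through here */
    int refcount;           /* manual memory management, semi-automated */
};

/* Generic helpers that work on any Object. */
static void object_print(const Object *o) { o->vtable->print(o); }
static void object_retain(Object *o)      { o->refcount++; }
static void object_release(Object *o) {
    if (--o->refcount == 0)
        o->vtable->destroy(o);
}

/* One concrete "subclass": a string wrapper.  Embedding the base
 * struct first is the usual poor man's inheritance. */
typedef struct {
    Object base;
    char *text;
} StringObj;

static void string_print(const Object *self) {
    const StringObj *s = (const StringObj *)self;
    printf("\"%s\"\n", s->text);
}

static void string_destroy(Object *self) {
    StringObj *s = (StringObj *)self;
    free(s->text);
    free(s);
}

static const VTable string_vtable = { string_print, string_destroy };

static Object *string_new(const char *text) {
    StringObj *s = malloc(sizeof *s);
    s->base.vtable = &string_vtable;
    s->base.refcount = 1;
    s->text = malloc(strlen(text) + 1);
    strcpy(s->text, text);
    return &s->base;
}

int main(void) {
    Object *o = string_new("hello");
    object_print(o);     /* dispatches through the vtable */
    object_retain(o);
    object_release(o);   /* still one reference left */
    object_release(o);   /* refcount hits zero, object frees itself */
    return 0;
}
```

The point isn't that any of this is hard to write; it's that Common Lisp ships dispatch and memory management out of the box, so none of it has to be written at all.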
That's pretty much half of Common Lisp; Perl really does provide all of it, plus some other Lispy features to boot. But it no longer seems like such a big deal, because one thing that's changed in the 15+ years since Greenspun coined his Tenth Rule (while skipping the other nine) is that lots of languages have added that same half of Common Lisp. Guy Steele said as much about Java a few years later: it was dragging programmers halfway to Lisp.