It's a tempting idea, and it certainly works if you're Paul Graham or Chuck Moore.
It might work if your domain is very, very bounded and you're never going to get a requirement thrown at you from outside it, something as simple as a client asking for an "import from Excel" feature. On the other hand, Paul Graham used Lisp to write a web shop system, which is a very broad domain of requirements; I'd be interested to know how he handled something like PDF export: would he have handed the PDF spec and a Lisp manual to some intern on summer vacation from MIT, or would he have gone down to the C libraries?
It might work if your domain is driven by logic or by enduring natural principles, something like an astronomy simulation. If the requirements are human ones, they'll be full of contradictions and special cases (string and date libraries fall into that category, by the way), and no abstraction or language feature entirely cuts through that: you'll have to slog through the special cases whether you're writing Haskell or PHP.
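To make that concrete, here's a minimal sketch of my own (not anything from the original argument) showing how even humble date handling is a pile of enumerated special cases rather than something a clever abstraction dissolves:

```python
# A hypothetical illustration: "how many days in a month" has no clean formula,
# only accumulated human conventions, whatever language you write it in.
def days_in_month(year: int, month: int) -> int:
    if month == 2:
        # Leap year rules are themselves layered exceptions:
        # divisible by 4, except centuries, except every 400 years.
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        return 29 if leap else 28
    # The other months don't follow a rule; you just enumerate them.
    return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

assert days_in_month(1900, 2) == 28   # a century year, not a leap year
assert days_in_month(2000, 2) == 29   # divisible by 400, so it is
```

The point isn't that this is hard to write, it's that nothing about Haskell or Lisp makes the exceptions go away; they're in the problem, not the language.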
It might work where optimisation is very, very important (EDIT: and where you're smart enough to optimise it yourself): you have a stripped-down system where you know every layer of the stack because you've implemented it yourself with a particular goal in mind.
I associate the whole cluster of ideas with grad students: they're in the top 1% in programming skills and general smartness; they're working in a very narrow domain; they may not have the best equipment, so they're trying to strip things down and optimise in depth; and they don't have the learning-versus-getting-work-done dilemma of working programmers.