It's simple math, yes? Let's say you're writing an application named "Thingy".
X = how smart you are.
S(WriteThingy) = how smart you need to be to write the code for Thingy.
S(DebugThingy) = how smart you need to be to debug the code for Thingy.
Debugging is twice as hard as writing
the code in the first place.
So we get:
S(DebugThingy) = 2 * S(WriteThingy)
Given that:
you write the code as cleverly as possible,
We have:
X = S(WriteThingy)
Which means that writing Thingy takes every bit of smarts you have, with nothing in reserve.
And since, for any positive amount of smarts:
S(WriteThingy) < 2 * S(WriteThingy)
We get:
X = S(WriteThingy) < 2 * S(WriteThingy) = S(DebugThingy)
Or:
X < S(DebugThingy)
Which is basically what he said:
you are, by definition, not smart enough to debug it.
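To see the same argument with numbers, here is a minimal sketch in Python; the smartness scores are made up purely for illustration:

    # Hypothetical smartness scores, purely for illustration.
    S_WriteThingy = 10                 # smarts needed to write Thingy
    S_DebugThingy = 2 * S_WriteThingy  # debugging is twice as hard as writing

    # "As cleverly as possible": writing Thingy used all the smarts you have.
    X = S_WriteThingy

    # X = S(WriteThingy) < 2 * S(WriteThingy) = S(DebugThingy)
    print(f"X = {X}, S(DebugThingy) = {S_DebugThingy}")
    print("Smart enough to debug it:", X >= S_DebugThingy)  # prints False

Whatever number you pick for S_WriteThingy, as long as it is positive, the comparison at the end comes out False, which is just the inequality X < S(DebugThingy) restated.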