Setting the entire buffer to NUL characters is a "defense in depth." Such a defense covers for a mistake made elsewhere in the source code, perhaps by a different programmer. In this case, the mistake guarded against would be copying a string that does fit in the buffer but failing to copy its NUL termination byte. The already-zeroed buffer would provide terminating NULs for the mistaken string copy to "use." Programmers differ on the wisdom of "defense in depth," because such coding can mask programming errors, which are then allowed to fester in the source code -- being fixed only long after they were introduced.
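Here's a minimal sketch of the masking effect, assuming a copy that forgets the terminator (the buffer size and string are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[16];
    memset(buf, 0, sizeof buf);     /* the "defense in depth" zeroing */

    const char *src = "hello";
    memcpy(buf, src, strlen(src));  /* bug: forgets the NUL terminator */

    /* Works anyway: buf[5] is already '\0' from the memset, so the
       missing terminator is silently papered over. */
    printf("%s\n", buf);
    return 0;
}
```

Remove the memset and the printf reads whatever garbage follows "hello" -- which is exactly how the bug would have been noticed.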
In my personal opinion, setting the buffer to all NUL characters like this as a "defense in depth" is a huge waste. It would make more sense to NUL only the final byte: errors would show through, but strings would still eventually be terminated. Once you start down that path of thought, the "defense in depth" would make more sense if the buffer were made two machine words longer, those words were zeroed out, and possibly a canary value could report an overrun of the buffer, and.... A sketch of that idea follows.
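Something like this, where the struct name, the helper functions, and the sentinel value are all hypothetical, not any particular library's API:

```c
#include <assert.h>
#include <stdint.h>

#define BUF_LEN 64
#define CANARY  0xDEADBEEFCAFEF00Dull   /* arbitrary sentinel value */

struct guarded_buf {
    char     data[BUF_LEN];
    uint64_t canary;              /* extra machine word past the buffer */
};

static void guarded_init(struct guarded_buf *g) {
    g->data[BUF_LEN - 1] = '\0';  /* NUL only the final byte */
    g->canary = CANARY;
}

static void guarded_check(const struct guarded_buf *g) {
    /* A clobbered canary means something wrote past data[]. */
    assert(g->canary == CANARY);
}
```

Call guarded_check after any operation that writes into data, and the overrun reports itself instead of hiding.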
Or you could just not overrun buffers, and write your program so that it crashes as quickly as possible if you do. That's what I like to do.
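In that spirit, a hypothetical checked-copy helper (the name and policy are my own, just to illustrate crashing early):

```c
#include <stdlib.h>
#include <string.h>

/* copy_or_die: refuse to truncate or overrun; abort loudly instead. */
static void copy_or_die(char *dst, size_t dst_len, const char *src) {
    size_t n = strlen(src);
    if (n + 1 > dst_len)
        abort();                 /* crash as quickly as possible */
    memcpy(dst, src, n + 1);     /* the copy includes the NUL terminator */
}
```

Every copy either succeeds with a terminated string or kills the program on the spot, so there is nothing left for a zeroed buffer to mask.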