I agree completely with what hhafez and nos said. You're going down the right path with a logging package instead of trying to roll your own; it's much cleaner and easier to get right. Logging to a text file is also much easier to manage long term (given typical project skill sets) than DB logging, though if you're planning any complex analysis of the reported data, it can be simpler to have it sitting in a DB already.
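To make that concrete, here's a minimal sketch of what "just use the package" looks like. I'm assuming Python's standard logging module since the question doesn't pin down a language; log4j, log4net and the rest all follow the same basic shape.

    import logging

    # One-time setup, typically done once at your application's entry point.
    logging.basicConfig(
        filename="app.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )

    # Each module asks for its own named logger; no hand-rolled file handling.
    log = logging.getLogger(__name__)

    log.info("order %s submitted", 12345)
    log.error("payment gateway timed out after %d ms", 3000)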
If debugging is one of your stated objectives for implementing a logging solution, then it's imperative that you standardize all your log levels up front and make them part of your code review process. Keep enough separation between the granularities that you can increase the depth of reporting one level at a time, as in the sketch below. It's very frustrating to be troubleshooting a PROD problem, find you don't have enough log info to see it, bump the logging up one level, and then swamp the logs with so much spew that you can't see the forest for the trees (and your logs roll every 5 minutes because of the volume). I've seen it happen.
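Here's a rough sketch of what a standardized, dial-up-able convention might look like, again assuming Python. The level meanings and the APP_LOG_LEVEL variable are just placeholders for whatever your team agrees on.

    import logging
    import os

    # Hypothetical team-wide convention -- adjust to your project:
    #   ERROR   - someone needs to act on it
    #   WARNING - suspicious but recovered
    #   INFO    - one line per significant business event
    #   DEBUG   - per-step detail, off in PROD by default
    #
    # APP_LOG_LEVEL is a made-up config knob; use whatever mechanism you prefer.
    level_name = os.environ.get("APP_LOG_LEVEL", "INFO")
    logging.basicConfig(level=getattr(logging, level_name.upper(), logging.INFO))

    log = logging.getLogger("orders")
    log.info("processing order %s", 42)        # visible at the default level
    log.debug("raw payload: %r", {"id": 42})   # only shows up once you bump to DEBUG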
In most cases of text file logging, performance should not be an issue. It's a little trickier with DB logging: a single insert is only slightly more expensive than appending a line to a text file, but the sustained volume per unit of time is what makes it ugly at scale.
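One common mitigation is to batch the writes instead of paying the I/O cost on every log call. Here's a sketch using Python's stdlib MemoryHandler in front of a plain file handler; the same batching idea is what keeps per-row DB inserts manageable, though for that you'd need a DB-aware handler. The capacity of 200 is just an arbitrary example.

    import logging
    from logging.handlers import MemoryHandler

    # Buffer records in memory and flush them in batches so each log call
    # doesn't pay the full write cost. flushLevel=ERROR forces an immediate
    # flush the moment something actually goes wrong.
    file_handler = logging.FileHandler("app.log")
    buffered = MemoryHandler(capacity=200, flushLevel=logging.ERROR, target=file_handler)

    log = logging.getLogger("batch")
    log.addHandler(buffered)
    log.setLevel(logging.INFO)

    for i in range(1000):
        log.info("processed record %d", i)   # hits the file roughly 200 records at a time

    buffered.flush()   # flush whatever is left over at shutdown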
Also, if you're going to do any offline log analysis, you ought to pick a log file format that's easily extensible and won't require tremendous changes to the analysis code if you need to add something to the log. Stay away from nested, multi-part message structures. Parsing those gets to be a pain.
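Something flat, like one JSON object per line or simple key=value pairs, is easy to extend without breaking the parser. A rough sketch, again in Python, with a hypothetical JsonLineFormatter:

    import json
    import logging

    class JsonLineFormatter(logging.Formatter):
        """Write one flat JSON object per line: trivial to parse, and adding
        a new field later doesn't break existing analysis code."""
        def format(self, record):
            entry = {
                "ts": self.formatTime(record),
                "level": record.levelname,
                "logger": record.name,
                "msg": record.getMessage(),
            }
            return json.dumps(entry)

    handler = logging.FileHandler("app.jsonl")
    handler.setFormatter(JsonLineFormatter())

    log = logging.getLogger("analysis")
    log.addHandler(handler)
    log.warning("disk usage at %d%%", 91)
    # Each line in app.jsonl is one flat JSON object the analysis code can
    # read with json.loads(), field by field.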
Good luck with it!