What is the approximate ratio of time you typically spend debugging high-level versus low-level bugs?
For the purposes of this discussion, high-level bugs are things like incorrect algorithms, bad assumptions about input data and/or the operating environment, cases overlooked in the initial implementation, and forgotten boundary conditions or edge cases. They can also include poor design decisions that bake limitations into the code and are non-trivial to change, even though that stretches the definition of a bug. In other words, high-level bugs are caused by not meaning what you should have meant.
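To make that concrete, here's a minimal, purely illustrative C sketch (the function and scenario are my own invention, not from anyone's real code): the loop is coded exactly as intended, but the intent itself overlooks the empty-input case.

```c
#include <stdio.h>

/* Illustrative high-level bug: max_of() silently assumes the array is
 * non-empty. Calling it with n == 0 reads arr[0] before the loop ever
 * runs -- a bad assumption about input data, not a slip of the fingers. */
int max_of(const int *arr, int n)
{
    int max = arr[0];   /* no one decided what n == 0 should mean */
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    return max;
}

int main(void)
{
    int vals[] = {3, 9, 4};
    printf("%d\n", max_of(vals, 3));   /* fine: prints 9 */
    /* max_of(vals, 0) would be undefined behavior -- the overlooked case */
    return 0;
}
```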
Low-level bugs are things like memory management and corruption problems, null pointer dereferences that aren't symptoms of a high-level bug, subtle syntax errors that slip past the compiler, and off-by-one errors when working with arrays. In other words, they are bugs caused by not coding what you meant.
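For contrast, here's an equally contrived C sketch of a low-level bug: the intent (zero every element) is exactly right, but the loop condition doesn't say what was meant.

```c
#include <stdio.h>

/* Illustrative low-level bug: the programmer meant "for each of the 8
 * elements", but typed <= instead of <, so the loop writes one element
 * past the end of the array. */
int main(void)
{
    int buf[8];
    for (int i = 0; i <= 8; i++)   /* off-by-one: should be i < 8 */
        buf[i] = 0;                /* buf[8] is out of bounds */
    printf("done\n");
    return 0;
}
```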
Also, for the sake of comparison: what language(s) do you primarily use, and what type of programming do you mostly do?