Do you think analyzing generated logs during testing cycles can increase quality?
I don't think there is a good general answer to that question: it depends.
Analyzing logs can be very valuable if: a) there is good information to be mined, and b) the information is analyzed by people or machines that understand the bottom-line business logic. Without those two ingredients, log analysis can be a resource (time and money) sinkhole.
Are you thinking of integration testing or unit testing? I would think that logging anomalous events during integration testing, i.e., potential errors that your code logs and recovers from rather than crashing, might be useful. Run your integration tests, then check whether any unexpected anomalies show up even though your tests pass -- or, if they fail, use the logs to help trace the failures. A rough sketch of that post-run check is below.
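To make that concrete, here is a minimal sketch of a post-run log scan. The log path and the severity keywords are assumptions about your setup, not a prescription:

```python
import re
import sys
from pathlib import Path

# Hypothetical log location and severity markers -- adjust to your setup.
LOG_FILE = Path("logs/integration_test.log")
ANOMALY_PATTERN = re.compile(r"\b(WARN|WARNING|ERROR|EXCEPTION)\b")

def find_anomalies(log_path: Path) -> list[str]:
    """Return log lines that look like recovered errors or warnings."""
    return [line.rstrip()
            for line in log_path.read_text().splitlines()
            if ANOMALY_PATTERN.search(line)]

if __name__ == "__main__":
    anomalies = find_anomalies(LOG_FILE)
    for line in anomalies:
        print(line)
    # Fail this step if anything suspicious surfaced, even though the
    # functional tests themselves passed.
    sys.exit(1 if anomalies else 0)
```

Wired into CI as a step after the integration suite, this turns "the tests passed but the log was full of recovered errors" into a visible failure instead of something nobody reads.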
I can't really see where logs would be very useful in unit testing. Your unit tests should be prescriptive. You might want to check in a unit test that proper logging takes place for anomalous events, but I don't see how the logs could tell you anything that the test output doesn't.
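If you do want that "proper logging takes place" check, Python's standard `unittest` supports it directly via `assertLogs`. A sketch, where `parse_order` is a made-up example of code that logs and recovers rather than crashing:

```python
import logging
import unittest

logger = logging.getLogger("orders")

def parse_order(raw):
    """Hypothetical function that logs and recovers instead of crashing."""
    try:
        item, qty = raw.split(":")
        return {"item": item, "qty": int(qty)}
    except ValueError:
        logger.error("malformed order %r", raw)
        return None

class ParseOrderTests(unittest.TestCase):
    def test_malformed_input_is_logged(self):
        # assertLogs fails the test if no matching record is emitted.
        with self.assertLogs("orders", level="ERROR") as captured:
            self.assertIsNone(parse_order("garbage"))
        self.assertIn("malformed order", captured.output[0])

if __name__ == "__main__":
    unittest.main()
```

Note that this asserts on the logging behavior itself; it doesn't involve reading log files after the fact, which is the point above.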
My standard opinion is that logs are extremely useful every time they bring a problem to light, and not useful at all when they don't tell me anything valuable. It completely depends on what they contain. I don't use them for unit testing, but at integration testing time they can be quite valuable.
I've found logs particularly useful when the testing is being done by someone at a different location. On more than one occasion, I was unable to duplicate a problem, but after a look at the log data it was clear what was happening.
Absolutely; not so much for unit tests, as mentioned above, but if you set up a log analysis tool such as logwatch or Splunk that can send you a summary of errors found during the test run, you can definitely increase quality.
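Even without a full tool like Splunk, a small summarizer gets you most of the way for a single test run. A sketch, assuming a conventional "timestamp level message" log format (the format and file name are assumptions):

```python
import re
from collections import Counter

# Assumed line format: "2024-01-05 12:00:00 ERROR some message"
LINE = re.compile(r"^\S+ \S+ (ERROR|WARNING) (.+)$")

def summarize(log_text):
    """Count occurrences of each distinct error/warning message."""
    counts = Counter()
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m:
            counts[f"{m.group(1)}: {m.group(2)}"] += 1
    return counts

if __name__ == "__main__":
    with open("logs/test_run.log") as fh:
        for message, n in summarize(fh.read()).most_common(10):
            print(f"{n:5d}  {message}")
```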