Hi community,
I have to search a huge number of text files (spread across a Unix server's disks) for a given string (it's a requirement). Given the time and resources this will take, producing nothing more than a list of files that contain the token in question feels like a meager result compared to the investment.
This feels wrong.
Since I will have to read all these files anyway, wouldn't it be more worthwhile to build an index of their content, at least for statistics?
How can I do that? What tool should I use?
Any hints appreciated :)
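To make the idea concrete, here is a minimal sketch of the kind of index I have in mind, written in Python (assuming Python is available on the server; the tokenization rule and the skipping of unreadable files are just placeholder choices). It walks a directory tree once and builds an inverted index mapping each word to the set of files containing it, so later lookups are a dictionary access instead of a full rescan:

```python
import os
import re
from collections import defaultdict

def build_index(root):
    """Walk `root` once and map each word to the set of files containing it."""
    index = defaultdict(set)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # errors="ignore" skips undecodable bytes in mixed-encoding files
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for line in fh:
                        for word in re.findall(r"\w+", line.lower()):
                            index[word].add(path)
            except OSError:
                continue  # unreadable file: skip and move on
    return index

# Usage: a lookup is then a plain dict access instead of a rescan.
# index = build_index("/var/data")
# index.get("mytoken", set())  # -> set of paths containing "mytoken"
```

This is obviously naive (everything lives in memory, no persistence, no stemming), so pointers to a proper tool that does the same thing at scale would be very welcome.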