It really depends on your definition of "efficient".
If you mean memory-efficient, then you could use a stream reader so that you only have one line of text in memory at a time. Unfortunately, this is slower than loading the whole file at once, and it may keep the file locked while you read it.
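As a rough C# sketch of that approach (the file name and the "ERROR" search term are just placeholders for whatever you're actually looking for), opening the stream with FileShare.ReadWrite is one way to avoid holding an exclusive lock while you read:

```csharp
using System;
using System.IO;

class LineScanner
{
    static void Main()
    {
        // FileShare.ReadWrite lets other processes keep writing while we read;
        // drop it if you need an exclusive, consistent snapshot instead.
        using var stream = new FileStream("log.txt", FileMode.Open,
                                          FileAccess.Read, FileShare.ReadWrite);
        using var reader = new StreamReader(stream);

        string line;
        while ((line = reader.ReadLine()) != null)   // only one line in memory at a time
        {
            if (line.Contains("ERROR"))
                Console.WriteLine(line);
        }
    }
}
```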
If you mean in the shortest possible time, then this is a task that gains great benefit from a parallel approach: split the file into chunks and hand each chunk off to a different thread to process. Of course, that isn't especially CPU-efficient, since it may drive all of your cores to high utilization.
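One way to sketch that in C# (assuming the same placeholder file and search term as above) is to let the runtime hand out batches of lines to worker threads rather than seeking to byte offsets yourself:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class ParallelScanner
{
    static void Main()
    {
        int matches = 0;

        // File.ReadLines streams lines lazily; Partitioner batches them up
        // and Parallel.ForEach spreads the batches across the thread pool.
        Parallel.ForEach(
            Partitioner.Create(File.ReadLines("log.txt")),
            line =>
            {
                if (line.Contains("ERROR"))
                    Interlocked.Increment(ref matches);   // thread-safe counter
            });

        Console.WriteLine($"Found {matches} matching lines.");
    }
}
```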
If you are looking to do the least amount of work, is there anything you already know about the file? How often is it updated? Are the first 10 characters of each line always the same? If you looked at 100 lines last time, do you need to rescan those lines again? Any of these could yield huge savings in both time and memory.
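For example, the "don't rescan what you've already looked at" idea can be as simple as remembering how far you got last time and seeking past it on the next run. This is only a sketch; how you persist the saved offset (here a hypothetical offset.txt) is up to you:

```csharp
using System;
using System.IO;

class IncrementalScanner
{
    static void Main()
    {
        long lastOffset = ReadSavedOffset();   // hypothetical: where we stopped last run

        using var stream = new FileStream("log.txt", FileMode.Open,
                                          FileAccess.Read, FileShare.ReadWrite);
        stream.Seek(lastOffset, SeekOrigin.Begin);   // skip everything already scanned

        using var reader = new StreamReader(stream);
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            if (line.Contains("ERROR"))
                Console.WriteLine(line);
        }

        SaveOffset(stream.Position);   // we read to the end, so this is the new high-water mark
    }

    static long ReadSavedOffset() =>
        File.Exists("offset.txt") ? long.Parse(File.ReadAllText("offset.txt")) : 0;

    static void SaveOffset(long offset) =>
        File.WriteAllText("offset.txt", offset.ToString());
}
```

This only pays off for append-only files; if earlier lines can change, you're back to scanning the whole thing.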
At the end of the day, though, there is no magic bullet: searching a file is, in the worst case, an O(n) operation.
Sorry, having just re-read that, it may come across as sarcastic, and I don't mean it to be. I just want to emphasize that any gains you make in one area are likely to be losses elsewhere, and "efficient" is a very ambiguous term in circumstances like these.