Simple brute-force solution (very little memory consumption):
Do an O(n^2) pass through the file, removing duplicate lines. Speed: O(n^2), Memory: constant
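A minimal sketch of that quadratic pass in Python (the file names are placeholders, not from the original):

    import itertools

    # For every line, re-scan the earlier part of the file and keep the
    # line only on its first occurrence: constant memory, O(n^2) reads.
    def dedupe_quadratic(src_path, dst_path):
        with open(src_path) as outer, open(dst_path, "w") as dst:
            for i, line in enumerate(outer):
                with open(src_path) as inner:
                    seen_before = any(prev == line
                                      for prev in itertools.islice(inner, i))
                if not seen_before:
                    dst.write(line)

    dedupe_quadratic("input.txt", "output.txt")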
Fast (but poor memory consumption):
Stefan Kendall's solution: hash each line, store the hashes in a map of some sort, and drop any line whose hash already exists. Speed: O(n), Memory: O(n)
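Roughly like this (keeping a set of the lines themselves is the simplest form of the idea; storing only hashes cuts memory further but risks dropping distinct lines that happen to collide):

    # Remember each line seen so far and skip repeats in a single pass.
    def dedupe_hashed(src_path, dst_path):
        seen = set()
        with open(src_path) as src, open(dst_path, "w") as dst:
            for line in src:
                if line not in seen:
                    seen.add(line)
                    dst.write(line)

    dedupe_hashed("input.txt", "output.txt")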
If you're willing to sacrifice file order (I assume not, but I'll mention it anyway):
You can sort the lines, then make one pass removing adjacent duplicates. Speed: O(n*log(n)), Memory: constant
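A sketch of the sort-then-scan version (note that sorted() here pulls the whole file into memory, so the constant-memory figure really assumes an in-place or external sort; this only illustrates the dedupe step):

    # Sort the lines, then keep only the first line of each equal run.
    def dedupe_sorted(src_path, dst_path):
        with open(src_path) as src:
            lines = sorted(src)
        prev = None
        with open(dst_path, "w") as dst:
            for line in lines:
                if line != prev:
                    dst.write(line)
                prev = line

    dedupe_sorted("input.txt", "output.txt")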
edit:
If you don't like the idea of sorting the file contents or trying to maintain unique hashes but can handle O(n) memory usage: you can identify each line by its 32-bit or 64-bit position marker (depending on the file's size) and sort the file positions instead of the file contents (see the sketch after the caveat below).
edit #2: caveat: in-memory sorting of lines with different lengths is harder than doing it to, say, an array of ints... actually, thinking about how the memory would have to shift and move in a merge step, I'm second-guessing my ability to sort a file like that in n*log(n)
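For what it's worth, here's a rough sketch of the position-marker idea: only the byte offsets live in memory (plain integers, so the variable-length problem above goes away), and the sort re-reads line text from disk on every comparison. The file name is a placeholder.

    import functools

    def unique_lines_by_offset(path):
        with open(path, "rb") as f:
            # Record the byte offset of every line start.
            offsets = []
            pos = f.tell()
            while f.readline():
                offsets.append(pos)
                pos = f.tell()

            def line_at(off):
                f.seek(off)
                return f.readline()

            # Sort the offsets by the line text they point to,
            # re-reading from disk for each comparison.
            def cmp(a, b):
                la, lb = line_at(a), line_at(b)
                return (la > lb) - (la < lb)
            offsets.sort(key=functools.cmp_to_key(cmp))

            # Keep the first offset of each run of equal lines, then
            # restore the original file order before emitting.
            unique, prev = [], None
            for off in offsets:
                text = line_at(off)
                if text != prev:
                    unique.append(off)
                    prev = text
            unique.sort()
            return [line_at(off).decode() for off in unique]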