I asked a question before about creating a shuffled range - rather than generating a list of numbers and then shuffling them, I wanted a function that could return the numbers of a range one at a time, in shuffled order, without the O(n) memory cost:
http://stackoverflow.com/questions/464476/generating-shuffled-range-using-a-prng-rather-than-shuffling
If you create some kind of index for your file on disk, then you can create a shuffled version without paying the memory cost, which can be important for very large files. For the index, I suggest something simple, like a flat stream of the position of every line start (as 32- or 64-bit integers). That way, to extract the Nth line of the text file, you simply seek in the index stream to N*4 (or N*8 for 64-bit offsets) to discover the offset of the line start, then seek to that position in the text file stream and read out a line.
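A minimal sketch of this in Python (the function names and the little-endian 64-bit offset format are my own choices, not anything prescribed):

```python
import struct

OFFSET_FMT = "<q"   # little-endian 64-bit signed integer
OFFSET_WIDTH = 8    # bytes per index entry

def build_index(text_path, index_path):
    """Write the byte offset of every line start to a flat index file."""
    with open(text_path, "rb") as text, open(index_path, "wb") as index:
        offset = 0
        for line in text:
            index.write(struct.pack(OFFSET_FMT, offset))
            offset += len(line)

def read_line(text, index, n):
    """Read the Nth line (0-based) via the index, using two seeks."""
    index.seek(n * OFFSET_WIDTH)                  # find the Nth offset
    (offset,) = struct.unpack(OFFSET_FMT, index.read(OFFSET_WIDTH))
    text.seek(offset)                             # jump to the line start
    return text.readline()
```

Both files are opened in binary mode so that offsets count raw bytes, which keeps the arithmetic correct regardless of encoding or platform line endings.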
Using this approach, you can shuffle extremely large files without paying the in-memory cost. Of course, shuffling means extracting lines at random from the source file, which will not be as efficient as shuffling in memory unless the file is very small (small enough to sit in the OS cache after the first access) or very large (in which case memory thrashing would be worse than random seeks), or unless you're not using a mechanical hard drive (e.g. an SSD, where random seeks are cheap).
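To make the shuffling part concrete, here's a hedged sketch that yields every line exactly once in random order. It still holds a permutation of line numbers in memory (a few bytes per line, not the file contents); you could replace `random.shuffle` with a PRNG-based shuffled range, as in the linked question, to avoid even that:

```python
import random
import struct

OFFSET_FMT = "<q"   # little-endian 64-bit offsets, one per line
OFFSET_WIDTH = 8

def shuffled_lines(text, index, num_lines):
    """Yield each line of `text` exactly once, in random order.

    Only the permutation of line numbers is kept in memory;
    the lines themselves are read from disk on demand.
    """
    order = list(range(num_lines))
    random.shuffle(order)
    for n in order:
        index.seek(n * OFFSET_WIDTH)
        (offset,) = struct.unpack(OFFSET_FMT, index.read(OFFSET_WIDTH))
        text.seek(offset)
        yield text.readline()
```

Each yielded line costs two seeks (one in the index, one in the text file), which is exactly the random-access pattern discussed above.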
For your situation, 10K lines is really not a large number. Something in the region of 10 million lines, perhaps running to several gigabytes of text (depending on line length, of course), would be far more challenging, and that's where this approach (or something similar) becomes necessary on a 32-bit system.