If you import the CSV into a table, you'll find it a great deal easier to work with, and much faster than working off the CSV directly. There's an example of how to do this on CodeProject.
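As a rough sketch of the idea (using Python's sqlite3 purely for illustration; the file name upload.csv, database upload.db, and table name file_items are all placeholders, and the CSV is assumed to have a header row):

    import csv
    import sqlite3

    conn = sqlite3.connect("upload.db")

    with open("upload.csv", newline="") as f:
        reader = csv.reader(f)
        headers = next(reader)  # first row = column names
        cols = ", ".join(f'"{h}"' for h in headers)
        placeholders = ", ".join("?" for _ in headers)

        # Column names come straight from the file here; in real code you'd
        # validate them against your expected schema first.
        conn.execute(f"CREATE TABLE IF NOT EXISTS file_items ({cols})")
        conn.executemany(
            f"INSERT INTO file_items ({cols}) VALUES ({placeholders})",
            reader,  # stream the remaining rows straight from the CSV
        )

    conn.commit()
    conn.close()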
You can then page through these records using server-side paging for efficiency.
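For example (continuing the sketch above; LIMIT/OFFSET is the simplest form of server-side paging, though keyset paging holds up better on very deep pages):

    import sqlite3

    def fetch_page(conn, page, page_size=50):
        """Return one page of rows; only page_size rows are read per request."""
        offset = page * page_size
        return conn.execute(
            "SELECT rowid, * FROM file_items ORDER BY rowid LIMIT ? OFFSET ?",
            (page_size, offset),
        ).fetchall()

    conn = sqlite3.connect("upload.db")
    first_page = fetch_page(conn, page=0)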
Update:
If the CSV matches a defined schema, then you wouldn't have to create the tables dynamically. I'd have two tables: one to store a reference to the File (UniqueId, filename/path, UserId), and another for the FileItems (your schema, plus File.UniqueId as a foreign key). That way you can lock the file to the user that is accessing it, as in the sketch below. Locking and concurrency issues are a separate matter, and there are lots of ways you could approach that.
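A sketch of that layout (again illustrative: the column names beyond UniqueId, FilePath, and UserId are placeholders for whatever your actual CSV schema holds):

    import sqlite3

    conn = sqlite3.connect("upload.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS File (
            UniqueId    INTEGER PRIMARY KEY,
            FilePath    TEXT NOT NULL,
            UserId      INTEGER NOT NULL    -- who the file is locked to
        );

        CREATE TABLE IF NOT EXISTS FileItems (
            ItemId      INTEGER PRIMARY KEY,
            FileId      INTEGER NOT NULL REFERENCES File(UniqueId),
            -- ... one column per field in your CSV schema ...
            Col1        TEXT,
            Col2        TEXT
        );
    """)
    conn.commit()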
If you want to perform operations directly on 60k records in a CSV file, you will find it slow. There is no way around that unless you cache the dataset from the CSV and work against the cache. To me, the database is a better medium than a cache: it gives you a historical record of what has been done, and if anything errors while you are working in the Grid, the data is already persisted in the database without having to re-upload the CSV file.