I wanted to move a SQL file related to some of my code over to my staging server. I overlooked the fact that it was 1 GB in size and committed it. I don't remember whether that commit ever finished, but I went on to commit other changes to different files. Now, when I fetch or pull the changes from development on my staging server, I get an out-of-memory error. Watching top on my dev system, I can see all of the memory being consumed during the fetch.
Is there a way to make Git completely forget I ever committed the large SQL file? My dev system is macOS and the staging server is Linux.