I wanted to move an SQL file related to some of my code to my staging server. I overlooked the fact that it was 1 GB in size and committed the file. I don't remember if the commit ever finished, but I continued committing other changes from different files. Now when I fetch or pull the changes from development on my staging server, I get an out-of-memory error. I watched top on my dev system, and all of the memory is being used during the fetch.

Is there a way to have Git totally forget I ever committed the large SQL file? My dev system is macOS and the staging server is Linux.

+2  A: 

Yes, but you'll have to tell anyone who pulled to re-clone the repository to avoid headaches later. What you need to do is an interactive rebase on the branch containing the commit, and delete that commit. The commit will stick around for about 30 days (by default) from the moment no other reference (branch, tag, reflog entry, etc.) points to it, which gives you enough time to change your mind later. Of course, if you wish, you can expire the reflog and then garbage-collect the repository to shrink it right away.

For example:

git checkout contaminated-branch
git rebase -i HEAD~100  # if 100 commits ago is long enough

An editor will then open, listing the last 100 commits. Find the commit where you added the large file and delete its line from the listing, as sketched below. Save and quit, and Git will rewrite the branch without that commit.

This will invalidate the history in everyone's clone if they pulled after you committed that large file. Make sure to be nice to them today.

wilhelmtell