I have a text file that contains a long list of entries (one on each line). Some of these are duplicates, and I would like to know if it is possible (and if so, how) to remove any duplicates. I am interested in doing this from within vi/vim, if possible.
Try this:
:%s/^\(.*\)\n\1$/\1/
Make a copy before you try it, though; it's untested. Note that it only collapses adjacent duplicates, and with more than two consecutive copies of a line you may need to run it more than once.
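That substitute pattern joins a line with an identical following line, which is essentially what uniq does for adjacent duplicates. If you want to sanity-check the behavior from the shell first (the sample file here is made up):

```shell
#!/bin/sh
# Hypothetical sample data: some duplicates adjacent, some not.
printf 'apple\napple\nbanana\napple\nbanana\nbanana\n' > /tmp/demo_dups.txt

# uniq collapses only ADJACENT duplicate lines,
# mirroring what the :%s pattern above catches.
uniq /tmp/demo_dups.txt
```

Note that the non-adjacent repeats of "apple" and "banana" survive, just as they would with the substitute command.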
Select the lines in visual-line mode (Shift-V), then :!uniq. That will only catch duplicates that come one after another.
I would use !}uniq, but that only works if there are no blank lines (the } motion stops at the next blank line). To run it over every line in the file, use :1,$!uniq (or the equivalent :%!uniq).
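Filtering the whole buffer through uniq is the same as running uniq over the file from the shell, and it has the same limitation: it only merges runs of identical lines. A quick illustration (sample data is hypothetical):

```shell
#!/bin/sh
# Hypothetical sample data with a non-adjacent duplicate.
printf 'one\ntwo\none\none\n' > /tmp/uniq_demo.txt

# uniq alone: the adjacent pair of 'one' merges,
# but the non-adjacent 'one' survives.
uniq /tmp/uniq_demo.txt

# Sorting first makes all duplicates adjacent, so uniq removes them all.
sort /tmp/uniq_demo.txt | uniq
```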
I would combine two of the answers above:
go to head of file
sort the whole file
remove duplicate entries with uniq
1G
!Gsort
1G
!Guniq
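The four normal-mode commands above amount to filtering the whole buffer through sort and then uniq. The equivalent shell pipeline, run on a hypothetical file, would be:

```shell
#!/bin/sh
# Hypothetical unsorted file with duplicates.
printf 'b\na\nb\nc\na\n' > /tmp/sortuniq_demo.txt

# Same effect as 1G !Gsort 1G !Guniq inside vim:
sort /tmp/sortuniq_demo.txt | uniq

# sort -u does both steps in one pass.
sort -u /tmp/sortuniq_demo.txt
```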
If you are interested in seeing how many duplicate lines were removed, press Ctrl-G before and after to check the number of lines present in your buffer.
Regarding how Uniq can be implemented in VimL, search for Uniq in a plugin I'm maintaining; you'll see various ways to implement it that were given on the Vim mailing list.
Otherwise, :sort u
is indeed the way to go.
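One thing :sort u cannot do is keep the original line order, since it sorts the buffer. If order matters, a common shell-side alternative (not from the answers above, just the classic awk idiom) is:

```shell
#!/bin/sh
# Hypothetical file with out-of-order duplicates.
printf 'pear\nplum\npear\nfig\nplum\n' > /tmp/awk_demo.txt

# Keep only the first occurrence of each line, preserving order:
# seen[$0]++ is 0 (false) the first time a line appears, so !seen[$0]++
# is true exactly once per distinct line.
awk '!seen[$0]++' /tmp/awk_demo.txt
```

Inside vim you could apply it to the whole buffer with :%!awk '\!seen[$0]++' (the ! needs escaping on the vim command line).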
g/^\(.*\)$\n\1$/d
Works for me on Windows. Lines must be sorted first, though. (The trailing $ after \1 matters: without it, a line that is merely a prefix of the following line would also be deleted.)