I have a file with one column. How can I delete the repeated lines in this file?
If you're on *nix, try running the following command:
sort <file name> | uniq
On Unix/Linux, use the uniq command, as per David Locke's answer, or sort, as per William Pursell's comment.
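If you don't need to keep the original line order, either tool works; a quick sketch (the file names here are illustrative):

```shell
# Create a small sample input file; all file names are illustrative.
printf 'pear\napple\npear\n' > input.txt

sort input.txt | uniq > deduped.txt   # classic pipeline
sort -u input.txt > deduped.txt       # equivalent single command
```

Note that uniq only collapses adjacent duplicates, which is why the input must be sorted first; sort -u folds both steps into one.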
If you need a Python script:
lines_seen = set()  # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen:  # not a duplicate
        outfile.write(line)
        lines_seen.add(line)
outfile.close()
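If the file fits in memory, the same order-preserving deduplication can also be sketched with dict.fromkeys, which keeps the first occurrence of each key (guaranteed since Python 3.7); the file names below are placeholders:

```python
# Order-preserving dedup for a file that fits in memory.
# "input.txt" / "output.txt" are illustrative names.
with open("input.txt", "w") as f:        # create a small sample input
    f.write("b\na\nb\n")

with open("input.txt") as f:
    # dict.fromkeys keeps only the first occurrence of each line,
    # in the order the lines were first seen.
    unique_lines = list(dict.fromkeys(f))

with open("output.txt", "w") as f:
    f.writelines(unique_lines)
```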
Update: The sort/uniq combination will remove duplicates, but it returns a file with the lines sorted, which may or may not be what you want. The Python script above won't reorder the lines; it just drops duplicates. To get the script to sort as well, leave out the outfile.write(line) and instead, immediately after the loop, do outfile.writelines(sorted(lines_seen)).
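Putting that modification together, the sorted variant might look like this (file names are placeholders):

```python
# Sorted-output variant of the script above; file names are illustrative.
with open("input.txt", "w") as f:        # create a small sample input
    f.write("pear\napple\npear\n")

lines_seen = set()
with open("input.txt") as infile:
    for line in infile:
        lines_seen.add(line)

# Write the unique lines out in sorted order, after the loop.
with open("output.txt", "w") as outfile:
    outfile.writelines(sorted(lines_seen))
```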
uniqlines = set(open('/tmp/foo').readlines())
This will give you the set of unique lines.
Writing that back to some file would be as easy as:
bar = open('/tmp/bar', 'w')
bar.writelines(uniqlines)
bar.close()
Get all your lines into a list, make a set of those lines, and you are done. For example:
>>> x = ["line1","line2","line3","line2","line1"]
>>> list(set(x))
['line3', 'line2', 'line1']
>>>
Then write the content back to the file. Note that a set does not preserve the original line order.
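A full round trip along those lines might be sketched as follows (file names are illustrative):

```python
# Read lines, deduplicate via a set, write the result back.
# "lines.txt" / "lines_dedup.txt" are illustrative names.
with open("lines.txt", "w") as f:        # create a small sample input
    f.write("line1\nline2\nline3\nline2\nline1\n")

with open("lines.txt") as f:
    unique = set(f.readlines())          # set order is arbitrary

with open("lines_dedup.txt", "w") as f:
    f.writelines(unique)
```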