I have several large text files (csv's) that have redundant entries on some lines. That is, due to the way they were merged, a certain field will often contain the same value two or three times, though not always in the same order. The lines look like this:
BWTL, NEWSLETTER, NEWSLETTER
BWTL, NEWSLETTER, R2R, NEWSLETTER
MPWJ, OOTA HOST, OOTA HOST, OOTA HOST
OOTA HOST, ITOS, OOTA HOST
And so on. The duplicate entries that sit next to each other are easy enough to clean up with sed:
sed -i "" 's/NEWSLETTER, NEWSLETTER/NEWSLETTER/g' *.csv
Is there a similarly quick way to fix up the other duplicates?
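For illustration, the closest I can think of is an awk sketch along these lines (assumptions: fields are separated by ", ", the first occurrence of each value is kept, and file.csv / deduped.csv are placeholder names since it writes to a new file rather than in place), but I'd prefer something as simple as the sed call above if it exists:

awk -F', ' '{
    split("", seen)                       # clear the per-line lookup table
    out = ""
    for (i = 1; i <= NF; i++)
        if (!seen[$i]++)                  # keep only the first occurrence of a value
            out = (out == "" ? $i : out ", " $i)
    print out
}' file.csv > deduped.csv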