tags:
views: 461
answers: 9

What is an efficient algorithm to remove all duplicates in a string? For example, if I have aaaabbbccdbdbcd, I will get back abcd.

+14  A: 

You use a hashtable to store currently discovered keys (O(1) access) and then loop through the string. If a character is already in the hashtable, discard it. If it isn't, add it to the hashtable and to a result string.

Overall: O(n) time (and space).

The naive solution is to search for the character in the result string as you process each one. That's O(n^2).
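
A minimal sketch of this approach in C++ (the choice of std::unordered_set as the hashtable and the helper name removeDuplicates are illustrations, not part of the answer):

#include <iostream>
#include <string>
#include <unordered_set>

// Remove duplicate characters, keeping first occurrences: O(n) expected time.
std::string removeDuplicates(const std::string& input) {
    std::unordered_set<char> seen;   // hashtable of discovered characters
    std::string result;
    for (char c : input) {
        if (seen.insert(c).second) { // insert succeeds only for a new character
            result += c;
        }
    }
    return result;
}

int main() {
    std::cout << removeDuplicates("aaaabbbccdbdbcd") << std::endl; // prints "abcd"
}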

cletus
+1. Or, if they have access to it, HashSet: http://msdn.microsoft.com/en-us/library/bb495294.aspx
astander
If you have a large string compared to the number of possible character values (e.g. if it is ASCII), you might use an array of bools instead of a hashtable.
Ritsaert Hornstra
The best case to retrieve a value from a hashtable is O(1) and the worst case O(n). The overall worst-case complexity for the algorithm is O(n^2).
Thomas Jung
That is irrelevant in this case, as by definition of the algorithm you have either 0 or 1 items for each hash key.
jk
@jk The hashtable always has 0 or 1 entries per key. The worst case O(n) is when all n values land in one bucket.
Thomas Jung
@Thomas Jung: For this problem computing a Perfect Hashing function is easy (typically the ASCII or at worst the Unicode Code Point value) therefore you perform access in `O(1)`.
Matthieu M.
@Matthieu Not exactly. Suppose you have a perfect hash function from char to a 2-byte int. This is easy. Now suppose your hashtable's size is smaller than 2^16, say 15. When you enter 2 values it is quite probable that you will have a collision (1/15 for the second value). The index is some complicated version of idx = hash % size. If you want absolutely no collisions you have to create a hashtable of size 2^16.
Thomas Jung
@Matthieu I've realized that I've cut corners a bit. You can of course create a perfect hash function for hashtables of size < 2^16. Cuckoo hashing, for example, has O(1) worst-case access complexity but worst-case O(n) for puts. I suppose there is no hashtable that has worst-case complexity of O(1) for all operations.
Thomas Jung
It depends on the input size: if you can manage to have an upper bound for the size of a bucket, then you can always pretend to be `O(1)`, even though it could be daunting :x Here it seems easy enough for ASCII characters (256 of them), and of course a bit more difficult if you wish to take all the Unicode code points into account, yet with a sufficiently big bitset you could have good performance without too much memory (server-scale)
Matthieu M.
+1  A: 

Keep an array of 256 "seen" booleans, one for each possible character. Stream your string. If you haven't seen the character before, output it and set the "seen" flag for that character.
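
A short sketch of this idea, assuming 8-bit characters (hence the 256 flags):

#include <iostream>
#include <string>

int main() {
    bool seen[256] = {false};        // one "seen" flag per possible byte value
    std::string input = "aaaabbbccdbdbcd";
    std::string output;
    for (char c : input) {
        unsigned char idx = static_cast<unsigned char>(c);
        if (!seen[idx]) {            // first occurrence of this character
            seen[idx] = true;
            output += c;
        }
    }
    std::cout << output << std::endl; // abcd
}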

SPWorley
It has not been said what encoding is used, though.
skwllsp
A: 
string newString = new string("aaaaabbbbccccdddddd".ToCharArray().Distinct().ToArray());

or

char[] characters = "aaaabbbccddd".ToCharArray();
string result = string.Empty;
foreach (char c in characters)
{
    if (result.IndexOf(c) < 0)
        result += c.ToString();
}
Amgad Fahmi
O(n^2) isn't very efficient... (once the data set gets big enough). For small strings this is probably faster than a hashset-based lookup though.
Rob Fonseca-Ensor
I agree. What about the new one?
Amgad Fahmi
String concatenation in a loop will be slower than searching for the character within the string...
ck
+2  A: 

In Python

>>> ''.join(set("aaaabbbccdbdbcd"))
'acbd'

If the order needs to be preserved

>>> q="aaaabbbccdbdbcd"                    # this one is not
>>> ''.join(sorted(set(q),key=q.index))    # so efficient
'abcd'

or

>>> S=set()
>>> res=""
>>> for c in "aaaabbbccdbdbcd":
...  if c not in S:
...   res+=c
...   S.add(c)
... 
>>> res
'abcd'

or

>>> S=set()
>>> L=[]
>>> for c in "aaaabbbccdbdbcd":
...  if c not in S:
...   L.append(c)
...   S.add(c)
... 
>>> ''.join(L)
'abcd'

In Python 3.1

>>> from collections import OrderedDict
>>> ''.join(list(OrderedDict((c,0) for c in "aaaabbbccdbdbcd").keys()))
'abcd'
gnibbler
I knew set would be awesome for this, but I'm new to python and was trying to figure out how to join them while you posted this... Now I know!
Carson Myers
This doesn't preserve order.
recursive
@recursive, I added some order preserving options
gnibbler
+3  A: 

This is closely related to the question: Detecting repetition with infinite input.

The hashtable approach may not be optimal depending on your input. Hashtables have a certain amount of overhead (buckets, entry objects), which is huge compared to the actual stored char. (If your target environment is Java it is even worse, as the HashMap is of type Map<Character,?>.) The worst-case runtime for a hashtable access is O(n) due to collisions.

You need only 8 KB to represent all 2-byte Unicode characters in a plain BitSet. This may be optimized if your input character set is more restricted, or by using a compressed BitSet (as long as the set stays sparse). The runtime performance of a BitSet is favorable: access is O(1).
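
As a rough illustration of the idea (the answer discusses Java's java.util.BitSet; this sketch uses C++'s std::bitset instead, and the UTF-16 input type is an assumption):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    // 2^16 bits = 8 KB: one "seen" bit per possible 2-byte character value.
    std::bitset<65536> seen;
    std::u16string input = u"aaaabbbccdbdbcd";
    std::u16string output;
    for (char16_t c : input) {
        if (!seen.test(c)) {    // O(1) membership test
            seen.set(c);
            output += c;
        }
    }
    std::cout << "unique characters: " << output.size() << std::endl; // 4
}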

Thomas Jung
I am afraid to mention that you are mixing (somehow) concepts and implementations. I view the fact that you are using a `BitSet` to implement your own `HashTable` as proof that the `HashTable` is a perfectly viable solution.
Matthieu M.
@Matthieu Using a hashtable or a BitSet has certain trade-offs. The hashtable works best for small sets of characters. The BitSet works best when the number of characters is large or can be restricted to a known range. A BitSet is not a hashtable. The hashtable here is used as a set, as someone mentioned, and the BitSet is used analogously. That you can replace one with the other does not mean that they are equally good solutions.
Thomas Jung
A: 

In C++, you'd probably use an std::set:

std::string input("aaaabbbccddd");
std::set<char> unique_chars(input.begin(), input.end());

In theory you could use std::unordered_set instead of std::set, which should give O(N) expected overall complexity (though O(N^2) worst case), whereas this one is O(N lg M) (where N = total number of characters, M = number of unique characters). Unless you have long strings with a lot of unique characters, the std::set version will probably be faster though.
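
For comparison, a sketch of the unordered_set variant described above; it is a drop-in swap, though the result loses the sorted iteration order that std::set provides:

#include <string>
#include <unordered_set>

std::string input("aaaabbbccddd");
std::unordered_set<char> unique_chars(input.begin(), input.end());
// unique_chars now holds {a, b, c, d}, in unspecified order.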

Jerry Coffin
A: 

You can sort the string and then remove the duplicate characters.

#include <iostream>
#include <algorithm>
#include <string>

int main()
{
    std::string s = "aaaabbbccdbdbcd";

    std::sort(s.begin(), s.end());
    s.erase(std::unique(s.begin(), s.end()), s.end());

    std::cout << s << std::endl;
}
FredOverflow
A: 

This sounds like a perfect use for automata.

jbrennan
A: 

C++ - O(n) time, O(1) space, and the output is sorted.

#include <limits>
#include <string>
#include <vector>

std::string characters = "aaaabbbccddd";
// One flag per possible char value (note the +1 so the maximum value has a slot).
std::vector<bool> seen(std::numeric_limits<char>::max() - std::numeric_limits<char>::min() + 1);

for(std::string::iterator it = characters.begin(), endIt = characters.end(); it != endIt; ++it) {
  seen[(*it) - std::numeric_limits<char>::min()] = true;
}

characters = "";
// Iterate with an int so the loop can include the maximum char value without overflowing.
for(int ch = std::numeric_limits<char>::min(); ch <= std::numeric_limits<char>::max(); ++ch) {
  if( seen[ch - std::numeric_limits<char>::min()] ) {
    characters += static_cast<char>(ch);
  }
}
Joe Gauterin