I got a bit of inspiration and thought about a solution with only speed in mind. It is really not that readable (which I usually prefer), but its speed characteristics should be pretty good.
The worst case is O(n), the same as for most of the other implementations, but it is highly unlikely to occur: it would require the first half of the elements to all be equal and the second half to all be equal to one another but different from the value in the first half, and it would take the same number of comparisons as a linear search.
In most other cases, with the first odd element in a random place, this will require half as many comparisons as the linear search.
In the case where the values come in pairs, so that item[0] == item[1] and item[2] == item[3] but item[0] != item[2] (and so on), the linear search will be faster.
In general, with either random data or few odd ones out, this should be faster than a linear search.
public static bool AllSame<T>(this IEnumerable<T> source,
    IEqualityComparer<T> comparer = null)
{
    if (source == null)
        throw new ArgumentNullException("source", "source cannot be null.");
    if (comparer == null)
        comparer = EqualityComparer<T>.Default;
    return source.Zip(comparer);
}
private static bool Zip<T>(this IEnumerable<T> sequence, IEqualityComparer<T> comparer)
{
    var result = new List<T>();
    using (var enumerator = sequence.GetEnumerator())
    {
        while (enumerator.MoveNext())
        {
            var first = enumerator.Current;
            result.Add(first);
            if (enumerator.MoveNext())
            {
                if (!comparer.Equals(first, enumerator.Current))
                {
                    return false;
                }
            }
            else
            {
                // Odd number of elements: the last one carries over
                // to the next round unpaired.
                break;
            }
        }
    }
    // 0 or 1 survivors means every pair matched; otherwise halve again.
    // (<= rather than == also guards against infinite recursion on an
    // empty source, where result.Count would be 0.)
    return result.Count <= 1 || result.Zip(comparer);
}
Without tail call optimization this uses extra memory (with a worst case close to the amount of memory used by the original source). The call stack shouldn't get too deep, though, since no concrete IEnumerable implementation (at least none that I'm aware of) can hold more than int.MaxValue elements, which caps the recursion at about 31 levels.
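For completeness, here is a self-contained sketch of the approach in use. The host class name SequenceExtensions and the Main harness are just illustrative names, not part of the answer's code; the method bodies follow the pairwise-halving idea above, with a guard so an empty source counts as "all same" instead of recursing forever:

```csharp
using System;
using System.Collections.Generic;

static class SequenceExtensions
{
    public static bool AllSame<T>(this IEnumerable<T> source,
        IEqualityComparer<T> comparer = null)
    {
        if (source == null)
            throw new ArgumentNullException("source", "source cannot be null.");
        return source.Zip(comparer ?? EqualityComparer<T>.Default);
    }

    private static bool Zip<T>(this IEnumerable<T> sequence, IEqualityComparer<T> comparer)
    {
        var survivors = new List<T>();
        using (var e = sequence.GetEnumerator())
        {
            while (e.MoveNext())
            {
                var first = e.Current;
                survivors.Add(first);
                if (!e.MoveNext())
                    break; // odd element carries over unpaired
                if (!comparer.Equals(first, e.Current))
                    return false;
            }
        }
        // 0 or 1 survivors: every pair matched; otherwise halve again.
        return survivors.Count <= 1 || survivors.Zip(comparer);
    }

    static void Main()
    {
        Console.WriteLine(new[] { 3, 3, 3, 3, 3 }.AllSame());  // True
        Console.WriteLine(new[] { 3, 3, 4, 3 }.AllSame());     // False
        Console.WriteLine(new int[0].AllSame());               // True (vacuously)
        Console.WriteLine(
            new[] { "a", "A" }.AllSame(StringComparer.OrdinalIgnoreCase)); // True
    }
}
```

Note that the private Zip here doesn't clash with LINQ's Enumerable.Zip: within the class the private method wins overload resolution, and its IEqualityComparer<T> parameter doesn't match any LINQ Zip signature anyway.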