The best you could do is write something that keeps a buffer (perhaps a `Queue<T>`) of the data consumed from one iterator but not the other - which would get messy/expensive if you advanced one iterator by 1M positions but left the other alone. I really think you would be better off rethinking the design, though, and either calling `GetEnumerator()` again (i.e. running another `foreach`) to start over, or buffering the data (if it is short) in a list/array/whatever.
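To make the buffering approach concrete, here is a minimal sketch (the `Tee.Split` name and shape are mine, not a library API; it is single-threaded and the queues grow without bound if one consumer races ahead - exactly the messiness described above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Tee
{
    // Split one sequence into two independent enumerators that share
    // a single pass over the source. Each queue holds the items the
    // *other* consumer has pulled from the source but this one has
    // not yet seen.
    public static (IEnumerator<T>, IEnumerator<T>) Split<T>(IEnumerable<T> source)
    {
        var iterator = source.GetEnumerator();
        var buffers = new[] { new Queue<T>(), new Queue<T>() };

        IEnumerator<T> Consume(int index)
        {
            while (true)
            {
                if (buffers[index].Count > 0)
                {
                    // Item already pulled from the source by the other consumer.
                    yield return buffers[index].Dequeue();
                }
                else if (iterator.MoveNext())
                {
                    // Fresh item: remember it for the other consumer's backlog.
                    buffers[1 - index].Enqueue(iterator.Current);
                    yield return iterator.Current;
                }
                else
                {
                    yield break;
                }
            }
        }

        return (Consume(0), Consume(1));
    }
}
```

Usage: `var (a, b) = Tee.Split(Enumerable.Range(1, 5));` - each of `a` and `b` then sees the full sequence, independently, while the source is only enumerated once.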
Nothing elegant built in.
Update: perhaps an interesting alternative design here is "PushLINQ"; rather than cloning the iterator, it allows multiple "things" to consume the same data-feed at the same time.
In this example (lifted from Jon's page) we calculate multiple aggregates in parallel:
// Create the data source to watch
DataProducer<Voter> voters = new DataProducer<Voter>();
// Add the aggregators
IFuture<int> total = voters.Count();
IFuture<int> adults = voters.Count(voter => voter.Age >= 18);
IFuture<int> children = voters.Where(voter => voter.Age < 18).Count();
IFuture<int> youngest = voters.Min(voter => voter.Age);
IFuture<int> oldest = voters.Select(voter => voter.Age).Max();
// Push all the data through
voters.ProduceAndEnd(Voter.AllVoters());
// Write out the results
Console.WriteLine("Total voters: {0}", total.Value);
Console.WriteLine("Adult voters: {0}", adults.Value);
Console.WriteLine("Child voters: {0}", children.Value);
Console.WriteLine("Youngest voter age: {0}", youngest.Value);
Console.WriteLine("Oldest voter age: {0}", oldest.Value);
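The essence of that push model can be sketched in a few lines with plain events (this is my own toy illustration of the idea, not the actual PushLINQ/MiscUtil API): one producer raises an event per item, every subscribed consumer updates its own state, and the whole feed is enumerated exactly once.

```csharp
using System;
using System.Collections.Generic;

// Toy push-based source: subscribers are "pushed" each item in turn,
// so any number of aggregations share a single pass over the data.
class PushSource<T>
{
    public event Action<T> ItemProduced;
    public event Action Ended;

    public void ProduceAndEnd(IEnumerable<T> items)
    {
        foreach (var item in items)
        {
            ItemProduced?.Invoke(item);
        }
        Ended?.Invoke();
    }
}
```

For example, a count and a max computed in one pass:

```csharp
var source = new PushSource<int>();
int count = 0, max = int.MinValue;
source.ItemProduced += x => count++;
source.ItemProduced += x => max = Math.Max(max, x);
source.ProduceAndEnd(new[] { 16, 42, 7 });
// count is now 3, max is now 42
```

The real PushLINQ goes further, wrapping this pattern in the familiar LINQ operators (`Count`, `Where`, `Min`, etc.) and returning `IFuture<T>` results as in the example above.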