I think that the reply from kvb answers most of the concerns. However, a more precise answer is that ranges are evaluated lazily at runtime. Here are some more details about how ranges work.
When you use, for example, `1 .. 10` somewhere in your code, it is simply translated to a method call. Which call depends on the context and the numeric types used. For `[ 1 .. 10 ]` or other sequence expressions, and for a `for` loop, the compiler will generate something like `RangeInt32(1, 1, 10)` (the middle parameter is the step).
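To illustrate, here is a small sketch. `RangeInt32` lives in the `Microsoft.FSharp.Core.Operators.OperatorIntrinsics` module; it is intended for compiler use, so calling it directly (as below) is only for demonstration, and the explicit step syntax shows where the middle argument comes from:

```fsharp
// The range expression and the call the compiler generates for it
// produce the same sequence of numbers.
let viaRange = [ 1 .. 10 ]
let viaCall =
    Microsoft.FSharp.Core.Operators.OperatorIntrinsics.RangeInt32 1 1 10
    |> List.ofSeq

// A step can be written between the bounds; it becomes the middle argument.
let odds = [ 1 .. 2 .. 10 ]   // 1, 3, 5, 7, 9
```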
When you have something like `obj.[ 1 .. ]` and `obj` is some object that supports slicing (e.g. a matrix type), then it will be translated to `obj.GetSlice(Some(1), None)` (note that in this case the upper or lower bound may be missing).
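Arrays are a built-in type that supports this slicing syntax, so they make a quick illustration of how a missing bound turns into `None`:

```fsharp
let arr = [| 10; 20; 30; 40 |]

// Compiled as arr.GetSlice(Some 1, None): the upper bound is missing.
let fromSecond = arr.[1 ..]

// Compiled as arr.GetSlice(None, Some 2): the lower bound is missing.
let upToThird = arr.[.. 2]
```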
Now it is quite easy to answer your question: a range is a method call that will be evaluated at runtime. However, it is important to note that the whole range may not need to be evaluated! For example:
```fsharp
let nums = seq { 1 .. 10 } |> Seq.take 1
```
The sequence expression will be translated to a call to `RangeInt32`. This will just return a value of type `seq<int>`, which is evaluated lazily. The call to `Seq.take 1` takes only the first element, so only the first number from the range will be needed and evaluated.
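You can make the laziness visible with a side effect. This sketch uses a `seq` expression that prints each number it produces; only the element actually consumed is ever generated:

```fsharp
// A sequence that announces each element it produces.
let nums =
    seq {
        for i in 1 .. 10 do
            printfn "producing %d" i
            yield i
    }
    |> Seq.take 1

// Nothing has been printed yet; the pipeline above only builds a
// lazy sequence. Enumeration triggers evaluation:
let first = Seq.head nums   // prints "producing 1" and nothing else
```

Because `Seq.take` and `Seq.head` only pull what they need, "producing 2" through "producing 10" are never printed.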
I don't think your own implementation of ranges could behave any differently than the standard one; however, you can provide your implementation as a member of an object. Then you could write `myObj.[1 .. 10]` (and the result could be any type you want). To do that, you'll need an instance method `GetSlice`, which is discussed in more detail here.
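As a sketch, here is a minimal hypothetical type (`Row` is my own name, not a library type) that defines `GetSlice`, so the slicing syntax works on it:

```fsharp
// A tiny wrapper over an int array that supports obj.[lo .. hi].
// GetSlice receives an option for each bound; None means the bound
// was omitted in the source, e.g. r.[1 ..].
type Row(values: int[]) =
    member this.GetSlice(lower: int option, upper: int option) =
        let lo = defaultArg lower 0
        let hi = defaultArg upper (values.Length - 1)
        values.[lo .. hi]

let r = Row [| 10; 20; 30; 40 |]
let middle = r.[1 .. 2]   // calls GetSlice(Some 1, Some 2)
let tail   = r.[1 ..]     // calls GetSlice(Some 1, None)
```

The return type is entirely up to you; `GetSlice` here returns an array, but it could build another `Row`, a lazy sequence, or anything else.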