views:

298

answers:

6
A: 

I have tried to show the two candidates for the expression t[1][1] below. Both are of equal rank (CONVERSION), hence the ambiguity.

I think the catch here is that the built-in [] operator, as per 13.6/13, is defined as

T& operator[](T*, ptrdiff_t);

On my system ptrdiff_t is defined as 'int' (does that explain x64 behavior?)

template<typename ptr_t> 
struct TData 
{ 
    typedef typename boost::remove_extent<ptr_t>::type value_type; 
    ptr_t data; 

    value_type & operator [] ( size_t id ) { return data[id]; } 
    operator ptr_t & () { return data; } 
}; 

typedef float (&ATYPE) [100][100];

int main( int argc, char ** argv ) 
{ 
    TData<float[100][100]> t;    

    t[size_t(1)][size_t(1)] = 5; // note the cast. This works now. No ambiguity as operator[] is preferred over built-in operator

    t[1][1] = 5;                 // error, as per the logic given below for Candidate 1 and Candidate 2

    // Candidate 1 (CONVERSION rank)
    // User defined conversion from 'TData' to float array
    (t.operator[](1))[1] = 5;

    // Candidate 2 (CONVERSION rank)
    // User defined conversion from 'TData' to ATYPE
    (t.operator ATYPE())[1][1] = 6;

    return 0; 
}

EDIT:

Here is what I think:

For candidate 1 (operator[]), the conversion sequence S1 is: user-defined conversion -> standard conversion (int to size_t)

For candidate 2, the conversion sequence S2 is: user-defined conversion -> int to ptrdiff_t (for the first argument) -> int to ptrdiff_t (for the second argument)

The conversion sequence S1 is a subset of S2 and is supposed to be better. But here is the catch...

The quotes below from the Standard should help.

$13.3.3.2/3 states - Standard conversion sequence S1 is a better conversion sequence than standard conversion sequence S2 if — S1 is a proper subsequence of S2 (comparing the conversion sequences in the canonical form defined by 13.3.3.1.1, excluding any Lvalue Transformation; the identity conversion sequence is considered to be a subsequence of any non-identity conversion sequence) or, if not that...

$13.3.3.2 states- " User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor and if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2."

Here the first part of the conjunction, "if they contain the same user-defined conversion function or constructor", does not hold. So even if the second part, "if the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2", holds, neither S1 nor S2 is preferred over the other.

That's why gcc emits the phantom error message "ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second".

This explains the ambiguity quite well, IMHO.

Chubsdad
There is no `Convert argument 5 to size_t`. And what `User defined conversion` did you mean for `Candidate 1`?
Kirill V. Lyadvinsky
The conversion sequences that you have provided are wrong.
David Rodríguez - dribeas
't' is being converted to a floating point array in Candidate 1 using the class member conversion operator []. I get the point about 5 and 6 not getting converted to size_t. I will change the posting shortly
Chubsdad
`TData::operator[]` does not *convert* *t*, it *returns* a subelement of the member `data`.
David Rodríguez - dribeas
@David: Isn't that a conversion? E.g. with `A::operator int(){return m_data;}`, we say that it converts A to `int`.
Chubsdad
@chubsdad: Not really, `operator type()` are conversion operators and are special things, different from `operator@` where @ is a non-type. In general conversion operator `T1::operator T2` is a conversion because given a variable `v` of type `T1`, it can be used in a context where type `T2` is required as in `foo(v)` with `foo` defined as `void foo(T2)`, somehow *converting* the element to match `T2`. Note that there is no call to the operator. On the other hand, `operator[]` is an actual method call that the user must perform. It is not different (besides the call syntax) from other methods.
David Rodríguez - dribeas
@David: Can you give an example of operator@ where @ is a non-type?
Chubsdad
Any of `operator+`, `operator-`, `operator[]`, `operator()`... Only those that are `operator type` are conversion operators. The declaration/definition syntax differs in that the return type is implicitly the type of the operator, and they will be called in the same situations where non-user defined conversions kick in (implicitly, `static_cast`...)
David Rodríguez - dribeas
I get it. Thanks.
Chubsdad
@chubsdad since i saw you are quite interested in overload resolution in many of your posts, you might consider getting your hands on "C++ Templates - The Complete Guide". It contains an addendum about overload resolution, and also contains this very problem and explains it.
Johannes Schaub - litb
@litb: Thanks. Let me dust away the rust.
Chubsdad
A: 

I don't know what's the exact answer, but...

Because of this operator:

operator ptr_t & () { return data; }

there already exists a built-in [] operator (array subscript) which accepts size_t as an index. So we have two [] operators, the built-in one and the one defined by you. Both accept size_t, so this is probably considered an illegal overload.

//EDIT
This should work as you intended:

template<typename ptr_t>
struct TData
{
    ptr_t data;
    operator ptr_t & () { return data; }
};
adf88
Your `//EDIT` doesn't work in Intel C++.
Kirill V. Lyadvinsky
A: 

It seems to me that with

t[1][1] = 5;

the compiler has to choose between

value_type & operator [] ( size_t id ) { return data[id]; }

which would match if the int literal were to be converted to size_t, or

operator ptr_t & () { return data; }

followed by normal array indexing, in which case the type of the index matches exactly.


As to the error, it seems GCC, as a compiler extension, would like to choose the first overload for you, and you are compiling with the -pedantic and/or -Werror flags, which force it to stick to the word of the standard.

(I'm not in a -pedantic mood, so no quotes from the standard, especially on this topic.)

UncleBens
Why is there no `int` to `size_t` conversion in the second case? What type is required for `normal array indexing`? I thought it should be `size_t` too.
Kirill V. Lyadvinsky
@Kirill V. Lyadvinsky: No, there is a built in `operator[]` for every combination of a pointer type and an integer type. An `int` works without the need for conversion.
Charles Bailey
@Charles: 13.6/13: "there exist candidate operator functions of the form … `T`" So no, `ptrdiff_t` (not size_t) is special. Both `size_t` and `int` require a conversion. (What's more relevant here, though, is the lack of a conversion in calling Kirill's "fix" overload.)
Potatoswatter
@Potatoswatter: I stand corrected. I was looking at the definition of additive operators which doesn't make a distinction between integer types in "pointer plus integer" cases. I didn't know that the section on overload resolution made ptrdiff_t special.
Charles Bailey
@Potatoswatter: Although `int` might not require conversion; as far as I can see `ptrdiff_t` could validly be a typedef for `int` on some implementations.
Charles Bailey
@Charles: `ptrdiff_t` is more likely to be `long`, which is distinct from `int` even if it's the same size.
Potatoswatter
+3  A: 

With the expression:

t[1][1] = 5;

The compiler must focus on the left-hand side to determine what goes there, so the `= 5;` is ignored until the lhs is resolved. That leaves us with the expression t[1][1], which represents two operations, the second taking the result of the first, so the compiler need only consider the first part of the expression: t[1]. The actual call has the form (TData&)[(int)].

The call does not exactly match any function, as operator[] for TData is defined as taking a size_t argument, so to use it the compiler would have to convert 1 from int to size_t with an implicit conversion. That is the first choice. The other possible path is applying the user-defined conversion to convert TData<float[100][100]> into float[100][100].

The int to size_t conversion is an integral conversion and is ranked as Conversion in Table 9 of the standard, as is the user defined conversion from TData<float[100][100]> to float[100][100] conversion according to §13.3.3.1.2/4. The conversion from float [100][100]& to float (*)[100] is ranked as Exact Match in Table 9. The compiler is not allowed to choose from those two conversion sequences.

Q1: Not all compilers adhere to the standard in the same way. It is quite common to find out that in some specific cases a compiler will perform differently than the others. In this case, the g++ implementors decided to whine about the standard not allowing the compiler to choose, while the Intel implementors probably just silently applied their preferred conversion.

Q2: When you change the signature of the user defined operator[], the argument matches exactly the passed in type. t[1] is a perfect match for t.operator[](1) with no conversions whatsoever, so the compiler must follow that path.

David Rodríguez - dribeas
+1: Very good explanation.
Gorpik
I think that you can make this answer simpler in the second paragraph. You don't have to look at the second `[]` at all. You only need to consider how `t[1]` is interpreted. The types are `TData` (lvalue) and `int` (rvalue).
Charles Bailey
@Charles: I realized that the second pair of `[]` did not need to be accounted for, since it depends on the first result.
David Rodríguez - dribeas
Table 9 is only for Standard Conversion sequences - "Each conversion in Table 9 also has an associated rank (Exact Match, Promotion, or Conversion). These are used to rank standard conversion sequences (13.3.3.2).". So does the logic really hold good?
Chubsdad
13.3.3.1.2/4 refers to base class conversions. `float[100][100]` is not a base class of `TData<>` so that's a red herring.
Potatoswatter
Also, @chubsdad is correct: the conversion function is a user-defined conversion whereas `int` to `size_t` is a standard conversion, which is preferable according to 13.3.3.1/3.
Potatoswatter
However, despite going through all this work to prove that option #1 is better than option #2, I don't think this proves anything, since the compiler already told us that. The error message refers to "ISO C++", seemingly implying there is some phantom text which undoes the significance of this ranking.
Potatoswatter
Have updated my post about reason behind ambiguity
Chubsdad
A: 

Overload resolution is a headache. But since you stumbled on a fix (eliminating the conversion of the index operand to operator[]) which is too specific to the example (literals are of type int, but most variables you'll be using aren't), maybe you can generalize it:

template< typename IT>
typename boost::enable_if< typename boost::is_integral< IT >::type, value_type & >::type
operator [] ( IT id ) { return data[id]; }

Unfortunately I can't test this because GCC 4.2.1 and 4.5 accept your example without complaint under --pedantic. Which really raises the question whether it's a compiler bug or not.

Also, once I eliminated the Boost dependency, it passed Comeau.

Potatoswatter
Instead of `int` or `size_t`, he can use `ptrdiff_t` and then it will work with any Standard conforming compiler.
Johannes Schaub - litb
@Johannes: The point is allowing either without conversion… would you care to weigh in on the issue?
Potatoswatter
@Potatoswatter well, conversions will not hurt. If you use `ptrdiff_t`, then whatever argument the user passes, on a conforming compiler he will never get ambiguities, because the builtin operator uses `ptrdiff_t` too (index can be negative). This is why i tend to use `ptrdiff_t` in my classes' index operators.
Johannes Schaub - litb
@Johannes: Are you sure the error message Kirill got refers to multiple standard conversion paths? Anyway, I don't really pretend to understand the rules. I'm just working from the empirical observation that eliminating the standard conversion fixed the error.
Potatoswatter
@Potatoswatter i have added an example to my answer, which is easier to understand without all the operators rules around, i think.
Johannes Schaub - litb
+2  A: 

It's actually quite straightforward. For t[1], overload resolution has these candidates:

Candidate 1 (builtin: 13.6/13) (T being some arbitrary object type):

  • Parameter list: (T*, ptrdiff_t)

Candidate 2 (your operator)

  • Parameter list: (TData<float[100][100]>&, something unsigned)

The argument list is given by 13.3.1.2/6:

The set of candidate functions for overload resolution is the union of the member candidates, the non-member candidates, and the built-in candidates. The argument list contains all of the operands of the operator.

  • Argument list: (TData<float[100][100]>, int)

You see that the first argument matches the first parameter of Candidate 2 exactly. But it needs a user defined conversion for the first parameter of Candidate 1. So for the first parameter, the second candidate wins.

You also see that the outcome of the second position depends. Let's make some assumptions and see what we get:

  1. ptrdiff_t is int: The first candidate wins, because it has an exact match, while the second candidate requires an integral conversion.
  2. ptrdiff_t is long: Neither candidate wins, because both require an integral conversion.

Now, 13.3.3/1 says

Let ICSi(F) denote the implicit conversion sequence that converts the i-th argument in the list to the type of the i-th parameter of viable function F.

A viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then ... for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that ...

For our first assumption, we don't get an overall winner, because Candidate 2 wins for the first parameter, and Candidate 1 wins for the second parameter. I call it the criss-cross. For our second assumption, the Candidate 2 wins overall, because neither parameter had a worse conversion, but the first parameter had a better conversion.

For the first assumption, it does not matter that the integral conversion (int to unsigned) in the second parameter is less of an evil than the user defined conversion of the other candidate in the first parameter. In the criss-cross, rules are crude.


That last point might still confuse you, because of all the fuss around, so let's make an example

void f(int, int) { }
void f(long, char) { }

int main() { f(0, 'a'); }

This gives you the same confusing GCC warning (which, I remember, was actually confusing the hell out of me when I first received it some years ago), because 0 converts to long worse than 'a' to int - yet you get an ambiguity, because you are in a criss-cross situation.

Johannes Schaub - litb