I'm implementing a COM interface whose methods should return int values: either S_OK or E_FAIL. Returning S_OK is fine, since I get that back from another call (Marshal.QueryInterface), but if I want to return a failure, what actual value do I use for E_FAIL?

(It's such a basic fundamental question that it's hard to find an answer to)

Assuming it's a specific number defined in the Win32 API, is there way to use it within .net code without declaring my own constant?

thanks!

Update (answered below):

Maybe I'm being a complete plonker, but I'm having problems with this. According to my Platform SDK, HRESULT is a LONG, which is a 32-bit signed integer, right? So the possible values are -2,147,483,648 to 2,147,483,647. But 0x80004005 = 2,147,500,037, which is greater than 2,147,483,647. What gives!?

This means when I try to put this in my code:

const int E_FAIL = 0x80004005;

I get a compiler error Cannot implicitly convert type 'uint' to 'int'.

Update 2:

I'm going to declare it like this:

const int E_FAIL = -2147467259;

because if I try to do something like this:

const UInt32 E_FAIL = 0x80004005;
return (Int32)E_FAIL;

I get a compiler error Constant value '2147500037' cannot be converted to a 'int' (use 'unchecked' syntax to override)
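As that error message hints, wrapping the cast in the unchecked keyword lets the constant through without the intermediate UInt32 declaration. A minimal sketch (0x80004005 is the E_FAIL value from WinError.h):

```csharp
using System;

class Program
{
    // 'unchecked' suppresses the compile-time overflow check, so the
    // bit pattern 0x80004005 can be stored directly in a signed int.
    const int E_FAIL = unchecked((int)0x80004005);

    static void Main()
    {
        Console.WriteLine(E_FAIL); // -2147467259
    }
}
```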

Phew! Who knew how tricky it would be to declare a standard return value.... Somewhere there must be a class lurking that I should have used like return Win32ReturnCodes.E_FAIL; ... sigh

ULTIMATE SOLUTION:

I now do this by taking the (massive but very useful) HRESULT enum from pinvoke.net and adding it to my solution. Then I use it something like this:

return HRESULT.S_OK;
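For reference, a trimmed-down sketch of what such an enum can look like — only the two values from this question; the pinvoke.net version defines many more:

```csharp
using System;

// A minimal HRESULT enum in the style of the pinvoke.net one,
// reduced to the two values used in this question.
enum HRESULT : int
{
    S_OK   = 0,
    E_FAIL = unchecked((int)0x80004005),
}

class Program
{
    // Hypothetical COM-style method: returns the HRESULT as a plain int.
    static int DoSomething(bool succeed)
    {
        return succeed ? (int)HRESULT.S_OK : (int)HRESULT.E_FAIL;
    }

    static void Main()
    {
        Console.WriteLine(DoSomething(false)); // -2147467259
    }
}
```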
+3  A: 

E_FAIL is Hex 80004005 in WinError.h

You can see the full WinError.h file here. You don't have to install C++ just to see the values.

UPDATE:

The signed and unsigned versions of 0x80004005 are just two representations of the same bit pattern. If you're getting a casting error, use the negative signed value; when cast back to an unsigned 32-bit integer it gives the "correct" value. You can test this yourself in C#, e.g.

This code

    static void Main(string[] args)
    {
        UInt32 us = 0x80004005;
        Int32 s = (Int32)us;

        Console.WriteLine("Unsigned {0}", us);
        Console.WriteLine("Signed {0}", s);
        Console.WriteLine("Signed as unsigned {0}", (UInt32)s);

        Console.ReadKey();
    }

will produce this output

  • Unsigned 2147500037
  • Signed -2147467259
  • Signed as unsigned 2147500037

So it's safe to use -2147467259 for the value of E_FAIL

Binary Worrier
thanks for all the help!
Rory
+6  A: 

From WinError.h for Win32

#define E_FAIL _HRESULT_TYPEDEF_(0x80004005L)

To find answers like this, use Visual Studio's file search to search the header files in the VC\include directory of your Visual Studio installation:

C:\Program Files\Microsoft Visual Studio 9.0\VC\include
Rob Prouse
A: 

You should download the platform SDK (link)

I searched through the code and found the following in winerror.h

typedef long HRESULT;

#ifdef RC_INVOKED
#define _HRESULT_TYPEDEF_(_sc) _sc
#else // RC_INVOKED
#define _HRESULT_TYPEDEF_(_sc) ((HRESULT)_sc)
#endif // RC_INVOKED

#define E_FAIL _HRESULT_TYPEDEF_(0x80004005L)
#define S_OK  ((HRESULT)0x00000000L)

You should return E_FAIL instead of the value. Simply include winerror.h.

Cedrik
He tagged his post C#, so he can't include winerror.h and needs to define the value himself; hence the question. This is very common in COM and P/Invoke work.
Rob Prouse
A: 

I've used this:

const int EFail = int.MinValue + 0x00004005;

to compromise between readability (if you're used to the hex codes) and C#'s restriction.
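A quick sanity check that this arithmetic lands on the same bit pattern as the WinError.h definition:

```csharp
using System;

class Program
{
    static void Main()
    {
        // int.MinValue is 0x80000000, so adding the low word 0x4005
        // reconstructs the full 0x80004005 bit pattern.
        const int EFail = int.MinValue + 0x00004005;

        Console.WriteLine(EFail);                                  // -2147467259
        Console.WriteLine(unchecked((uint)EFail).ToString("X8"));  // 80004005
    }
}
```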

cliffwi