views: 1379
answers: 16

This is a subjective thing of course, but I don't see anything positive in prefixing interface names with an 'I'. To me, Thing is practically always more readable than IThing.

My question then is, why does this convention exist anyway? Indeed, it makes it easier to identify interfaces among all types. But wouldn't a similar argument extend to retaining Hungarian notation, which is now widely censured?

What's your argument for placing that awkward 'I' there? (Or, for that matter, what could be Microsoft's?)

+11  A: 

The reason I do it is simple: because that's the convention. I'd rather just follow it than have all my code look different, making it harder to read and learn.

Jon B
How about following the I-less convention in _all_ your code? (I think I'm gonna do precisely that in my next project.)
Frederick
I'd be sorely pissed off if I was looking for interfaces in your project and spent a good ten minutes longer than needed.
Will
You could do that, but I'd like all my code to look like everyone else's code (including MS). Everyone already knows that "I" means interface, so it makes it easier for others to understand my code.
Jon B
Add to that parity with the BCL...
Marc Gravell
Conventions aren't generally enforceable by compilers; they're usually enforced by your fellow developers cursing you when you don't follow them.
Will
By the way, my question is more about why this convention came into being in the first place, not why people should follow it now. An eventual critical mass of followers can hardly justify the initial institution of the idea.
Frederick
@Frederick - We know why it's useful and why we should follow it, but unless we can find the person who invented it, I think we can only guess at why it came about.
Jon B
"Eventual critical mass of followers can hardly justify initial institution of the idea" - true, but on this occasion I suspect it genuinely is useful, not just a legacy.
Marc Gravell
That's circular logic Jon. "Because that's the way it's done" might be good enough reason for a lot of folks, but some of us like to believe in what we do.
T.E.D.
@ted - my logic is that you should follow this convention because it will improve code readability and maintainability. I don't think that's circular.
Jon B
+5  A: 

I think it is better than adding an "Impl" suffix to your concrete class. It is a single letter, and the convention is well established. Of course you are free to use any naming you wish.

Otávio Décio
A possible exception to this is where the interface is exposed to the outside and the implementation is internal, for example in WCF. In that case I prefer exposing a service 'MyService' and having a 'MyServiceImpl' class rather than exposing something called 'IMyService' to the outside.
Rob Walker
@Rob Walker - you can do that with [ServiceContract(Name="MyService")]
Marc Gravell
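To illustrate the point in the two comments above, here is a minimal WCF-style sketch (the service and member names are hypothetical): the contract keeps the conventional 'I' prefix in code, while the Name property controls what external callers see.

    using System.ServiceModel;

    // The contract keeps the conventional 'I' prefix in code, but the
    // Name property controls what external callers see in the metadata.
    [ServiceContract(Name = "MyService")]
    public interface IMyService
    {
        [OperationContract]
        string Ping();
    }

    // The implementation stays internal; consumers only see the contract.
    internal class MyService : IMyService
    {
        public string Ping() { return "pong"; }
    }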
I think "Impl" is a Java convention, not C#, but some shops do carry this over into C#. Not a bad idea though, just as long as it's consistent.
Jon Limjap
A: 

To separate interfaces from classes.

Also (this is more of a personal observation than something dictated from on high), interfaces describe what a class does. The 'I' lends itself to this (I'm sure there is a grammatical construct that would be great to whip out right now); an interface that describes classes that validate would be "IValidate". One that describes matching behavior would be "IMatch".

Will
Even though an interface describes what a class "does", it shouldn't be described as a verb. It's still a thing. It's IValidatable or IMatchable.
Dave Van den Eynde
IDontAgreeWithYou
Will
Well, suit yourself. But if you look at the .NET Framework, you'll know what IMean.
Dave Van den Eynde
IUnderstand and ICouldDoThisAllDay
Will
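For what it's worth, a minimal sketch of the two flavors debated above (both declarations are purely illustrative):

    // Verb style, as in the answer:
    public interface IValidate { bool Validate(object value); }

    // Adjective style, as in the comments:
    public interface IValidatable { bool IsValid { get; } }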
+12  A: 

Well, one obvious consideration would be the (very common) IFoo and Foo pair (when abstracting Foo), but more generally it is often fundamental to know whether something is an interface vs a class. Yes, it is partly redundant, but IMO it is different from things like sCustomerName - here, the name itself (customerName) should be enough to understand the variable.

But with CustomerRepository - is that a class, or the abstract interface?

Also: expectation; the fact is, right or wrong, that is what people expect. That is almost reason enough.

Marc Gravell
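As a minimal sketch of the pairing described above (the repository names are just illustrative), the prefix lets the abstraction and its default implementation coexist without inventing a second name:

    public class Customer { public int Id; }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    public class CustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            // data access would go here
            return new Customer { Id = id };
        }
    }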
Frankly I'm sceptical about the "almost" :)
annakata
Yeah. It *is* reason enough.
Jim Mischel
"But with CustomerRepository - it that a class, or the abstract interface?" Well, in Visual Studio at least, just hover over it. Isn't this the case where coding tools can help us better then redundant conventions? Wasn't the Hungarian notation, too, killed by the Visual-Studio-hover?
Frederick
@Frederick - I read with my eyes, not the mouse... if I can't understand a line by looking at it, then the line is automatically wrong.
Marc Gravell
Well then, why do you _care_ whether it's an interface or a class, especially if you are writing client code? Not caring: isn't that the very purpose of abstraction, and hence OOP?
Frederick
Knowing whether I am coding against the concrete or the abstract is important for the abstraction to be useful.
Marc Gravell
And you should care. At least, between interface and class.
Marc Gravell
The hover will tell you. 'Go to declaration' will gleefully assist. The compiler will eventually correct you. So there's no question of something going wrong. The question really is, do I need to know the difference at the very first glance?
Frederick
Between an interface **type** and class **type**? Then yes.
Marc Gravell
What is specific about Foo that makes it different from any other possible implementation of IFoo? If Foo is the abstract type, name the concrete implementation(s) to reflect that they are a specific, concrete type.
Adam Jaskiewicz
The point, though, is that if you are trying to write an API that works with interfaces, **any** classes are a problem, since .NET doesn't allow multiple (class) inheritance. For interfaces this isn't an issue... (cont)
Marc Gravell
... As such, there is a fundamental difference between SomeMethod(Foo foo) and SomeMethod(IFoo) if (as is often desirable) you are trying to write an API that works with the interface. This is easy enough to get wrong, without ambiguity over what is an interface / what is a class. Keep it obvious.
Marc Gravell
I think SomeMethod(FlatFileFoo foo), SomeMethod(XMLFoo foo), etc.; vs. SomeMethod(Foo foo) makes it pretty obvious.
Adam Jaskiewicz
Okay, what if Visual Studio color coded interfaces differently from classes? Wouldn't that be enough to make the distinction evident? If so, isn't that what Microsoft should've offered in the first place instead of propagating communication-hindering conventions?
Frederick
I don't think we're ever going to agree here. I don't see a "communication-hindering convention"...
Marc Gravell
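To make the SomeMethod(Foo foo) vs SomeMethod(IFoo foo) distinction from the comments above concrete, here is a minimal sketch (IFoo and LegacyWidget are hypothetical names). Because a class gets only one base class but can implement any number of interfaces, an API that accepts the interface stays open to types that already have an unrelated base class:

    public interface IFoo
    {
        void Bar();
    }

    public static class Api
    {
        // Accepting the interface: any implementation can participate.
        public static void SomeMethod(IFoo foo) { foo.Bar(); }
    }

    // This type already spends its single inheritance slot on Component,
    // yet it can still be passed to Api.SomeMethod by adding the interface.
    public class LegacyWidget : System.ComponentModel.Component, IFoo
    {
        public void Bar() { }
    }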
A: 

The fact of the matter is that everyone understands it and part of writing better code is making it easy to read and understand.

lexx
+5  A: 

There is nothing wrong with NOT using the I convention for interfaces - just be consistent and make sure it works not just for you but for the whole team (if there is one).

grigory
+2  A: 

Because you usually have an IThing and a Thing. So instead of letting people come up with their own "conventions" for this recurring situation, a uniform one-size-fits-all convention was chosen. Echoing what others say, the de facto standardness is reason enough to use it.

Kurt Schelfthout
+2  A: 

"Do prefix interface names with the letter I to indicate that the type is an interface."

The guideline doesn't explain why you should use the I prefix, but the fact that this is now an established convention should be reason enough.

What do you have to gain by dropping the I prefix?

LukeH
"What do you have to gain by dropping the I prefix?" Readability.
Frederick
@Frederick, You *lose* readability by dropping the I. You can no longer see at-a-glance whether something is an interface or a class.
LukeH
+2  A: 

It's just a convention whose intent is to prevent name collisions. C# does not allow me to have a class and an interface both named Client, even if the file names are Client and IClient, respectively. I'm comfortable using the convention; if I had to offer a different convention I'd suggest using "Contract" as a suffix, e.g. ClientContract.

Jamie Ide
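A small sketch of the two options described above (all names are illustrative): the prefix pair on one hand, the suffix alternative on the other.

    // Prefix convention: the interface and its implementation can share a name.
    public interface IClient { void Send(string message); }
    public class Client : IClient { public void Send(string message) { } }

    // Suffix alternative from the answer: the interface carries the marker instead.
    public interface ClientContract { void Send(string message); }
    public class MailClient : ClientContract { public void Send(string message) { } }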
+1  A: 

It looks Hungarianish to me. Hungarian is generally considered a menace in strongly-typed languages.

Since C# is a Microsoft product and Hungarian notation was a Microsoft invention, I can see how C# might be susceptible to its influence.

T.E.D.
Using Hungarian notation to indicate data type is pretty useless in a strongly typed language. However, the original intention of Hungarian notation wasn't to indicate data type at all, but to indicate other, more crucial properties of the data, like horizontal vs. vertical coordinates. :)
Guffa
+18  A: 

Conventions (and criticism against them) all have a reason behind them, so let's run down some of the reasons behind this one:

  • Interfaces are prefixed with I to differentiate interface types from implementations - e.g., as mentioned above, there needs to be an easy way to distinguish between Thing and its interface IThing, so the convention serves this end.

  • Interfaces are prefixed with I to differentiate them from abstract classes - There is ambiguity when you see the following code:

    public class Apple: Fruit

    Without the convention one wouldn't know whether Apple was inheriting from another class named Fruit, or whether it was an implementation of an interface named Fruit, whereas IFruit makes this obvious:

    public class Apple: IFruit

    Principle of least surprise applies.

  • Not all uses of Hungarian notation are censured - Early Hungarian notation used a prefix indicating the type of the object, followed by the variable name (or sometimes an underscore before the variable name). This was useful for certain programming environments (think Visual Basic 4 - 6), but as true object-oriented programming grew in popularity it became impractical and redundant to specify the type. This became especially an issue when it came to IntelliSense.

    Today Hungarian notation is acceptable for distinguishing UI elements from the actual data they represent, e.g., txtObject for a textbox and lblObject for the label associated with that textbox, while the data behind the textbox is simply Object (see the sketch at the end of this answer).

    I also have to point out that the original use of Hungarian notation wasn't for specifying data types (called Systems Hungarian notation) but rather for specifying the semantic use of a variable name (called Apps Hungarian notation). Read more on it in the Wikipedia entry on Hungarian notation.
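A minimal WinForms-flavored sketch of the control-prefix style just described (the field names are purely illustrative):

    using System.Windows.Forms;

    public class CustomerForm : Form
    {
        // The prefix marks the kind of UI element; the bare name marks the data.
        private TextBox txtCustomerName;   // the textbox control
        private Label lblCustomerName;     // the label paired with it
        private string customerName;       // the underlying data the textbox edits
    }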

Jon Limjap
Also, interfaces probably feel bad because they don't really do anything. Adding the I makes up for that by making them feel a lot cooler. Same thing is done with IPod, IMac, IPhone, etc. The I makes them look cooler.
Svish
LOL! Of course interfaces serve a greater purpose than that, Svish.
Jon Limjap
+1 for "making them feel a lot cooler"
Ricardo
The I is also a useful mnemonic for the HAS-A property that most interfaces represent - `IHasData`, `IHasCookie`, `IHasCheezburger`...
thecoop
+1  A: 

I don't know exactly why they chose that convention; perhaps it was partly a matter of ensouling the class with "I", as in "I am Enumerable".

A naming convention more in line with the rest of the framework would be to incorporate the type in the name, as with the xxxAttribute and xxxException classes, making it xxxInterface. That's a bit lengthy though, and after all interfaces are something separate, not just another bunch of classes.

Guffa
A: 

I don't really like this convention. I understand that it helps out with the case when you have an interface and an implementation that would have the same name, but I just find it ugly. I'd still follow it if it were the convention where I am working, of course. Consistency is the point of conventions, and consistency is a very good thing.

I like to have an interface describe what the interface does in as generic a way as possible, for example, Validator. A specific implementation that validates a particular thing would be a ThingValidator, and an implementation with some abstract functionality shared by Validators would be an AbstractValidator. I would do this even if Thing is the only... well... thing that I'm validating, and Validator would be generic.

In cases where only one concrete class makes sense for an interface, I still try to describe something specific about that particular implementation rather than naming the interface differently just to prevent a name collision. After all, I'm going to be typing the name of the interface more often than the name of the implementation.
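A minimal sketch of the naming scheme described above (all type names are illustrative): the interface keeps the plain, generic name, while implementations say what is specific about them.

    public class Thing { }

    // The interface gets the most readable, most general name.
    public interface Validator<T>
    {
        bool Validate(T item);
    }

    // Shared functionality for concrete validators lives here.
    public abstract class AbstractValidator<T> : Validator<T>
    {
        public abstract bool Validate(T item);
    }

    // A specific implementation that validates a particular thing.
    public class ThingValidator : AbstractValidator<Thing>
    {
        public override bool Validate(Thing item) { return item != null; }
    }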

Adam Jaskiewicz
+1  A: 

I know the Microsoft guidelines recommend using the 'I' to mark a type as an interface. But if I remember correctly, this comes from IBM naming conventions: the leading 'I' for interfaces and the trailing *Impl for the implementations.

However, in my opinion the Java naming conventions are a better choice than the IBM naming convention (and not only in Java; for C# and any other OO programming language as well). Interfaces describe what an object is able to do if it implements the interface, and the description should be in verb form, e.g. Runnable, Serializable, Invoiceable, etc. IMHO this is a perfect description of what the interface represents.

Björn
+5  A: 

Thing is a more readable name than IThing. I'm from the school of thought that we should program to interfaces rather than to specific implementations, so generally speaking interfaces should have priority over implementations. I prefer to give the more readable name to the interface rather than the implementation (i.e., my interfaces are named without the 'I' prefix).

0sumgain
+2  A: 

In my opinion this 'I' is just visual noise. The IDE should show class and interface names differently. Fortunately the Java standard library doesn't use this convention.

Aivar