Many years ago when I was at university, I was taught to put a capital I in front of interface names. Is this still a convention? I see many interfaces that do not follow it.
That is not typically done in Java - it's a C#/.NET thing.
Personally, I dislike it since I think it leaks information that should not be leaked. Code should be written agnostic as to whether an object is being handled via an interface or directly via a class API.
This is certainly the .NET convention, and Microsoft follows it with its own interfaces in the .NET base class library. It was not the convention when I did Java, and I can't imagine that has changed, though I am not up to date with Java.
As an aside in C++ we also always used to prefix with an 'I' and indeed we always used to prefix classes with 'C'. We didn't carry this 'C' convention over to .NET.
It is a familiar programming convention, but its prevalence depends on the API. For example, the Java standard library generally does not follow it: collection types such as List, Set, and Map are interfaces, but they are named after the concepts they represent. On the other hand, some important APIs, such as Eclipse, use it consistently.
One argument I have heard against the prefix is that it bakes a language-level concern (the dichotomy between interfaces and classes) into the naming scheme. Another is that "everything in a public API should be an interface, not a class, anyway". Since many classes that implement an interface are named "XImpl", one could also argue that the prefix is superfluous. However, using it may still make sense when the type is merely a marker interface.
No, this is not the convention, at least not within the JDK. That said, if your shop has adopted it, I would suggest you follow suit even though it is not common practice outside: consistency within a team matters more than matching external conventions.
In Java the convention is to end interface names in "-able": Serializable, Cloneable, and so on. In .NET they begin with "I".
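As a sketch of the two styles side by side (Printable and Report are made-up names, not from any real API):

```java
// Java style: a capability interface named with the "-able" suffix.
interface Printable {
    String print();
}

// The .NET style would instead be:  interface IPrintable { ... }

class Report implements Printable {
    @Override
    public String print() {
        return "report contents";
    }
}

public class Demo {
    public static void main(String[] args) {
        Printable p = new Report();   // callers code against the capability
        System.out.println(p.print());
    }
}
```

The "-able" name describes what the object can do, rather than what kind of type declares it.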
I'd stick with the approach that is standard within your language (i.e., the "-able" suffix). I disagree with Software Monkey that it "leaks information that should not be leaked": it's perfectly fine for a name to indicate what kind of thing it is, IMHO.
It's used quite a lot in the Eclipse framework. It depends on individual project style, and it is not a standard convention, but it can certainly help code maintenance and searching in some cases.
I, err, dislike this sort of thing. I worked with one Java product that did it and it was maddening.
Ultimately this kind of thing dates from FORTRAN I in the 1950s, when variables named I, J, K, and so on (up to some letter I forget) were automatically integers and the rest of the alphabet was reals (floating point).
The "I" prefix on interfaces originated with COM (.NET merely inherited the convention) and is not a standard in Java. Look at the JDK or any other code developed by Sun and you won't see an I prefix. And it's not only Sun: most Java projects don't use it. Far from being a Java standard, the I prefix is an aberration adopted in some corners of the Java world.
Going the other way, for a concrete implementation of an interface you may find the suffix Impl on the class name, or a whole package named impl. But this isn't a standard either.
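For illustration, a minimal sketch of that pattern (UserService and UserServiceImpl are hypothetical names):

```java
// The interface gets the clean, client-facing name.
interface UserService {
    String greet(String name);
}

// The implementation gets the "Impl" suffix; it is often placed in a
// sub-package such as com.example.service.impl.
class UserServiceImpl implements UserService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class Demo {
    public static void main(String[] args) {
        UserService service = new UserServiceImpl();  // callers see only the interface name
        System.out.println(service.greet("Ada"));
    }
}
```

Note how this inverts the .NET convention: the decorated name goes on the class, so the interface keeps the name that callers actually write.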
I wouldn't recommend using it. You never know whether your interface will one day become an abstract class; then you'd have to rename every single usage, or stick with an awkwardly named abstract class that still carries the I prefix.
(Source: Robert C. Martin - Agile Software Development: Principles, Patterns and Practices)
This sort of coding style is called Hungarian Notation, because it was invented by Charles Simonyi at Microsoft, who happens to be Hungarian. The purpose of Hungarian Notation is to encode semantic information that cannot be expressed inside the type system into identifier names.
However, Java's type system is perfectly capable of distinguishing between interfaces (just try to extend one with a class), abstract classes (just try to instantiate one), and classes (just try to implement one), and so are most IDEs. So, using Hungarian Notation in this way is completely useless.
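To illustrate, here is a small sketch (Shape, AbstractShape, and Circle are made-up types): the compiler rejects a declaration like `class X extends Shape`, and the same distinctions are queryable at run time via reflection, so nothing is gained by repeating them in the name.

```java
import java.lang.reflect.Modifier;

interface Shape { }                                  // an interface
abstract class AbstractShape implements Shape { }    // an abstract class
class Circle extends AbstractShape { }               // a concrete class

public class Demo {
    public static void main(String[] args) {
        // class X extends Shape { }       // rejected: Shape is an interface
        // new AbstractShape() { };        // anonymous subclass needed: it's abstract

        System.out.println(Shape.class.isInterface());                           // true
        System.out.println(Modifier.isAbstract(AbstractShape.class.getModifiers())); // true
        System.out.println(Circle.class.isInterface());                          // false
    }
}
```

The type system already carries the information the prefix would encode.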
It has never been a convention as far as I know, and it certainly isn't now, at least in the Java community. (Some Java projects do use it, though. It's also sometimes used in C++, where it makes sense because C++ has no interface construct, so you need some way to mark them. It is also used on the CLI, where it makes absolutely no sense, for the same reason it makes no sense in Java.)