An interface, here meaning the code construct and not the design abstraction, supports a basic principle of code design called "loose coupling". There are more specific, derived principles that tell you HOW code should be loosely coupled, but in the main, loose coupling means that a change to one piece of code affects as small an area of the codebase as possible.
Consider, for example, a calculation of some arbitrary complexity. This calculation is used by 6 different classes, so to avoid duplicating code, it is encapsulated in its own class, Calculator. Each of the 6 classes contains a reference to a Calculator. Now, suppose your customer tells you that in one usage of Calculator, if certain conditions are met, a different calculation should be used instead. You might be tempted to simply put these two rules (the usage rule and the business rule) and the new calculation algorithm into the Calculator class, but if you do so, two things happen. First, you make Calculator aware of implementation details outside of its scope (how it's used) that it doesn't need to know and that can change again later. Second, the other 5 classes that use Calculator, which were working just fine as-is, will have to be recompiled because they reference the changed class, and will have to be retested to make sure the change you made for the 6th class didn't break their functionality.
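To make that concrete, here is a minimal sketch of the starting point in Java (the idea applies to any language with interfaces); the InvoiceGenerator class and the arithmetic are invented for illustration, since only Calculator is named above:

```java
// The starting point: each of the 6 classes holds a concrete Calculator.
class Calculator {
    double calculate(double input) {
        return input * 1.1; // stand-in for the arbitrary calculation
    }
}

class InvoiceGenerator { // one of the 6 dependent classes
    private final Calculator calculator = new Calculator();

    double total(double amount) {
        return calculator.calculate(amount);
    }
}

// The tempting-but-bad change would look something like:
//
//     double calculate(double input, boolean useSpecialRule) { ... }
//
// which bakes one caller's context into Calculator and forces all 6
// dependents to recompile.
```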
The "proper" solution to this is an interface. By defining an interface ICalculator, that exposes the method(s) called by the other classes, you break the concrete dependence of the 6 classes on the specific class Calculator. Now, each of the 6 classes can have a reference to an ICalculator. On 5 of these classes, you provide the same Calculator class they've always had and work just fine with. On the 6th, you provide a special calculator that knows the additional rules. If you had done this from the beginning, you wouldn't have had to touch the other 5 classes to make the change to the 6th.
The basic point is that classes should not have to know the exact nature of the objects they depend on; they should only have to know what those objects will do for them. By abstracting what an object DOES from what an object IS, multiple objects can do similar things, and the classes that require those things don't have to know the difference.
Loose coupling, along with "high cohesion" (objects should usually be specialists that know how to do a small, highly related set of tasks), is the foundation for most of the software design patterns you'll see as you progress into software development theory.
In contrast to a couple of the other answers, there are design methodologies (e.g. SOLID) that state that you should ALWAYS set up dependencies as abstractions, such as an abstract base class or an interface, and NEVER have one class depend on another concrete class. The logic is that in commercial software development, the initial set of requirements for an application is small, but it is a safe assumption, if not a guarantee, that the set of requirements will grow and change. When that happens, the software must grow with it. Building even small applications according to strict design principles lets you extend the software without the problems that are a natural consequence of bad design (large classes with lots of code, changes to one class affecting others in unpredictable ways, etc.).

However, the art of software development, and its time and money constraints, are such that you can (and have to) be smart and say "from what I know of the way this system will grow, this is an area that needs to be well-designed to allow adaptation, while this other section will almost surely never change". If those assumptions turn out to be wrong, you can go back and refactor the areas you deliberately designed simply, making them more robust before you extend them. But you have to be willing and able to go back and change the code after it's first implemented.
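As an illustration of that refactoring path (all names and the tax rule here are hypothetical): a class can start out with a deliberately simple concrete dependency, and an interface can be extracted later, once that area turns out to need the flexibility:

```java
// Before (kept deliberately simple): OrderService built its own
// concrete TaxCalculator.
//
//     class OrderService {
//         private final TaxCalculator tax = new TaxCalculator();
//         double totalWithTax(double net) { return net + tax.taxOn(net); }
//     }
//
// After the requirements grew: extract an interface and inject it.
interface ITaxCalculator {
    double taxOn(double net);
}

class TaxCalculator implements ITaxCalculator {
    public double taxOn(double net) {
        return net * 0.2; // invented flat rate, purely illustrative
    }
}

class OrderService {
    private final ITaxCalculator tax;

    OrderService(ITaxCalculator tax) { // dependency now arrives as an abstraction
        this.tax = tax;
    }

    double totalWithTax(double net) {
        return net + tax.taxOn(net);
    }
}
```

Callers of totalWithTax() are untouched; only the construction site changes, which is exactly the kind of contained refactoring described above.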