I am looking for a better explanation of prototype implementation vs. real implementation than the one on Wikipedia. How real is the former? I mean, is it actually implemented in a language, or is it just some mapping of cases?
It really depends on what your goal is. Prototypes are about speed and about reducing the risks that come from uncertainty.
So, for instance, a new web page might be prototyped as a Flash page, where all the inputs and outputs are faked to give the feel of the application. Or you might prototype a connection to an unknown piece of software or middleware just to see how hard it is to get something functional.
Often a prototype would be classed as a "hack", where nasty shortcuts are taken to get something working, because the goal of the prototype isn't a working or sellable product but to see whether such a product could reasonably be created.
A prototype implementation is a set of code that shows off the functionality you're trying to achieve, without all the plumbing necessary to make it fully functional. There's generally little or no actual architectural design in a prototype. The mantra is: "Just throw enough together to demo to the client."
If your software is a Windows GUI application, you might use C# and the WinForms framework to actually create a GUI, but not go through the effort of tying it to a database or writing the code that makes it functional. Any code you do write typically has everything hardcoded into the source.
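That kind of hardcoding might look like this minimal Python sketch (the function names and the demo data are invented for illustration): the prototype's "data layer" just returns canned records, where a real implementation would open a database connection and run a query.

```python
# Prototype data layer: every query returns canned, hardcoded records.
# A real implementation would connect to a database and run SQL; the
# prototype only needs to put plausible-looking data on the screen.

def get_customer_orders(customer_id):
    """Prototype stub: ignores its argument and returns fixed demo data."""
    return [
        {"order_id": 1001, "item": "Widget", "quantity": 3},
        {"order_id": 1002, "item": "Gadget", "quantity": 1},
    ]

def format_orders(orders):
    """Render orders as plain text lines for the demo UI."""
    return [f"#{o['order_id']}: {o['quantity']} x {o['item']}" for o in orders]

if __name__ == "__main__":
    # Whatever customer id the demo passes in, the same rows come back.
    for line in format_orders(get_customer_orders(customer_id=42)):
        print(line)
```

Swapping the stub for real data access later usually means a rewrite, not a refactor, which is exactly why such code is normally thrown away.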
A "real" implementation is one that is fully functional on its own, and is the designed product.
I would say a prototype implementation is an actual implementation that is sufficient to demonstrate the features of the product that you are developing (especially, what makes it differ from other similar products).
This often comes at the expense of other features: there will be shortcuts, bugs, and things simply not implemented.
The degree of completion (compared with a full production implementation) will depend on the expectations of the stakeholders to whom you're showing the prototype.
Projects have risks; prototypes are an exercise in risk-management. The goal of your prototype should be to eliminate an element of high risk in your project.
For many projects, there is a risk to the customer that the development team will produce something that isn't useful, and a risk to the development team that the customer will change what they want when they see it. A mock-up showing what the functionality will look like will help reduce these risks, by giving the customer a clearer view of the final product. (This is such a common problem, that some people assume that is the only sort of prototype.)
Other prototypes can address different risks, though. I have worked on projects where the prototypes were architectural, proving that the communications would occur as expected (but the messages being communicated were nonsense), or performance - proving the hardware was capable of handling the level of processing required (but the processing performed was nonsense).
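An architectural prototype of that kind can be tiny. The Python sketch below (names invented for illustration) proves only that two components can exchange a message over a socket; the payload itself is deliberately random garbage, just as the answer describes.

```python
# Architectural prototype: prove the communication path works while the
# message contents are deliberately nonsense.
import os
import socket

def prove_round_trip(payload: bytes) -> bytes:
    """Send a payload across a local socket pair and read it back.

    Only the plumbing is under test here; in a real system the bytes
    would be protocol messages, not random data.
    """
    sender, receiver = socket.socketpair()
    try:
        sender.sendall(payload)
        received = receiver.recv(len(payload))
    finally:
        sender.close()
        receiver.close()
    return received

if __name__ == "__main__":
    nonsense = os.urandom(64)  # placeholder data, meaningless by design
    assert prove_round_trip(nonsense) == nonsense
    print("communication path works")
```

If this much fails (or performs badly), you have learned something important before writing any real protocol code, which is the whole point of a risk-reduction prototype.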
Prototypes don't need to be real code - in some cases, a prototype can be a paper model (e.g. showing the look of pages of a web-site).
Typically, if there is code in a prototype, the code is expected to be thrown away. (Fred Brooks, in The Mythical Man-Month, put it classically: "plan to throw one away; you will, anyhow.")
Sometimes, it is planned to keep the prototype, evolving the code slowly, until all the hastily-written, low-quality code has been replaced. This is called Evolutionary Prototyping.
Sometimes, dangerously, the plan is to throw the prototype away, but it is kept anyway. The risk is that low-quality, proof-of-concept code ends up in the final product and becomes a maintenance burden.
Oddthinking and the others describe the ideals of prototyping perfectly, and I wouldn't dispute any of that. But I do think it's worth putting the cynical viewpoint, if only as a warning:
A prototype implementation can often be little more than an excuse to skimp on doing things well in order to do them soon.
There is almost always a tension between these demands, and it is usually necessary to compromise somewhere along the line. In an ideal world prototyping would help to strike that balance.
In practice, customers (and salespeople) are often not big on the "throw one away" thing. If they see something that looks like it works, they tend to assume it actually is pretty close to working. And there can then be a fair amount of pressure to hack the rest of the functionality onto your bundle of chewing gum and string and deploy it to the world.
(Salespeople in particular tend to be pretty short-term thinkers.)
I can think of three things you can do to mitigate this sort of trouble:
Make sure that your prototypes are really hard to mistake for the real thing. Paper models are great for this, web page mockups and UI demos much less so. To the extent that circumstances allow, make things look as useless and unfinished as they really are.
If you have to produce "prototype" code that has any danger of seeing the light of day - even if it seems like a tiny chance - write it as well as you can. Imagine coming back to it in 18 months, when it's being used by 100,000 people.
Find yourself a better company to work for. This isn't so easy. Programmers can deride employers that operate like this, but to some extent they all do: a business needs income, which commonly means sales. Customers have pressures of their own and don't want to hear about yours. So even if you think you've found yourself some developer paradise where no-one would ever dream of letting a rough hack out the door, it still pays to at least consider points 1 and 2.