Can anyone explain these pointers to me with a suitable example ... and when are these pointers used?
Homework, but look here: http://wiki.answers.com/Q/What_are_near_far_and_huge_pointers_in_C
In the old days, according to the Turbo C manual, a near pointer was merely 16 bits, for when your entire code and data fit in a single segment. A far pointer was composed of a segment as well as an offset, but no normalisation was performed. A huge pointer was automatically normalised. Two far pointers could conceivably point to the same location in memory yet compare as different, whereas normalised huge pointers pointing to the same memory location would always be equal.
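In Turbo C syntax that looked roughly like this (a sketch using the vendor keywords; none of it is standard C, and the values are arbitrary examples):

    char near *n;                            /* 16-bit offset, relative to DS     */
    char far  *a = (char far *)0x12345678L;  /* segment 1234h, offset 5678h       */
    char far  *b = (char far *)0x100079B8L;  /* different value, same physical
                                                byte: 12340h + 5678h = 179B8h
                                                  and 10000h + 79B8h = 179B8h     */
    char huge *h;                            /* like far, but kept normalised, so
                                                equal addresses compare equal     */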
This terminology was used on 16-bit architectures.
In 16-bit systems, data was partitioned into 64 KB segments. Each loadable module (program file, dynamically loaded library, etc.) had an associated data segment, which could store at most 64 KB of data.
A NEAR pointer was a pointer with 16-bit storage, and referred to data (only) in the current module's data segment.
16-bit programs that needed more than 64 KB of data could call special allocators that would return a FAR pointer: a data segment id in the upper 16 bits, and an offset into that data segment in the lower 16 bits.
Yet larger programs would want to deal with more than 64 KB of contiguous data. A HUGE pointer looks exactly like a far pointer - it has 32-bit storage - but the allocator has taken care to arrange a range of data segments with consecutive ids, so that simply incrementing the data segment selector reaches the next 64 KB chunk of data.
The underlying C and C++ language standards never really recognized these concepts officially in their memory models - all pointers in a C or C++ program are supposed to be the same size. So the NEAR, FAR and HUGE attributes were extensions provided by the various compiler vendors.
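For instance, Turbo C exposed the far heap through farmalloc() in <alloc.h> (a sketch; other vendors had equivalents, e.g. Microsoft C's _fmalloc()):

    #include <alloc.h>      /* Turbo C far-heap allocator; not standard C */

    void demo(void)
    {
        /* NEAR pointer: 16 bits, only valid within this module's data
           segment. */
        char near *small_buf;

        /* FAR pointer: 32 bits of segment:offset, returned by the special
           allocator, and usable anywhere in the 1 MiB address space. */
        char far *big_buf = (char far *)farmalloc(60000L);

        if (big_buf != NULL)
            farfree(big_buf);
    }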
All of the stuff in this answer is relevant only to the old 8086 and 80286 segmented memory model.
near: a 16-bit pointer that can address any byte in a 64K segment
far: a 32-bit pointer that contains a segment and an offset. Note that because segments can overlap, two different far pointers can point to the same address.
huge: a 32-bit pointer in which the segment is "normalised" so that no two huge pointers point to the same address unless they have the same value (see the sketch after this list).
tee: a drink with jam and bread.
That will bring us back to doh oh oh oh
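To make the huge case concrete (a sketch in Turbo C style; the Borland docs describe farmalloc() as able to allocate blocks over 64K, which must then be walked with huge pointers, but treat the details as approximate):

    #include <alloc.h>

    void clear_big(unsigned long n)      /* n may exceed 65536 */
    {
        char huge *p = (char huge *)farmalloc(n);
        unsigned long i;

        if (p == NULL)
            return;
        for (i = 0; i < n; i++)
            p[i] = 0;   /* huge arithmetic renormalises at each step, so the
                           offset doesn't wrap at the 64K segment boundary */
        farfree((void far *)p);
    }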
and when these pointers are used?
In the 1980s and '90s, until 32-bit Windows became ubiquitous.
The primary example is the Intel X86 architecture.
The Intel 8086 was, internally, a 16-bit processor: all of its registers were 16 bits wide. However, the address bus was 20 bits wide (1 MiB). This meant that you couldn't hold an entire address in a register, limiting you to the first 64 kiB.
Intel's solution was to create 16-bit "segment registers" whose contents would be shifted left four bits and added to the address. For example:
DS ("Data Segment") register: 1234 h
DX ("D eXtended") register: + 5678h
------
Actual address read: 179B8h
This created the concept of a 64 kiB segment. Thus a "near" pointer would just be the contents of the DX register (5678h), and would be invalid unless the DS register was already set correctly, while a "far" pointer was 32 bits (12345678h: DS followed by DX) and would always work (but was slower, since you had to load two registers and then restore the DS register when done).
However, note that you could have two "far" pointers with different values that point to the same address. For example, the far pointer 100079B8h points to the same place as 12345678h. Thus, comparing far pointers for equality was unreliable: the pointers could differ yet still point to the same place.
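Both effects are easy to check with a few lines of portable C (a sketch of the address arithmetic only, not compiler-specific code):

    /* 8086 physical address: (segment << 4) + offset, wrapped to 20 bits. */
    unsigned long physical(unsigned int seg, unsigned int off)
    {
        return (((unsigned long)seg << 4) + off) & 0xFFFFFUL;
    }

    /* physical(0x1234, 0x5678) == 0x179B8
       physical(0x1000, 0x79B8) == 0x179B8  -- two far pointers, one byte */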
This was where I decided that Macs (with Motorola 68000 processors at the time) weren't so bad after all, so I missed out on huge pointers. IIRC, they were just far pointers kept in a normalised form, so that each address had exactly one representation and pointer comparison worked.
Motorola didn't have this problem with their 6800 series of processors, since they were limited to 64 kiB. When they created the 68000 architecture, they went straight to 32-bit registers, and thus never had need for near, far, or huge pointers. (Instead, their problem was that only the bottom 24 bits of the address actually mattered, so some programmers (notoriously Apple) would use the high 8 bits as "pointer flags", causing problems when address buses expanded to the full 32 bits (4 GiB).)
Linus Torvalds just held out for the 80386, which offered a 32-bit "protected mode" in which a single segment could cover the entire 4 GiB address space, so a flat memory model with plain 32-bit pointers was enough. He wrote Linux from the outset to use protected mode only, no weird segment stuff, which is why you don't have near and far pointer support in Linux (and why no company designing a new architecture will ever go back to them if they want Linux support). And they ate Robin's minstrels, and there was much rejoicing. (Yay...)
In some architectures, a pointer which can point to every object in the system will be larger and slower to work with than one which can point to a useful subset of things. Many people have given answers related to the 16-bit x86 architecture. Various types of pointers were common on 16-bit systems, though near/far distinctions could reappear on 64-bit systems, depending upon how they're implemented (I wouldn't be surprised if many development systems go to 64-bit pointers for everything, despite the fact that in many cases that will be very wasteful).
In many programs, it's pretty easy to subdivide memory usage into two categories: small things which together total a fairly small amount (64K on a 16-bit system, or 4 GB on a 64-bit one) but will be accessed often, and larger things which may total a much larger quantity but need not be accessed as often. When an application needs to work with part of an object in the "large things" area, it copies that part to the "small things" area, works with it, and if necessary writes it back.
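A sketch of that copy-in / work / copy-back pattern (fetch_chunk(), process(), and store_chunk() are hypothetical helpers, not a real API):

    #define CHUNK 512

    static char workspace[CHUNK];        /* the small, frequently used area */

    void fetch_chunk(long rec, char *dst, unsigned n);        /* hypothetical */
    void process(char *buf, unsigned n);                      /* hypothetical */
    void store_chunk(long rec, const char *src, unsigned n);  /* hypothetical */

    void edit_record(long record_no)
    {
        fetch_chunk(record_no, workspace, CHUNK);  /* copy in from big storage */
        process(workspace, CHUNK);                 /* work on the local copy   */
        store_chunk(record_no, workspace, CHUNK);  /* write the result back    */
    }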
Some programmers gripe at having to distinguish between "near" and "far" memory, but in many cases making such distinctions can allow compilers to produce much better code.
(Note: even on many 32-bit systems, certain areas of memory can be accessed directly without extra instructions, while other areas cannot. If, for example, on a 68000 or an ARM, one keeps a register pointing at global variable storage, it will be possible to directly load any variable within the first 32K (68000) or 2K (ARM) of that register. Fetching a variable stored elsewhere will require an extra instruction to compute the address. Placing more frequently-used variables in the preferred regions, and letting the compiler know, allows for more efficient code generation.)
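A rough illustration of that last point in portable C (hypothetical names; whether the compiler actually dedicates a base register depends on the target and toolchain):

    /* Grouping frequently-used globals in one struct lets the compiler reach
       them all with short base+offset addressing from a single register --
       the same idea the near/far split encoded. */
    struct hot_state {
        long counter;
        int  flags;
        char buf[64];
    };

    static struct hot_state hot;        /* small, frequently accessed data */

    void tick(void)
    {
        struct hot_state *base = &hot;  /* one base address...                */
        base->counter++;                /* ...then cheap base+offset accesses */
        base->flags |= 1;
    }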