views: 177

answers: 3

What is the difference between the usual pointers we normally use, near pointers, and far pointers? Is there any practical use for near and far pointers on present-day C/C++ systems? Any practical scenario that necessitates these specific pointers rather than other C/C++ constructs would be very helpful.

+9  A: 

The `near` and `far` keywords have their origin in the segmented memory model that Intel used back then. A near pointer could only access a block of memory around 64 KB in size, called a segment, whereas a far pointer could go outside that range; it consisted of a segment plus an offset into that segment. Near pointers were much faster than far pointers, so in some contexts it paid off to use them.
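
To make the arithmetic concrete, here is a minimal sketch in plain, portable C; the only assumption is the 8086 rule that the physical address is formed as segment * 16 + offset:

```c
#include <stdio.h>

/* Sketch of real-mode (8086) address formation: a far pointer is a 16-bit
   segment plus a 16-bit offset, combined into a 20-bit physical address. */
static unsigned long linear_address(unsigned short segment, unsigned short offset)
{
    return ((unsigned long)segment << 4) + offset;  /* segment * 16 + offset */
}

int main(void)
{
    /* Different segment:offset pairs can name the same physical byte. */
    printf("0x1234:0x0005 -> 0x%05lX\n", linear_address(0x1234, 0x0005));
    printf("0x1000:0x2345 -> 0x%05lX\n", linear_address(0x1000, 0x2345));
    return 0;  /* both lines print 0x12345 */
}
```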

Nowadays, with virtual memory, near and far pointers have no use.

EDIT: Sorry if I'm not using the correct terms, but this is how I remember it from working with this stuff back in the day :-)

Anders K.
Nowadays, "near" pointers are still useful -- in fact, that's the kind of pointer that we just call a "pointer" now. It's still possible to create a far pointer, but it's nearly useless in most 32- or 64-bit OSes.
cHao
@cHao: Actually, no. In C and C++ they are called __pointers, and only pointers__. It was just one processor architecture that forced the compilers tailored to it to introduce those non-standard extensions.
sbi
@sbi: They were common enough to be a de-facto standard, ISO and ANSI be damned. Thank gawd they're gone, but while they were around, every useful C and C++ compiler in the x86 world (read: one of the most common and most important architectures in existence) had to have them. That they weren't in the ISO/ANSI standards doesn't make them any less important, or any less "standard" in the real world.
cHao
@cHao: I disagree. Every useful compiler back then had a "huge" memory model, and if you compiled with that, the `far` keyword was completely unnecessary and you could basically just write sane C code and pretend you had a halfway-usable amount of linear memory.
R..
@R.. Except that doing so would lead to pointers being a lot slower, as (1) they were far pointers anyway, whether you needed one or not, and (2) there was always some adjustment going on to preserve that illusion of a huge chunk of linear memory. I'd personally think the "huge" memory model to be the last of last resorts for anyone who really had to care about performance (which, back then, was just about everyone). And either way, the `far` keyword was there, and any 16-bit compiler that didn't have it was 'substandard'.
cHao
And I'd think writing `far` in code would be the last of last resorts for anyone who put any worth on portable code, which sadly was the vast minority of x86 coders...
R..
Portability isn't worth sacrificing that kind of performance, especially when you're writing code *for a PC*. Almost every PC on earth runs x86. There simply weren't other platforms worth caring about at the time, and even today it's still a stretch (though 32-bit x86 CPUs leveled the playing field, making segments less blatant and almost useless). If your program is for DOS and/or Win3.x and/or some DOS extender and/or whatever other 16-bit environment on x86, and you don't take that platform into account, your code will be slow as balls.
cHao
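
To ground the thread above for readers who never used a 16-bit compiler, here is a hedged sketch of what those declarations looked like. Note that `near`, `far`, and `huge` were never standard C or C++; they were extensions in the 16-bit Borland and Microsoft compilers (sometimes spelled `__near`/`__far`/`__huge`), and none of this compiles on a modern toolchain.

```c
/* Non-standard 16-bit x86 compiler extensions (Borland/Microsoft era).
   Historical sketch only; this will not compile today. */

char near *np;  /* 16-bit offset only: cheap, but confined to the current
                   64 KB segment */
char far  *fp;  /* 32-bit segment:offset: can point anywhere in memory, but
                   pointer arithmetic touches only the offset, so it wraps
                   around within one segment */
char huge *hp;  /* like far, but the compiler normalizes the segment:offset
                   pair after arithmetic, so it can traverse objects larger
                   than 64 KB -- at a run-time cost */
```
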
+1  A: 

Anders has the gist, but for more detail Wikipedia has a pretty good article on this: http://en.wikipedia.org/wiki/Intel_Memory_Model

jefflub
A: 

During my studies we used far pointers to gain direct access to video memory. That was much faster than using print functions to put something on the screen.
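
The trick usually looked something like the sketch below. This assumes a 16-bit DOS compiler such as Turbo C, whose non-standard `<dos.h>` provides the `MK_FP()` macro; the color text-mode screen sits at segment 0xB800, with each cell being a character byte followed by an attribute byte. It will not build or run on a modern protected-mode OS.

```c
#include <dos.h>  /* MK_FP() -- Borland/Turbo C only, not standard C */

int main(void)
{
    /* Far pointer to the first cell of color text-mode video memory. */
    char far *video = (char far *)MK_FP(0xB800, 0);

    video[0] = 'A';   /* character in the top-left cell */
    video[1] = 0x1F;  /* attribute byte: bright white on blue */
    return 0;
}
```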

kist