I was asked this question in an interview. Please tell me the answer:

You have no documentation of the kernel. You only know that your kernel supports paging. How will you find the page size? There is no flag or macro available that can tell you the page size.

I was given the hint that you can use time to get the answer. I still have no clue.

A: 

It looks to me like a question about 'how does paging actually work'. They want you to explain the impact that changing the page size would have on the execution of the system.

I am a bit rusty on this stuff, but when a page is full, the system starts page swapping, which slows everything down. So you want to run something that fills up memory to different sizes and measure the time it takes to do a task. At some point there will be a jump, where the time taken to do the task suddenly increases.

Like I said, I am a bit rusty on the implementation details. But I'm pretty sure that is the shape of the answer they were after.

Jon
At its most basic, paging has nothing to do with swapping. It's about splitting physical memory into fixed-size chunks so that each chunk can have its own attributes (e.g. writable, executable) and the chunks can be arranged non-contiguously in virtual memory (as opposed to physical memory). We can then take advantage of page faults to implement swapping.
wj32
Note that the system starts swapping when physical RAM is full (or earlier), not when a single page is.
Borealid
+1  A: 

Run code like the following (this version is a compilable sketch; the constants are illustrative and should be tuned for your machine):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAXPOSSIBLEPAGESIZE (64 * 1024) /* assumed upper bound on the page size */
#define VERYVERYBIGSIZE     4096        /* number of strided writes per run */
#define SEARCHGRANULARITY   256         /* step between candidate strides */

int main(void) {
    struct timespec starttime, endtime;
    for (size_t stride = 1; stride < MAXPOSSIBLEPAGESIZE; stride += SEARCHGRANULARITY) {
        char *somemem = malloc(VERYVERYBIGSIZE * stride); /* one chunk of size "stride" per write below */
        clock_gettime(CLOCK_MONOTONIC, &starttime);
        for (char *pos = somemem; pos < somemem + VERYVERYBIGSIZE * stride; pos += stride)
            *pos = 'Q'; /* just write something to force the page back into physical memory */
        clock_gettime(CLOCK_MONOTONIC, &endtime);
        printf("stride %zu, runtime %f s\n", stride, (endtime.tv_sec - starttime.tv_sec) + (endtime.tv_nsec - starttime.tv_nsec) / 1e9);
        free(somemem);
    }
    return 0;
}

Graph the results with stride on the X axis and runtime on the Y axis. There should be a knee at stride = pagesize, beyond which the runtime stops growing.

This works by incurring a number of page faults. Once stride surpasses pagesize, each write touches a distinct page, so the number of faults per run ceases to increase and the program's performance no longer degrades noticeably.
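To see why the fault count levels off, here is a small illustrative helper (expected_faults is my name, not part of the answer), computing the idealized count under the assumption that every first touch of a non-resident page faults:

/* Idealized fault count for one run of the loop above: below pagesize,
 * pagesize/stride writes share each page, so faults grow with stride;
 * at or above pagesize, every write lands on a fresh page. */
size_t expected_faults(size_t touches, size_t stride, size_t pagesize) {
    return stride < pagesize ? touches * stride / pagesize : touches;
}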

If you want to be cleverer, you could exploit the fact that the mprotect system call must work on whole pages. Try it with something smaller and you'll get an error (a sketch of this probe follows). I'm sure there are other "holes" like that too - but the code above will work on any system which supports paging and where disk access is much more expensive than RAM access. That would be every semi-normal modern system.
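As a rough illustration of that mprotect trick on a POSIX system (my sketch, assuming MAP_ANONYMOUS is available and the page size is a power of two smaller than the mapping):

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* mmap returns page-aligned memory, so base + off is page-aligned
     * only when off is a multiple of the page size. */
    size_t len = 1 << 22; /* 4 MiB, assumed to span many pages */
    char *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return 1;
    for (size_t off = 256; off < len; off *= 2) {
        /* mprotect rejects unaligned addresses with EINVAL; the first
         * power-of-two offset it accepts is the page size. */
        if (mprotect(base + off, off, PROT_READ) == 0) {
            printf("page size looks like %zu bytes\n", off);
            break;
        }
    }
    munmap(base, len);
    return 0;
}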

Borealid
mprotect is a good point for Unix systems, but it is not generic. @Borealid @Jon I gave a similar answer to the one you have explained here, based on page faults and performance. I even tried to relate the read/write operations to time, in terms of blocks written, but it looks like those guys weren't impressed. :(
Arpit
A: 

Whatever answer they were expecting, it would almost certainly be a brittle solution. For one thing, you can have multiple page sizes, so any answer you got for one small allocation may be irrelevant for the next multi-megabyte allocation (see things like Linux's huge page support).

I suspect the question was more aimed at seeing how you approached the problem rather than the final solution you came up with.

By the way, this question can't be about Linux, because there you do have documentation, as well as POSIX compliance, under which you just call sysconf(_SC_PAGE_SIZE).
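For reference, the POSIX call looks like this:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    long pagesize = sysconf(_SC_PAGE_SIZE); /* POSIX; _SC_PAGESIZE is a synonym */
    printf("page size: %ld bytes\n", pagesize);
    return 0;
}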

stsquad
@stsquad Maybe you are right.
Arpit