I am missing something when it comes to understanding the need for highmem to address more than 1GB of RAM. Could someone point out where I go wrong? Thanks!

What I know:

  • 1 GB of a process's virtual memory (the high region of its address space) is reserved for kernel operations. User space can use the remaining 3 GB. This is the 3/1 split.

  • The kernel's virtual memory subsystem maps (contiguous) virtual memory pages to physical pages (RAM).

What I don't know:

  • What operations use kernel virtual memory? I suppose things like kmalloc(...) in kernel space would use kernel virtual memory (see the sketch just after this list).

  • I would think that 4 GB of RAM could be used under this scheme. I don't get why the kernel's 1 GB of virtual space is the limiting factor when addressing physical memory. This is where my understanding breaks down. Please advise.
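
To be concrete, this is the kind of kernel-space allocation I mean; a minimal sketch of a hypothetical module (the names and sizes are made up, not code I'm actually running):

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/slab.h>      /* kmalloc(), kfree() */

    static void *example_buf;    /* hypothetical buffer, for illustration only */

    static int __init example_init(void)
    {
        /* The pointer returned here is a kernel virtual address,
         * i.e. it lives in the reserved upper 1 GB of the split. */
        example_buf = kmalloc(128, GFP_KERNEL);
        if (!example_buf)
            return -ENOMEM;
        pr_info("kmalloc() returned kernel virtual address %p\n", example_buf);
        return 0;
    }

    static void __exit example_exit(void)
    {
        kfree(example_buf);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");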

I've been reading this (http://kerneltrap.org/node/2450), which is great. But it doesn't quite address my question to my liking.

Thanks for your input, Dave

A: 
  1. System calls, for example, use the kernel space.
  2. You can have 64 GB of physical RAM, but on 32-bit platforms the processor can only address 4 GB because of 32-bit virtual addressing. Actually, you can have 1 GB of RAM and 3 GB of swap, and virtual addressing will make it look like you have 4 GB. On 64-bit platforms, virtual addressing is practically unlimited.
Alexandru
Anacrolix: what?
Alexandru
1 GB of RAM and 3 GB of swap? That doesn't make any sense... on a 32-bit machine there's no requirement to have swap at all in order to use memory up to the addressing limit of your architecture.
Steven Schlansker
It was an example.
Alexandru
+2  A: 

Mapping 1 GB to the kernel in each process allows a process to switch to kernel mode without also performing a context switch (no change of address space is required). Responses to system calls such as read(), mmap() and others can then be processed directly in the calling process's address space.

If space for the kernel were not reserved in each process, switching to "kernel mode" in between executing user-space code would be more expensive, and the kernel would be unable to use the hardware MMU's (memory management unit's) virtual address mapping while servicing system calls.

Systems running a 32-bit kernel with more than 1 GB of physical memory can assign physical memory locations to ZONE_HIGHMEM (roughly everything above the 1 GB mark), and the kernel has to jump through hoops for certain operations that interact with that memory. The addition of PAE (Physical Address Extension) extends this problem by allowing up to 64 GB of physical memory, shrinking the proportion of memory that sits below the 1 GB mark relative to the regions allocated in ZONE_HIGHMEM.
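
To give a feel for those hoops, here is a rough sketch, assuming the classic kmap()/kunmap() interface, of how kernel code might touch a page that ended up in ZONE_HIGHMEM (illustrative only, not taken from any real driver):

    #include <linux/errno.h>
    #include <linux/gfp.h>       /* alloc_page(), __free_page() */
    #include <linux/highmem.h>   /* kmap(), kunmap() */
    #include <linux/mm.h>        /* struct page, PAGE_SIZE */
    #include <linux/string.h>

    /* Zero one page that may have come from ZONE_HIGHMEM. Such a page has
     * no permanent kernel virtual address, so a temporary mapping must be
     * created before the kernel can touch its contents. */
    static int zero_possible_highmem_page(void)
    {
        struct page *page = alloc_page(GFP_HIGHUSER); /* may be in ZONE_HIGHMEM */
        void *vaddr;

        if (!page)
            return -ENOMEM;

        vaddr = kmap(page);           /* map it into kernel virtual space... */
        memset(vaddr, 0, PAGE_SIZE);
        kunmap(page);                 /* ...and tear the mapping down again */

        __free_page(page);
        return 0;
    }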

Matt Joiner
Yes, the reasons for having kernel virtual memory are clear. But in the pre-HIGHMEM days, why was the kernel virtual memory space the limiting factor in physical memory addressing?
Dave
+4  A: 

The reason that kernel virtual space is a limiting factor on usable physical memory is that the kernel needs access to all physical memory, and the way it accesses physical memory is through kernel virtual addresses. The kernel doesn't use special instructions that allow direct access to physical memory locations: it has to set up page table entries for any physical ranges it wants to talk to.

In the "old style" scheme, the kernel set things up so that every process's page tables mapped virtual addresses from 0xC0000000 to 0xFFFFFFFF directly to physical addresses from 0x00000000 to 0x3FFFFFFF (these pages were marked so that they were only accessible in ring 0 - kernel mode). These are the "kernel virtual addresses". Under this scheme, the kernel could directly read and write any physical memory location without having to fiddle with the MMU to change the mappings.

Under the HIGHMEM scheme, the mappings from kernel virtual addresses to physical addresses aren't fixed - parts of physical memory are mapped in and out of the kernel virtual address space as the kernel needs access to that memory. This allows more physical memory to be used, but at the cost of having to constantly change the virtual-to-physical mappings, which is quite an expensive operation.
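
As a rough sketch of that map-in/use/map-out pattern (illustrative only; the exact kmap_atomic() signature has varied across kernel versions, and older kernels also required a km_type argument):

    #include <linux/highmem.h>   /* kmap_atomic(), kunmap_atomic() */
    #include <linux/mm.h>        /* struct page, PAGE_SIZE */
    #include <linux/string.h>

    /* Copy the contents of a page that may live in ZONE_HIGHMEM into a
     * buffer that is already mapped. The mapping only exists between the
     * kmap_atomic()/kunmap_atomic() pair, and the code must not sleep
     * while it is held. */
    static void copy_out_page(struct page *page, void *dst)
    {
        void *src = kmap_atomic(page);   /* temporarily wire up a mapping */

        memcpy(dst, src, PAGE_SIZE);
        kunmap_atomic(src);              /* and drop it again right away */
    }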

caf
So under the "old-scheme", any processes in kernel mode could address all of the physical memory (including memory not associated with that particular process)?
Dave
Yes, absolutely (and they still can - they just have to jump through hoops to do so sometimes). This is needed - for example, interrupts are serviced in the context of whatever process was running at the time the interrupt happened. The kernel is a little like a big shared library, except that its code runs at an elevated privilege level (and you can't just call into it - you need to go through a special entry point that raises the privilege level).
caf
So under the "old-scheme" when I do a kmalloc(...), it allocates kernel virtual memory, which really translates to physical memory--which won't ever get swapped out to disk?
Dave