In an embedded application, we have a table describing the various address ranges that are valid on our target board. This table is used to set up the MMU.
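For illustration, the table looks roughly like this (the type and region names below are made up for this question, not our real code):

```c
/* Illustrative only -- real table, names and addresses differ. */
typedef struct {
    unsigned long start;      /* physical base address            */
    unsigned long size;       /* region length in bytes           */
    unsigned int  cacheable;  /* 1 = cacheable, 0 = non-cacheable */
} mmu_region_t;

static const mmu_region_t board_regions[] = {
    { 0x00000000, 0x04000000, 1 },  /* SDRAM        - cacheable     */
    { 0x10000000, 0x00100000, 0 },  /* peripherals  - non-cacheable */
    { 0x20000000, 0x00800000, 0 },  /* FPGA window  - non-cacheable */
    { 0x30000000, 0x01000000, 0 },  /* flash        - non-cacheable */
};
```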
The RAM address range is marked as cacheable, but the other regions are marked as non-cacheable. Why is that?
Any memory region used for DMA or other hardware interactions should not be cached.
This is done so that the processor does not use stale values due to caching. When you access (regular) cached RAM, the processor can "remember" the value that you accessed. The next time you look at that same memory location, the processor will return the value it remembers without looking in RAM. This is caching.
If the content of the location can change without the processor knowing, as could be the case if you have a memory-mapped device (an FPGA returning some data packets, for example), the processor could return the value it "remembered" from last time, which would be wrong.
To avoid this problem, you mark that address space as non-cacheable. This ensures the processor does not try to remember the value.
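As a concrete sketch of the problem (the address and register layout here are hypothetical): suppose the FPGA exposes a status register at 0x20000000. `volatile` only stops the compiler from keeping the value in a CPU register; it is the non-cacheable MMU attribute that stops the data cache from serving a stale copy.

```c
#include <stdint.h>

/* Hypothetical FPGA status register at a made-up address. The region
 * containing it must be mapped non-cacheable in the MMU table, or a
 * cached (stale) value could be returned instead of the real one.   */
#define FPGA_STATUS       (*(volatile uint32_t *)0x20000000u)
#define STATUS_PKT_READY  (1u << 0)

static void wait_for_packet(void)
{
    /* Each read must really go out on the bus; if this region were
     * cached, the loop could spin forever on a remembered 0.        */
    while ((FPGA_STATUS & STATUS_PKT_READY) == 0) {
        /* poll */
    }
}
```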
If a memory region is accessed by both hardware and software simultaneously (hardware configuration registers, for example, or a scatter-gather list for DMA), that region must be defined as non-cached. For the actual DMA buffer, the memory can be defined as cached, and in most cases it is advisable to cache it so the application gets speedy access to that buffer. It is the driver's responsibility to flush/invalidate the cache before passing the buffer to the DMA engine or to the application, as in the sketch below.
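A minimal sketch of that driver responsibility for a receive buffer. The cache-maintenance and DMA functions here are placeholders; the real names depend on your CPU/RTOS (for example, CMSIS provides SCB_CleanDCache_by_Addr / SCB_InvalidateDCache_by_Addr, and Linux drivers would use the DMA API instead):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical cache-maintenance and DMA primitives (platform-specific). */
extern void dcache_clean_range(void *addr, size_t len);      /* write dirty lines back to RAM */
extern void dcache_invalidate_range(void *addr, size_t len); /* discard cached copies         */
extern void dma_start_rx(void *buf, size_t len);
extern void dma_wait_done(void);

#define BUF_LEN 1024
static uint8_t rx_buf[BUF_LEN];   /* buffer lives in cached RAM, as suggested above */

void receive_one_buffer(void)
{
    /* Before handing the buffer to the DMA engine, write back any dirty
     * lines so an eviction cannot overwrite the incoming data.          */
    dcache_clean_range(rx_buf, BUF_LEN);

    dma_start_rx(rx_buf, BUF_LEN);
    dma_wait_done();

    /* Before the application reads the buffer, drop the stale cached
     * copies so the CPU sees what the DMA engine actually wrote to RAM. */
    dcache_invalidate_range(rx_buf, BUF_LEN);

    /* rx_buf is now safe to use through the normal cached mapping. */
}
```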
Some areas, like flash, can be read in one cycle, so they do not need to be cached.