What is the difference between these 2 terms in Operating System: swap and page?
Just different terms for pretty much the same thing. They both refer to an area of virtual memory that is (usually) stored on the hard drive.
*nix, et al. call it "swap"; Windows calls it a pagefile.
In Linux, etc., swap space is generally a separate partition. On Windows it is typically a file stored somewhere on the OS's filesystem.
Swap in Linux is a partition that is used for virtual memory. It contains pages, which are blocks of memory that can be swapped in and out of physical memory.
A page is a block of memory managed by the OS. On Linux you can find out the page size your kernel uses by entering:

$ getconf PAGESIZE
4096

4 KB is a pretty common page size.
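The same value can also be read programmatically; a minimal sketch in Python, using only the standard library's mmap and os modules (the os.sysconf spelling is POSIX-only):

```python
import mmap
import os

# The page size the kernel uses, as exposed to userspace;
# this is the same value `getconf PAGESIZE` prints.
page_size = mmap.PAGESIZE
same_size = os.sysconf("SC_PAGE_SIZE")  # POSIX systems only

# Page sizes are always a power of two (4096 on most x86 systems).
print(page_size)
```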
While a page refers to a fixed-size unit of memory, swap refers to moving those units around. If you want the details, try looking at All about Linux swap space.
Swapping and paging are orthogonal concepts. With paging, the (physical) memory is divided into small blocks called "frames", and the (logical) memory of each program is divided into blocks called "pages". Pages and frames have the same size; each page is then mapped to a frame. This mapping is performed via page tables. Paging solves fragmentation problems that were present with earlier memory-management schemes.
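The page-to-frame mapping described above can be sketched with a toy page table; the page size, table contents, and frame numbers below are invented purely for illustration:

```python
PAGE_SIZE = 4096  # bytes per page and per frame (illustrative)

# Toy page table for one process: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    """Split a logical address into (page, offset), then map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]  # on real hardware the MMU does this lookup
    return frame * PAGE_SIZE + offset

# Address 4100 is page 1, offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
print(translate(4100))
```

Note that only page numbers are translated; the offset within a page passes through unchanged, which is why pages and frames must be the same size.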
With swapping, parts of memory which are not in use are written to disk; this enables one to run several programs whose total memory consumption is greater than the amount of physical memory. When a program makes a request for a part of memory that was written to the disk, that part has to be loaded into memory. To make room for it, another part has to be written to the disk (effectively the two parts swap places - hence the name). This "extension" of physical memory is generally known as "virtual memory".
Modern systems use both paging and swapping, and pages are what is being swapped in and out of memory.
See: Paging and swapping
The issue of swapping and paging is often misunderstood. Swapping and paging are two totally different things.
Swapping was the first technology used in Unix System V. As physical memory fills up with processes, there is a problem: what happens when the system runs completely out of RAM? It "grinds to a halt"!
The conservation and correct management of RAM is very important, because the CPU can only work with data in RAM, after it has been loaded from the hard disk by the kernel. What happens when the mounting number and size of processes exceeds physical memory? Since only one process can ever execute at any one time (on a uniprocessor system), only that process really needs to be in RAM. However, organising that would be extremely resource-intensive, as multiple running processes are scheduled to execute on the processor very often (see the section called "Scheduler").
To address these issues the kernel advertises an abstract memory use to applications by advertising a virtual address space to them that far exceeds physical memory. An application may just request more memory and the kernel may grant it.
A single process may have allocated 100 MB of memory even though there may be only 64 MB of RAM in the system. The process will not need to access the whole 100 MB at the same time; this is where virtual memory comes in. [...]
In spite of the historical interchanging of these two terms, they indicate different things. Both are methods for moving data from memory to a secondary storage device, called a backing store (often a hard drive), but they use different methods of doing so.
Swapping involves moving a process's entire in-memory data to a range of space on the backing store, often to a swapfile or swap partition. The process goes from being in memory to swapped out entirely; there is no in-between. Obviously the process will need to be entirely idle for swapping to be at all worthwhile. The advantage of this is that it is relatively simple to grasp, and memory for a program is always allocated contiguously; the downside is that performance on a machine can become absolutely abysmal when the system ends up in a state where things are constantly swapping. The algorithm also involves repeatedly swapping in and out data that will not be used in the foreseeable future.
Paging attempts to solve these problems by taking physical memory and carving it up into things called "pages" of some fixed size; this is called the physical address space, due to the need to use physical addresses to access each block of memory. It also takes the memory space of each running process and carves it up into these same-sized pages.
Each program is presented with an environment by the OS, supported by modern hardware, which makes the program's memory footprint look like a single contiguous block of a very large amount of memory; this is called a logical address space.
However, each page of this contiguous block may be in memory, or it may be on the backing store. The operating system determines where each page is by consulting something called a "page table". If it finds the page the program has asked for is in memory somewhere, it will simply go to that page of memory and grab the data requested.
If it finds the page is not in memory, this causes a "page fault". The OS will suspend the process while it loads the requested page in from the backing store, and may in turn move another page from memory to the backing store to make room, based on some replacement algorithm. The backing store may be called a pagefile, or may still be called a swapfile or swap partition, leading to confusion about which system is being used. Whether it is a separate partition, or just a file, depends on the operating system.
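The fault-then-evict loop can be sketched with FIFO replacement, one of the simplest replacement algorithms (real kernels use approximations of LRU instead); the reference string below is a classic textbook example:

```python
from collections import deque

def count_faults(reference_string, num_frames):
    """Count page faults for a sequence of page accesses, FIFO eviction."""
    in_memory = deque()  # pages currently resident, oldest first
    faults = 0
    for page in reference_string:
        if page in in_memory:
            continue                  # page already resident: no fault
        faults += 1                   # page fault: load from backing store
        if len(in_memory) == num_frames:
            in_memory.popleft()       # evict the oldest page to make room
        in_memory.append(page)
    return faults

# With 3 frames this access pattern causes 9 faults.
print(count_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```

Interestingly, running the same string with 4 frames produces 10 faults, i.e. more memory can mean more faults under FIFO (Belady's anomaly), which is one reason real systems prefer LRU-like policies.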
There are certain parts of memory that aren't subject to being paged out. One of these is the paging code itself, and the parts of the kernel that handle things like page faults. Some operating systems, like MacOS, refer to this memory as "wired".
Modern day hardware has several devices that allow an operating system to support paging far more effectively. The most common of these is a Translation Lookaside Buffer, or TLB. This stores a sort of hardware page table cache, so that whenever a program needs to do a logical address to physical address translation, it doesn't have to go ask the operating system every time.
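The TLB's effect can be sketched as a small cache consulted before the page-table lookup; the page table and addresses here are invented for illustration:

```python
PAGE_SIZE = 4096
page_table = {0: 8, 1: 3, 2: 9}  # page -> frame (illustrative)
tlb = {}                          # small cache of recent translations
hits = misses = 0

def translate(logical_addr):
    """Check the TLB first; fall back to the full page table on a miss."""
    global hits, misses
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page in tlb:
        hits += 1                 # TLB hit: no page-table walk needed
    else:
        misses += 1               # TLB miss: walk the table, cache the result
        tlb[page] = page_table[page]
    return tlb[page] * PAGE_SIZE + offset

# Pages touched: 0, 0, 1, 0, 2 -> repeated pages hit the TLB.
for addr in (0, 100, 4096, 200, 8192):
    translate(addr)
print(hits, misses)
```

Because programs tend to touch the same few pages repeatedly (locality of reference), even a small TLB absorbs most translations.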
Modern operating systems also take advantage of paging by lazily loading parts of the processes they are running. For instance, if you start up Microsoft Word, instead of loading the entire program into memory, the operating system will instead load only those parts of the program it needs into memory, and will grab the other parts of the program only as it needs them. This has trade-offs as well between memory footprint, boot speed, and how often delays occur within the program as new parts need to be loaded.
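This demand loading can be observed from userspace with mmap: mapping a file into the address space is cheap, and each page is only read from disk when first touched (causing a page fault the kernel services transparently). A minimal Python sketch using a throwaway temp file:

```python
import mmap
import os
import tempfile

# Create a file spanning two pages; nothing is read into memory yet.
data = b"A" * mmap.PAGESIZE + b"B" * mmap.PAGESIZE
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

with open(path, "rb") as f:
    # Mapping only sets up page-table entries, it does not read the data.
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touching a byte in the second page triggers a page fault, and the
    # kernel reads just that page in from disk on demand.
    first_b = mapped[mmap.PAGESIZE]
    mapped.close()

os.unlink(path)
print(first_b)
```

From the program's point of view the whole file is "in memory" the instant mmap returns; the on-demand disk reads are invisible except as latency.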
Anyway, maybe more than you are looking for, but hopefully interesting.
A shortcut explanation: OS memory manager swaps pages in memory. Pages are memory blocks. Swapping is writing them to disk to open up more memory and bringing them back whenever they are accessed again.