If you are running on a typical embedded processor running Linux without virtual memory, it is quite likely your process will be terminated by the operating system's out-of-memory killer before new ever fails, if you allocate too much memory.
If you are running your program on a machine with less physical memory than the maximum virtual address space (2 GB for a 32-bit process on standard Windows), you will find that once you have allocated an amount of memory approximately equal to the available physical memory, further allocations will succeed but will cause paging to disk. This will bog your program down, and you may never actually reach the point of exhausting virtual memory, so you may not get an exception thrown at all.
If you have more physical memory than virtual address space and you simply keep allocating, you will get an exception once virtual address space is exhausted to the point where the block size you are requesting can no longer be satisfied.
If you have a long-running program that allocates and frees blocks of many different sizes, including small blocks, with a wide variety of lifetimes, the virtual address space may become fragmented to the point where new cannot find a single contiguous block large enough to satisfy a request; then new will throw an exception. Similarly, a memory leak that occasionally leaks a small block at a random location will eventually fragment memory to the point where even a modestly sized allocation fails and an exception is thrown.
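The following minimal sketch illustrates the fragmentation scenario. It allocates many small blocks, frees every other one so that the holes cannot be coalesced, and then attempts one large allocation. On a 32-bit or otherwise constrained address space that last request can throw std::bad_alloc; on a typical 64-bit machine it will usually still succeed. The block sizes and counts are arbitrary choices for illustration, not anything from the original text.

```cpp
#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    try {
        // Fill a large part of the address space with small blocks.
        for (int i = 0; i < 100000; ++i)
            blocks.push_back(new char[64 * 1024]);       // 64 KB each
    } catch (const std::bad_alloc&) {
        // Address space already exhausted; fine for this demonstration.
    }

    // Free every other block: plenty of memory is now free, but only in
    // scattered 64 KB holes that cannot be merged into one large run.
    for (std::size_t i = 0; i < blocks.size(); i += 2) {
        delete[] blocks[i];
        blocks[i] = nullptr;
    }

    try {
        char* big = new char[512u * 1024 * 1024];        // one contiguous 512 MB request
        delete[] big;
        std::puts("large allocation succeeded");
    } catch (const std::bad_alloc&) {
        std::puts("large allocation failed: address space fragmented or exhausted");
    }

    for (char* p : blocks) delete[] p;                   // clean up the rest
}
```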
If you have a program error that accidentally passes a huge array size to new[], new will fail and throw an exception. This can happen, for example, if the array size is actually some sort of random byte pattern, perhaps derived from uninitialized memory or a corrupted communication stream.
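As a hedged illustration of that failure mode, the sketch below pretends that a length field read from a corrupted message header turned out to be an enormous value and passes it to new[]. The catch clauses turn what would otherwise be an unhandled exception into a diagnosable error; std::bad_array_new_length (a C++11 type derived from std::bad_alloc) is caught first because it is the more specific exception.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <new>

int main() {
    // Pretend this element count came from uninitialized memory or a
    // corrupted communication stream rather than a legitimate message.
    std::size_t count = SIZE_MAX / 2;
    try {
        int* data = new int[count];          // cannot possibly be satisfied
        delete[] data;
    } catch (const std::bad_array_new_length& e) {
        std::printf("bad array length for count %zu: %s\n", count, e.what());
    } catch (const std::bad_alloc& e) {
        std::printf("allocation failed for count %zu: %s\n", count, e.what());
    }
}
```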
All of the above applies to the default global new. However, you can replace global new, and you can provide class-specific new. These too can throw, and what that means depends on how you programmed them. It is usual for new to include a loop that attempts every available avenue for getting the requested memory, and to throw only when all of them are exhausted. What you do then is up to you.
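A minimal sketch of a class-specific operator new is shown below, assuming a hypothetical Message class served from a small fixed pool. It throws std::bad_alloc once the pool is exhausted; what the caller does with that exception is, as said above, up to you. A real implementation would also maintain a free list instead of ignoring operator delete.

```cpp
#include <cstddef>
#include <new>

class Message {
public:
    static void* operator new(std::size_t size) {
        if (used_ + size > sizeof(pool_))
            throw std::bad_alloc();          // pool exhausted: report it the standard way
        void* p = pool_ + used_;
        used_ += size;                       // simple bump allocation
        return p;
    }
    static void operator delete(void*) noexcept {
        // Bump allocator: individual frees are ignored in this sketch.
    }
private:
    alignas(std::max_align_t) static char pool_[4096];
    static std::size_t used_;
    char payload_[64];                       // some per-object data
};

alignas(std::max_align_t) char Message::pool_[4096];
std::size_t Message::used_ = 0;
```

With this in place, plain `new Message` expressions draw from the pool and start throwing once roughly 64 objects (4096 / 64 bytes) have been allocated.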
You can catch an exception from new and use the opportunity it provides to document the program state around the time of the failure. You can "dump core". If you have a circular instrumentation buffer allocated at program startup, you can write it to disk before you terminate the program. The termination can then be graceful, which is an advantage over simply letting the exception go unhandled.
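A sketch of that pattern might look like the following; the g_traceBuffer, dumpTraceAndExit, and trace.dump names are hypothetical stand-ins for whatever instrumentation the program actually keeps.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

static std::vector<char> g_traceBuffer;      // filled by the program as it runs

static void dumpTraceAndExit() {
    if (std::FILE* f = std::fopen("trace.dump", "wb")) {
        std::fwrite(g_traceBuffer.data(), 1, g_traceBuffer.size(), f);
        std::fclose(f);
    }
    std::exit(EXIT_FAILURE);                 // deliberate, orderly shutdown
}

int main() {
    g_traceBuffer.resize(1 << 20);           // 1 MB reserved while memory is plentiful
    try {
        // ... application work that may allocate heavily ...
        // Deliberately huge request for the demo; whether it throws depends
        // on the platform's overcommit policy.
        char* p = new char[1ull << 40];
        delete[] p;
    } catch (const std::bad_alloc&) {
        dumpTraceAndExit();                  // document the state, then stop cleanly
    }
}
```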
I have not personally seen an example where additional memory could be obtained after the exception. One possibility, however, is the following. Suppose you have a memory allocator that is highly efficient but not good at reclaiming free space; for example, it might be prone to fragmentation of its free space, with adjacent free blocks left uncoalesced. You could use an allocation failure from new, intercepted by a new_handler, to run a compaction pass over the free space before new retries.
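The sketch below shows that new_handler pattern, with the release of a reserve block standing in for the compaction pass (the names outOfMemoryHandler and g_reserve are mine, and the 16 MB reserve is arbitrary). When operator new fails it calls the installed handler in a loop: if the handler can make memory available and return, the allocation is retried; when nothing is left, the handler throws std::bad_alloc so the original new finally fails.

```cpp
#include <cstdio>
#include <new>

static char* g_reserve = nullptr;            // emergency margin acquired at startup

static void outOfMemoryHandler() {
    if (g_reserve) {
        std::fputs("new_handler: releasing reserve and retrying\n", stderr);
        delete[] g_reserve;                  // stand-in for compacting/coalescing free space
        g_reserve = nullptr;                 // returning lets the failed new try again
    } else {
        throw std::bad_alloc();              // nothing left to reclaim: let the caller see it
    }
}

int main() {
    g_reserve = new char[16 * 1024 * 1024];  // 16 MB safety margin
    std::set_new_handler(outOfMemoryHandler);
    // ... normal program work; any new that fails will first invoke the handler ...
}
```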
Serious programs should treat memory as a potentially scarce resource, control its allocation as much as possible, monitor its availability, and react appropriately if something seems to have gone dramatically wrong. For example, you could argue that in any real program there is a fairly small upper bound on the size parameter ever legitimately passed to the memory allocator, and that anything larger should trigger error handling whether or not the request can be satisfied. You could also argue that the rate of memory growth of a long-running program should be monitored, and that if it can reasonably be predicted that the program will exhaust available memory in the near future, an orderly restart of the process should be begun.
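One way to enforce such an upper bound is sketched below, under assumptions of my own: the 64 MB cap and the g_totalAllocated counter are arbitrary illustrations, and a production replacement of global operator new would also honour the new_handler protocol and account for deallocations.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

static std::atomic<std::size_t> g_totalAllocated{0};
static constexpr std::size_t kSaneRequestLimit = 64 * 1024 * 1024;   // 64 MB, illustrative

void* operator new(std::size_t size) {
    if (size > kSaneRequestLimit)
        throw std::bad_alloc();      // almost certainly a bug, not a genuine need
    if (size == 0)
        size = 1;                    // new must return a unique pointer even for zero bytes
    if (void* p = std::malloc(size)) {
        g_totalAllocated += size;    // crude growth metric a monitoring thread could read
        return p;
    }
    // A full replacement would loop calling the installed new_handler here.
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);                    // the unsized form cannot decrement the total
}
```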