What is the maximum size of buffer that `memcpy` and other such functions can handle? Is this implementation dependent? Is it restricted by the `size_t` argument that gets passed in?
This is entirely implementation dependent.
This depends on the hardware as much as anything, but also on the age of the compiler. For anyone with a reasonably modern compiler (meaning anything based on a standard from the early 90's or later), the size argument is a `size_t`. This can reasonably be the largest 16-bit unsigned, the largest 32-bit unsigned, or the largest 64-bit unsigned, depending on the memory model the compiler targets. In that case, you just have to find out how big a `size_t` is in your implementation. However, for very old compilers (that is, before ANSI C, and perhaps for some early versions of ANSI C), all bets are off.
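One quick way to check, as a minimal sketch assuming a C99 compiler (needed for the `%zu` length modifier and for `SIZE_MAX` from `stdint.h`):

```c
#include <stdio.h>
#include <stdint.h>   /* SIZE_MAX */

int main(void)
{
    /* How wide is size_t here, and what is the largest value it holds? */
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX       = %zu\n", (size_t)SIZE_MAX);
    return 0;
}
```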
On the standards side, looking at Cygwin and Solaris 7, for example, the size argument is a `size_t`. Looking at an embedded system that I have available, the size argument is an `unsigned` (meaning 16-bit unsigned). (The compiler for this embedded system was written in the 80's.) I also found a web reference to one ANSI C implementation where the size parameter is an `int`.
You may want to see this article on `size_t`, as well as the follow-up article about a misfeature of some early GCC versions in which `size_t` was erroneously signed.
In summary, for almost everyone, `size_t` will be the correct type to use. For those few using embedded systems or legacy systems with very old compilers, however, you need to check your man page.
Implementation dependent, but you can look in the header file that you have to include before you can use `memcpy` (that is, `string.h`). The declaration will tell you (look for `size_t` or whatever else it uses).
And then you ask what `size_t` is; well, that's the implementation dependent part.
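For reference, the declaration in `string.h` looks like this (the C99 form; older C89 headers spell it without `restrict`):

```c
#include <stddef.h>   /* size_t */

/* The C99 prototype for memcpy as declared in <string.h>. */
void *memcpy(void * restrict s1, const void * restrict s2, size_t n);
```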
Functions normally use a `size_t` to pass a size as a parameter. I say normally because `fgets()` uses an `int` parameter, which in my opinion is a flaw in the C standard.
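For contrast, here is a minimal sketch of that wart; the cast is only there to make the narrowing conversion visible:

```c
#include <stdio.h>

int main(void)
{
    char buf[128];

    /* sizeof yields a size_t, but fgets declares its size parameter
       as int, so the value is narrowed on the way in. */
    if (fgets(buf, (int)sizeof buf, stdin) != NULL)
        printf("read: %s", buf);
    return 0;
}
```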
`size_t` is defined as a type which can contain the size (in bytes) of any object you could access. Generally it's a typedef of `unsigned int` or `unsigned long`. That's why the values returned by the `sizeof` operator are of type `size_t`.
So 2^(sizeof(size_t) * CHAR_BIT) gives you an upper bound on the amount of memory that your program could handle, though it's certainly not a tight one. (`CHAR_BIT` is defined in `limits.h` and yields the number of bits contained in a `char`.)
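A small sketch that evaluates this bound; note that `SIZE_MAX` equals 2^(sizeof(size_t) * CHAR_BIT) - 1 whenever `size_t` has no padding bits, which is the common case:

```c
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
    /* Number of value bits in size_t, assuming no padding bits. */
    unsigned bits = (unsigned)(sizeof(size_t) * CHAR_BIT);

    printf("size_t is %u bits wide\n", bits);
    printf("largest representable size: %zu bytes\n", (size_t)SIZE_MAX);
    return 0;
}
```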
Right, you cannot copy areas that are greater than 2^(sizeof(size_t) * 8) bytes. But that is nothing to worry about, because you cannot allocate more space either: `malloc` also takes the size as a `size_t` parameter.
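In practice the thing to guard against is not the limit itself but wrapping around it when computing a size. A hedged sketch (`dup_array` is a hypothetical helper, not a standard function):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Allocate and copy count elements of elem_size bytes each. */
void *dup_array(const void *src, size_t count, size_t elem_size)
{
    /* count * elem_size can silently wrap around a size_t;
       check before multiplying. */
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;   /* total size would not fit in a size_t */

    void *dst = malloc(count * elem_size);
    if (dst != NULL)
        memcpy(dst, src, count * elem_size);
    return dst;
}
```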
There is also an issue related to what `size_t` can represent versus what your platform will actually allow a process to address.
Even with virtual memory on a 64-bit platform, you are unlikely to be able to call `memcpy()` with sizes of more than a few TB or so this week, and even then that is a pretty hot machine. It is hard to imagine what a machine with a fully populated 64-bit address space would even look like.
Never mind the embedded systems with only a few KB of total writable memory, where it can't make sense to attempt to `memcpy()` more data than there is RAM, regardless of the definition of `size_t`. (And think about what would have just happened to the stack holding the return address from that call if you did.)
Or systems where the virtual address space seen by a process is smaller than the physical memory installed. This is actually the case for a Win32 process running on a Win64 platform, for example. (I first encountered this under the time-sharing OS TSX-11 running on a PDP-11 with 4 MB of physical memory and a 64 KB virtual address space in each process. 4 MB of RAM was a lot of memory then, and the IBM PC didn't exist yet.)