views: 3283
answers: 7

Dear all, Here are my questions...

1- Do realloc and memcpy copy the entries of one array to another faster than just iterating over each element, which is O(N)? If the answer is yes, what do you think their complexity is?

2- If the newly allocated size is smaller than the original size, does realloc copy the entries somewhere else, or does it just leave them where they are and decrease the size of the array?

Thanks so much.

+3  A: 
  1. There is absolutely no way to copy N items faster than O(N). However, memcpy might be able to copy multiple items at once, or use special processor instructions, so it can still be faster than a loop you write yourself (see the sketch after this list).
  2. I don't know for sure, but I'd assume that the memory is completely reallocated. That's the safest assumption, and it's probably implementation dependent anyway.
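
(Not from the answer above — just a rough sketch of the "multiple items at once" idea. The function name copy_words and the 8-byte chunk size are assumptions; real memcpy implementations also handle alignment, overlap rules, and CPU-specific instructions.)

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Sketch only: copy 8-byte chunks, then the leftover bytes.
       Still O(N), but roughly 1/8th as many loop iterations. */
    static void copy_words(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n >= sizeof(uint64_t)) {
            uint64_t w;
            memcpy(&w, s, sizeof w);   /* fixed-size memcpy: compilers lower this to one load  */
            memcpy(d, &w, sizeof w);   /* ...and one store, so it's a portable "word copy"     */
            d += sizeof w; s += sizeof w; n -= sizeof w;
        }
        while (n--) *d++ = *s++;       /* tail bytes */
    }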
Mark Ransom
+3  A: 
  1. The performance of memcpy can't really be better than O(N), but it can be optimized so that it outperforms manual copying; for example, it may be able to copy 4 bytes in the time it takes you to copy 1. Many memcpy implementations are written in assembly using optimized instructions that copy several elements at a time, which is usually faster than copying data one byte at a time.

  2. I don't quite understand this question. If you use realloc to decrease the size of a block and it succeeds (returns non-NULL), the new location will contain the same data as the old location, up to the size of the new request. If the memory location changed as a result of calling realloc (not usual when decreasing the size), the contents will be copied; otherwise no copying needs to happen, since the memory hasn't moved. (A small demo follows.)
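
(A hedged demo of that point, not from the answer itself. Whether the block actually moves on a shrink is up to the implementation; the program only shows that the surviving prefix is preserved.)

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(100);
        if (!p) return 1;
        strcpy(p, "hello, realloc");

        uintptr_t old_addr = (uintptr_t)p;   /* remember the old address as an integer */
        char *q = realloc(p, 8);             /* shrink the block */
        if (!q) { free(p); return 1; }       /* on failure the old block is still valid */

        /* The first 8 bytes survive the shrink; whether the block moved
           is implementation dependent. */
        printf("prefix: %.7s, moved: %s\n", q,
               (uintptr_t)q == old_addr ? "no" : "yes");
        free(q);
        return 0;
    }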

Robert Gamble
+11  A: 

1 - No, it's still O(N); they just copy a block at a time rather than an element at a time. See http://www.embedded.com/columns/technicalinsights/19205567?_requestid=93566 for a pretty good analysis.

2 - This is implementation dependent. See http://www.gnu.org/software/libtool/manual/libc/Changing-Block-Size.html for glibc details. "In several allocation implementations, making a block smaller sometimes necessitates copying it"

Sean
+1  A: 
  1. It's plausible that memcpy is written to move a large number of bytes at a time; for example, it's entirely possible to copy the data using SSE instructions where that is advantageous.

As others said, it won't be faster than O(N), but memory systems often have a preferred block size, and it's possible to, say, copy a cache line at a time (see the sketch below).
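
(Sketch of the SSE idea, not the answerer's code. The function name copy16 and the 16-byte chunk size are illustrative; x86 with SSE2 is assumed.)

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stddef.h>

    /* Sketch: copy 16 bytes per iteration with unaligned SSE2
       loads/stores, then finish the remainder byte by byte. */
    static void copy16(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n >= 16) {
            __m128i chunk = _mm_loadu_si128((const __m128i *)s);
            _mm_storeu_si128((__m128i *)d, chunk);
            d += 16; s += 16; n -= 16;
        }
        while (n--) *d++ = *s++;
    }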

Calyth
A: 

Presuming you are talking about glibc, and since your questions are implementation dependent, it's probably best just to check the source:

malloc.c

memcpy.c

The way I read it, the answers would be:

  1. O(N) --- there is no way to copy items in better than linear time.
  2. Occasionally large items will be copied when realloc() is used to shrink them.
Nathan Kurz
+1  A: 
Charlie Martin
A: 

The x86 has special instructions for scanning and matching a byte/word in a block of memory, as well as one that can be used to copy a block of memory (it is a CISC CPU, after all). Many C compilers that support inline assembly and a pragma for inlining entire functions have taken advantage of this in their library functions for many, many years.

The ones used for memory copy are movsb/movsw, in combination with the rep prefix.

CMPS/MOVS/SCAS/STOS
REP, REPE, REPNE, REPNZ, REPZ

Set up the registers with the source/target addresses and a count, and away you go. (A sketch using GCC inline assembly follows.)
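
(A minimal sketch of that idea using GCC extended inline assembly; the helper name is made up, and this is x86/x86-64 with GCC/Clang only, not portable C. On recent CPUs with fast rep movsb, compilers and libc memcpy sometimes end up using this instruction themselves.)

    #include <stddef.h>

    /* Sketch: REP MOVSB copies RCX bytes from [RSI] to [RDI]. */
    static void rep_movsb_copy(void *dst, const void *src, size_t n)
    {
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
    }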

RogerV