views:

926

answers:

4

I'm planning to create a data structure optimized to hold assembly code, so that I can be fully responsible for the optimization algorithms that work on this structure. If I can compile while running, it will be a kind of dynamic execution. Is this possible? Has anyone seen something like this?

Should I use structs to link the structure into a program flow, or are objects better?

struct asm_code {
   int type;
   int value;
   int optimized;
   struct asm_code *next_to_execute;
 } asm_imp;

Update: I think it will turn out like a linked list.

Update: I know there are other compilers out there. But this is a top-secret military project, so we can't trust any outside code. We have to do it all ourselves.

Update: OK, I think I will just generate basic i386 machine code. But how do I jump into my memory blob when it is finished?

+5  A: 

It is possible. Dynamic code generation is even mainstream in some areas, like software rendering and graphics. You also find it in all kinds of scripting languages and in the dynamic compilation of byte-code into machine code (.NET, Java, and as far as I know Perl; recently JavaScript joined the club as well).

You find it used in very math-heavy applications as well. It makes a difference if, for example, you remove all multiplications by zero from a matrix multiplication that you plan to execute several thousand times.

I strongly suggest that you read up on the SSA representation of code. That's a representation where each primitive is turned into the so-called three-operand form, and each variable is assigned only once (hence the name: Static Single Assignment form).

You can run high-level optimizations on such code, and it's straightforward to turn it into executable code. You won't write that code-generation backend in a weekend, though...

To get a feeling for what SSA looks like, you can try out the LLVM compiler. On their web-site they have a little "Try Out" widget to play with. You paste C code into a window and you get something out that is close to SSA form.

A little example of how it looks:

Let's take this integer square root algorithm in C (an arbitrary example; I just took something simple yet non-trivial):

unsigned int isqrt32 (unsigned int value)
{
    unsigned int g = 0;
    unsigned int bshift = 15;
    unsigned int b = 1 << bshift;
    do {
        unsigned int temp = (g + g + b) << bshift;
        if (value >= temp) {
            g += b;
            value -= temp;
        }
        b >>= 1;
    } while (bshift--);
    return g;
}

LLVM turns it into:

define i32 @isqrt32(i32 %value) nounwind  {
entry:
    br label %bb

bb:  ; preds = %bb, %entry
    %indvar = phi i32 [ 0, %entry ], [ %indvar.next, %bb ]  
    %b.0 = phi i32 [ 32768, %entry ], [ %tmp23, %bb ]
    %g.1 = phi i32 [ 0, %entry ], [ %g.0, %bb ]  
    %value_addr.1 = phi i32 [ %value, %entry ], [ %value_addr.0, %bb ]  
    %bshift.0 = sub i32 15, %indvar 
    %tmp5 = shl i32 %g.1, 1 
    %tmp7 = add i32 %tmp5, %b.0 
    %tmp9 = shl i32 %tmp7, %bshift.0 
    %tmp12 = icmp ult i32 %value_addr.1, %tmp9  
    %tmp17 = select i1 %tmp12, i32 0, i32 %b.0  
    %g.0 = add i32 %tmp17, %g.1  
    %tmp20 = select i1 %tmp12, i32 0, i32 %tmp9  
    %value_addr.0 = sub i32 %value_addr.1, %tmp20   
    %tmp23 = lshr i32 %b.0, 1  
    %indvar.next = add i32 %indvar, 1  
    %exitcond = icmp eq i32 %indvar.next, 16 
    br i1 %exitcond, label %bb30, label %bb

bb30:    ; preds = %bb
    ret i32 %g.0
}

I know it looks horrible at first, and it's not even pure SSA form. But the more you read about the representation, the more sense it will make. You will also find out why it is so widely used these days.

Encapsulating all the info you need into a data structure is easy. In the end you have to decide whether you want to use enums or strings for opcode names, etc.

Btw - I know I didn't give you a data structure but rather a formal yet practical representation, and advice on where to look further.

It's a very nice and interesting research field.

Edit: And before I forget it: don't overlook the built-in features of .NET and Java. These languages allow you to compile byte-code or source code from within the program and execute the result.

Cheers, Nils


Regarding your edit - how to execute a binary blob of code:

Jumping into your binary blob is OS- and platform-dependent. In a nutshell, you have to invalidate the instruction cache, you may have to write back the data cache, and you may have to enable execution rights on the memory region you've written your code into.

On win32 it's relatively easy: historically, flushing the instruction cache was sufficient if you placed your code on the heap, but with DEP enabled you also have to mark the memory region as executable.

You can use this stub to get started:

#include <windows.h>

typedef void (* voidfunc) (void);

void * generate_code (void)
{
    // reserve some space with execute rights (plain malloc'd
    // memory is not executable when DEP is active)
    unsigned char * buffer = (unsigned char *) VirtualAlloc(
        NULL, 1024, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);

    // write a single RET instruction
    buffer[0] = 0xc3;

    return buffer;
}

int main (int argc, char **argv)
{
    // generate some code:
    voidfunc func = (voidfunc) generate_code();

    // flush the instruction cache:
    FlushInstructionCache(GetCurrentProcess(), (void *) func, 1024);

    // execute the code (it does nothing atm)
    func();

    // free the memory and exit.
    VirtualFree((void *) func, 0, MEM_RELEASE);
    return 0;
}
Nils Pipenbrinck
No problem - It's one of my favorite topics, so the answer turned out to be more a rant than anything. :-)
Nils Pipenbrinck
I want to find the core problem with the representation of asm code in memory. I think it could be the next big thing. What if we could just skip the link process? That would be wonderful. :) I hate the wait time.
Flinkman
A: 

In 99% of cases, the difference in performance is negligible. The main advantage of classes is that object-oriented code is usually easier to understand and maintain than procedural code.

I'm not sure what language you're coding in - note that in C# the major difference between classes and structs is that structs are value types while classes are reference types. In this case, you might want to start with structs, but still add behavior (constructors, methods) to them.

ripper234
A: 

Leaving aside the technical value of optimizing your code yourself: in C++, choosing between a POD struct and a full object is mostly a question of encapsulation.

Inlining the code will let the compiler optimize (or not) the constructors/accessors used, so there will be no loss of performance.

First, define a constructor

If you're working with a C++ compiler, create at least one constructor:

struct asm_code {
   asm_code()
      : type(0), value(0), optimized(0) {}

   asm_code(int type_, int value_, int optimized_)
      : type(type_), value(value_), optimized(optimized_) {}

   int type;
   int value;
   int optimized;
 };

At least, you won't have uninitialized structs in your code.

Is every combination of data possible?

Using the struct the way you do means that any type is possible, with any value and any optimized flag. For example, if I set type = 25, value = 1205 and optimized = -500, then it is OK.

If you don't want the user to put random values inside your structure, add inline accessors:

struct asm_code {

   int getType() const { return type ; }
   void setType(int type_) { VERIFY_TYPE(type_) ; type = type_ ; }

   // Etc.

   private:
      int type;
      int value;
      int optimized;
 };

This will let you control what is set inside your structure, and debug your code more easily (or even do runtime verification of your code).

paercebal
Thanks, I have to process this. I hope it shows that I want to compile it myself. It's a compiler-in-a-program thing.
Flinkman
Thank you for your question. Now I realize I maybe need.. wait. *thinking* nop...
Flinkman
A: 

I assume you want a data structure to hold some kind of instruction template, probably parsed from existing machine code, similar to:

add r1, r2, <int>

You will have an array of these structures, you will perform some optimization passes on the array (probably changing its size or building a new one), and you will generate the corresponding machine code.

If your target machine uses variable-width instructions (x86, for example), you can't determine the array size without actually finishing parsing the instructions you build the array from. Nor can you determine exactly how much buffer space you need before actually generating all the instructions from the optimized array. You can make a good estimate, though.

Check out GNU Lightning. It may be useful to you.

artificialidiot