I am currently building a neural network library. I have structured it as an object graph for simplicity, and I am wondering if anyone can quantify the performance benefits of moving to an array-based approach. What I have now works very well for building networks of close to arbitrary complexity; regular (backpropagated) networks as well as recurrent networks are supported. I am considering having trained networks "compile" into some "simpler" form, such as flat arrays.
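To make the "compile to arrays" idea concrete, here is a minimal sketch of what the compiled form of a feed-forward network might look like: each layer reduced to a weight matrix and bias vector, with a tight loop doing the forward pass. All names (`forward`, the layer layout) are hypothetical illustrations, not from any particular library, and plain Python lists stand in for real packed arrays.

```python
import math

def forward(layers, inputs):
    """Run a compiled feed-forward pass.

    layers: list of (weights, biases) per layer,
            weights given as one row of coefficients per neuron.
    """
    activations = inputs
    for weights, biases in layers:
        activations = [
            math.tanh(sum(w * a for w, a in zip(row, activations)) + b)
            for row, b in zip(weights, biases)
        ]
    return activations

# A tiny 2-input, 2-hidden, 1-output net "compiled" into arrays:
layers = [
    ([[0.5, -0.5], [0.3, 0.8]], [0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(forward(layers, [1.0, 0.0]))
```

The point of this form is that the inner loop touches only contiguous numeric data, with no virtual dispatch or pointer chasing, which is where an array representation tends to beat an object graph at solve time.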

I just wanted to see if anyone out there has practical advice or experience building neural networks that deployed well into production. Is there any benefit to having the final product be array-based instead of object-graph-based?

P.S. Memory footprint is less important than speed.

+1  A: 

It's been a while, but I recall that speed is usually only an issue during training of the neural network.

Mitch Wheat
+1  A: 

This all depends on what language you are using - I assume you are using a C derivative.

In my implementations I've found the object-graph approach far superior. There is some tradeoff in speed, but the ease of maintenance outweighs the cost of the extra object lookups. It also depends on whether you're after training speed or solving speed... I'm assuming you are most worried about training speed?

You can always micro-optimize some of the object-call overhead later if need be.

Considering your secondary motive of sub-netting the networks, I think it's even more important to stay object-based - it makes it much easier to pull out portions of the network.
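To illustrate why the object graph makes sub-netting easy: if each neuron holds references to its upstream neurons, a sub-network is just a node plus everything reachable through its incoming edges. The sketch below uses hypothetical class names (`Neuron`, `InputNeuron`) to show the idea; a cache dict memoizes shared sub-graphs so each node is evaluated once.

```python
import math

class Neuron:
    def __init__(self, inputs=None, weights=None, bias=0.0):
        self.inputs = inputs or []    # upstream Neuron objects (the graph edges)
        self.weights = weights or []
        self.bias = bias

    def activate(self, cache):
        if self in cache:             # memoize shared sub-graphs
            return cache[self]
        total = self.bias + sum(
            w * n.activate(cache) for w, n in zip(self.weights, self.inputs)
        )
        cache[self] = math.tanh(total)
        return cache[self]

class InputNeuron(Neuron):
    def __init__(self, value=0.0):
        super().__init__()
        self.value = value

    def activate(self, cache):
        return self.value

# Build a tiny graph; evaluating `h` alone extracts the sub-network below it.
x, y = InputNeuron(1.0), InputNeuron(0.0)
h = Neuron([x, y], [0.5, -0.5], 0.1)
out = Neuron([h], [1.0], 0.0)
print(out.activate({}))
```

Note that the same recursive walk that evaluates a node is also what lets you lift that node (and its ancestors) out as a reusable sub-network, which is awkward to do once everything is flattened into index arrays.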

Dan Keen

I don't have any personal experience writing such a library, but I can link you to some popular open-source projects which you could perhaps learn from. (Personally I would just use one of these existing libraries.)

+1  A: 

People have started using GPGPU techniques in AI, and having your neural net in matrix form could leverage the much faster matrix ops in your typical graphics card.
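The matrix form mentioned above means each layer's forward pass becomes a single matrix-vector product, exactly the operation a GPU BLAS library accelerates. A pure-Python stand-in (hypothetical `matvec` helper, not any library's API) just to show the shape of the computation:

```python
def matvec(m, v):
    """One layer's pre-activations as a matrix-vector product:
    row i of m holds the incoming weights of neuron i."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

W = [[0.5, -0.5],
     [0.3,  0.8]]
x = [1.0, 0.0]
print(matvec(W, x))   # -> [0.5, 0.3]
```

Once the network is in this form, swapping the inner product for a cuBLAS/GPU call (or even a vectorized CPU BLAS) is a localized change rather than a rewrite.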

I have considered that; however, our cluster currently does not have any GPUs. I plan to implement it in the future to convince people that we need some GPUs in our clusters.
+1  A: 
haha. I saw that this morning. We are extensive users of GAs :-)