views: 269

answers: 2

I am porting an app/PCI driver from VxWorks to Linux, and I would like to keep the same architecture if possible. The current driver has two tasks (threads) that communicate with each other using message queues. Is there a mechanism for communicating between kernel threads in Linux? The message queues are used to pass buffer addresses and size information so the tasks can use DMA to move large amounts of data.

+1  A: 

It sounds like the workqueue interface might be what you're after - or, for something lighter-weight, a kfifo combined with an rwsem.
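A minimal sketch of the kfifo-plus-workqueue idea, assuming a hypothetical `struct buf_desc` standing in for whatever the VxWorks message queues carried (names like `dma_fifo` and `queue_buf` are illustrative, not from any real driver). Locking is shown with a spinlock via the `kfifo_*_spinlocked` helpers, since a bare kfifo is only safe lockless for a single producer and single consumer:

```c
#include <linux/kfifo.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>

struct buf_desc {                /* what the message queues used to carry */
    dma_addr_t addr;
    size_t     len;
};

static DEFINE_KFIFO(dma_fifo, struct buf_desc, 16);
static DEFINE_SPINLOCK(fifo_lock);

/* Consumer: runs in workqueue context, drains queued descriptors. */
static void consume_bufs(struct work_struct *work)
{
    struct buf_desc d;

    while (kfifo_out_spinlocked(&dma_fifo, &d, 1, &fifo_lock)) {
        /* hand d.addr / d.len to the DMA engine here */
    }
}
static DECLARE_WORK(dma_work, consume_bufs);

/* Producer: called from the other thread (or a bottom half). */
static void queue_buf(dma_addr_t addr, size_t len)
{
    struct buf_desc d = { .addr = addr, .len = len };

    kfifo_in_spinlocked(&dma_fifo, &d, 1, &fifo_lock);
    schedule_work(&dma_work);
}
```

This replaces the explicit receiver task entirely: the kernel's shared workqueue supplies the execution context, so only the producer side remains your code.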

caf
thanks caf. I will research these two mechanisms today and let you know what works best for me.
CVAUGHN
+1  A: 

I would strongly advise against keeping the VxWorks architecture on Linux. Kernel-thread proliferation is frowned upon, and your code will never make it into the official kernel tree. Even if you don't care about that, are you 100% sure you want to develop a driver in a non-standard way? Things would be much simpler if you just got rid of these two tasks. BTW, why on earth do you need tasks in a PCI driver to begin with?

Demiurg
+1 for sanity and common sense.
Tim Post
Thanks for the feedback. This was another case of management handing down an almost impossible task with an even more impossible schedule. That battle was won and we aren't doing it this way anymore. The need for tasks came from the original architecture. The driver moves HUGE amounts of data off a PCI card: the data was DMA'd into a circular buffer, then a message was sent to the appropriate task to handle the data so the next DMA could happen. In VxWorks that makes sense; in Linux it doesn't, but they didn't want to change it.
CVAUGHN
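For what it's worth, the usual Linux equivalent of that "DMA done, message the handler task" pattern is an ISR that wakes a kernel thread blocked on a wait queue. A sketch under those assumptions (`dma_isr`, `handler_thread`, and the `dma_done` flag are illustrative names, not from the driver in question):

```c
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/interrupt.h>

static DECLARE_WAIT_QUEUE_HEAD(dma_wq);
static bool dma_done;

/* Interrupt handler: the DMA engine signals completion here. */
static irqreturn_t dma_isr(int irq, void *dev)
{
    dma_done = true;
    wake_up(&dma_wq);        /* wake the handler so the next DMA can start */
    return IRQ_HANDLED;
}

/* Kernel thread standing in for the VxWorks handler task. */
static int handler_thread(void *arg)
{
    while (!kthread_should_stop()) {
        wait_event_interruptible(dma_wq,
                                 dma_done || kthread_should_stop());
        dma_done = false;
        /* process the circular buffer, then re-arm the next DMA */
    }
    return 0;
}
```

The wait queue plays the role the message queue played in VxWorks; if the handler also needs per-buffer metadata rather than just a wake-up, the kfifo approach from the other answer can carry it.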
Now, if you absolutely have to, you can keep the same monolithic architecture on Linux as on VxWorks using http://femtolinux.com - it allows user applications to run in kernel mode, i.e. in much the same way as VxWorks.
Demiurg