I am porting an app/PCI driver from VxWorks to Linux and I would like to keep the same architecture if possible. The current driver has two tasks (threads) that communicate with each other using message queues. Is there a mechanism to communicate between kernel threads? The message queues are being used to pass buffer addresses and size info so the tasks can use DMA to move large amounts of data.
A:
It sounds like the workqueue interface might be what you're after, or, for something lighter-weight, a kfifo combined with an rwsem (reader-writer semaphore).
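To make the suggestion concrete, here is a minimal sketch of the kfifo + workqueue pattern, assuming a reasonably current kernel's kfifo API (`DEFINE_KFIFO`, `kfifo_in_spinlocked`). The names `dma_desc`, `dma_work_fn`, and `dma_complete` are hypothetical, standing in for whatever your driver uses; this is an illustration of the pattern, not a drop-in implementation:

```c
/* Sketch only: dma_desc, dma_work_fn and dma_complete are made-up names. */
#include <linux/kfifo.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct dma_desc {               /* what the VxWorks msgQ used to carry */
	dma_addr_t addr;        /* bus address of the DMA buffer */
	size_t     len;         /* number of bytes transferred */
};

static DEFINE_KFIFO(dma_fifo, struct dma_desc, 16); /* size: power of 2 */
static DEFINE_SPINLOCK(dma_fifo_lock);

static void dma_work_fn(struct work_struct *work)
{
	struct dma_desc d;

	/* Drain the descriptors queued by the completion path. */
	while (kfifo_out_spinlocked(&dma_fifo, &d, 1, &dma_fifo_lock)) {
		/* ... process d.addr / d.len here ... */
	}
}
static DECLARE_WORK(dma_work, dma_work_fn);

/* Called from the DMA-complete interrupt handler: queue the buffer
 * info and let a kernel worker thread pick it up, so the next DMA
 * can be started immediately. */
static void dma_complete(dma_addr_t addr, size_t len)
{
	struct dma_desc d = { .addr = addr, .len = len };

	kfifo_in_spinlocked(&dma_fifo, &d, 1, &dma_fifo_lock);
	schedule_work(&dma_work);
}
```

The appeal of this arrangement is that the interrupt handler stays short: it only enqueues the descriptor and schedules the work item, and the heavy processing runs later in process context on a shared kernel worker, much as the second VxWorks task did.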
caf
2009-09-18 07:20:18
Thanks caf. I will research these two mechanisms today and let you know what works best for me.
CVAUGHN
2009-09-18 13:30:56
A:
I would strongly advise against keeping the VxWorks architecture on Linux. Kernel thread proliferation is frowned upon, and your code will never make it into the official kernel tree. Even if you don't care about that, are you 100% sure that you want to develop a driver in a non-standard way? Things would be much simpler if you just got rid of these two tasks. BTW, why on earth do you need tasks for a PCI driver to begin with?
Demiurg
2009-10-19 08:31:44
Thanks for the feedback. This was another case of management giving an almost impossible task with an even more impossible schedule. The battle was won and we aren't doing it this way anymore. The need for tasks was based on the original architecture. The driver is moving HUGE amounts of data off a PCI card. The data was DMA'd to a circular buffer, then a message was sent to the appropriate task to handle the data so that the next DMA could happen. In VxWorks it makes sense; in Linux it doesn't, but they didn't want to change it.
CVAUGHN
2009-11-01 11:18:27
Now if you absolutely have to, you can keep the same monolithic architecture on Linux as on VxWorks using http://femtolinux.com - it allows user applications to run in kernel mode, i.e. in much the same way as VxWorks.
Demiurg
2010-07-23 08:31:31