From your description, this is something else. Both chain of responsibility and pipeline deal with essentially serial processing. At least if I understand your description correctly, what you have is basically a number of "processor elements" working on the data in parallel.
Normally, you'd handle a situation like that with a set of observers, but your description doesn't really fit the observer pattern either. In particular, each of your processor elements appears to know about (at least) one other processor element. With the observer pattern, the observers are normally oblivious to each other -- each registers itself with the data source, and when there's new/changed data, all the observers are notified by the data source.
My immediate reaction would be that you'd probably be better off using the observer pattern instead of looking for a name for what you've done. One of the points of patterns is to solve similar problems in similar ways, and from the sound of things, the observer pattern would probably be a bit more versatile and manageable here. For example, in your current design, if you decide to eliminate one element from the chain, you'll apparently have to modify a different element to do so. With the normal observer pattern, you can add or remove observers without changing any others (and without the others even being aware anything has changed at all).
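For illustration, here's a minimal sketch of that kind of observer setup (in Python for brevity; `DataSource`, `PrintObserver`, and the method names are placeholders, not anything from your code):

```python
class DataSource:
    """Holds the data and a list of registered observers."""
    def __init__(self):
        self._observers = []

    def register(self, observer):
        self._observers.append(observer)

    def unregister(self, observer):
        self._observers.remove(observer)

    def publish(self, data):
        # Every observer gets the new data; none of them knows about
        # (or depends on) any of the others.
        for observer in self._observers:
            observer.update(data)


class PrintObserver:
    """A trivial observer that just reports what it received."""
    def __init__(self, name):
        self.name = name

    def update(self, data):
        print(f"{self.name} processed {data!r}")


source = DataSource()
a = PrintObserver("A")
source.register(a)
source.register(PrintObserver("B"))

source.publish("first sample")   # both A and B see it
source.unregister(a)             # remove A without touching B
source.publish("second sample")  # only B sees it
```

The point of the example is the last three lines: removing A requires no change to B, and B never finds out anything happened.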
Edit: Given a mixture of independent and chained elements, I see two possible variants. The first (and probably cleanest) is to use the observer pattern at the top level, and some of the observers will themselves be pipelines.
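A sketch of that first variant (again, names are my own): a pipeline is just another observer whose `update` happens to push the data through its stages in order. The data source neither knows nor cares that this particular observer is internally serial.

```python
class Pipeline:
    """An observer whose update() runs its stages in order on the incoming data."""
    def __init__(self, *stages):
        self.stages = stages

    def update(self, data):
        result = data
        for stage in self.stages:
            result = stage(result)
        return result


# Registers with the data source exactly like any other observer.
cleanup = Pipeline(str.strip, str.upper)
print(cleanup.update("  some raw data  "))  # -> 'SOME RAW DATA'
```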
The other possibility would be to steal a trick from VLIW processors: at the top level, give each element a flag indicating whether it depends on the result from the previous element or not. That makes it pretty easy to mix pipelines with independent observers, and if (for example) you later care about parallel processing, it also makes it easy to execute the independent elements in parallel while maintaining serial execution for those that need it.
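A rough sketch of that second variant (the structure and names are my own, not anything standard): each element carries a flag saying whether it consumes the previous element's result; dependent elements chain, independent ones start from the original data and are the ones you could later farm out in parallel.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Element:
    process: Callable          # the processing step itself
    depends_on_previous: bool  # VLIW-style flag: chain off the previous result?

def run(elements, data):
    """Run elements in order; dependent ones take the previous result as input,
    independent ones take the original data (and could run in parallel)."""
    previous = data
    results = []
    for element in elements:
        source = previous if element.depends_on_previous else data
        previous = element.process(source)
        results.append(previous)
    return results

elements = [
    Element(str.strip, depends_on_previous=False),  # independent
    Element(str.upper, depends_on_previous=True),   # chained: uses the stripped text
    Element(len,       depends_on_previous=False),  # independent: works on the raw data
]
print(run(elements, "  raw sample  "))  # ['raw sample', 'RAW SAMPLE', 14]
```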