To gain anything, you would have to find an algorithm that cuts off more than one unwanted permutation per check. The obvious strategy is to build the permutations sequentially, for example in a tree; each cut then eliminates a whole branch.
edit:
Example: in the set (A B C D), let's say that B and C, and A and D are not allowed to be neighbours.
          (A)                    (B)                    (C)                    (D)
        /  |  \                /  |  \                /  |  \                /  |  \
      AB   AC   AD           BA   BC   BD           CA   CB   CD           DA   DB   DC
     / \  / \    X          / \    X  / \          / \    X  / \           X   / \  / \
   ABC ABD ACB ACD        BAC BAD BDA BDC        CAB CAD CDA CDB        DBA DBC DCA DCB
    X   |   X   |          |   X   X   |          |   X   X   |          |   X   |   X
       ABDC    ACDB          BACD    BDCA          CABD    CDBA          DBAC    DCAB
        v       v             v       v             v       v             v       v
Each of the strings without parentheses needs a check. As you see, the Xs (where subtrees have been cut off) save checks: one if they are in the third row, but four if they are in the second row. We saved 24 of the 60 checks here and got down to 36. However, there are only 24 permutations overall anyway, so if checking the restrictions (as opposed to building the lists) is the bottleneck, we would have been better off just constructing all the permutations and checking them at the end... IF the checks couldn't be optimized when we go this way.
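The tree with its cuts can be written as a small backtracking builder. Here is a Python sketch under my own naming (`prune_permutations` and the forbidden-pair representation are my assumptions, not from the original):

```python
def prune_permutations(items, pairs):
    """Build permutations as a tree, cutting a whole branch as soon as
    the newly added item is a forbidden neighbour of the previous one."""
    forbidden = {frozenset(p) for p in pairs}
    results = []

    def extend(prefix, remaining):
        if not remaining:
            results.append(prefix)
            return
        for i, item in enumerate(remaining):
            # Only the new pair needs checking; all earlier pairs were
            # already validated when the prefix was built.
            if prefix and frozenset((prefix[-1], item)) in forbidden:
                continue  # X: cut off this whole subtree
            extend(prefix + [item], remaining[:i] + remaining[i + 1:])

    extend([], list(items))
    return results
```

Called as `prune_permutations("ABCD", [("B", "C"), ("A", "D")])`, it returns exactly the eight leaves marked `v` in the tree above.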
Now, as you see, the checks only need to be performed on the new part of each list. This makes the checks much leaner; in effect, we split the check that a full permutation would need into small chunks. In the example above, we only have to check whether the added letter may stand beside the last one, not against all the letters before it.
However, even if we first construct and then filter, each check can be cut short as soon as a forbidden pair is encountered. So, as far as checking goes, there is no real gain over the first-build-then-filter algorithm; if anything, there is the danger of extra overhead from more function calls.
What we do save is the time to build the lists, and the peak memory consumption. Building a list is generally rather fast, but peak memory might become a consideration as the number of objects grows. For first-build-then-filter, both grow in proportion to the total number of permutations, i.e. factorially in the number of objects. For the tree version, they grow more slowly, depending on how much the constraints prune. From a certain number of objects and rules on, there is also an actual saving in checks.
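The peak-memory point can be made concrete: if the tree builder yields results lazily instead of collecting them into a list, only the current branch of the tree lives in memory at any time. A Python generator sketch (the name `iter_pruned` is my invention):

```python
def iter_pruned(prefix, remaining, forbidden):
    """Yield valid permutations one at a time; memory use is bounded by
    the depth of the tree, not by the number of results."""
    if not remaining:
        yield tuple(prefix)
        return
    for i, item in enumerate(remaining):
        if prefix and frozenset((prefix[-1], item)) in forbidden:
            continue  # cut off this subtree without ever materializing it
        yield from iter_pruned(prefix + [item],
                               remaining[:i] + remaining[i + 1:],
                               forbidden)
```

A consumer can then stop early, or process each valid permutation as it appears, without ever holding all of them at once.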
In general, I think you would need to try it out and time the two algorithms. If you really have only 5 objects, stick to the simple (filter rules (build-permutations set)). If your number of objects gets large, the tree algorithm will at some point perform noticeably better (you know, big O).
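For comparison, a first-build-then-filter version along the lines of (filter rules (build-permutations set)), sketched in Python with itertools.permutations (the function names are my own):

```python
from itertools import permutations

def has_forbidden_neighbours(perm, forbidden):
    """Early-exit check: any() stops at the first forbidden pair."""
    return any(frozenset(pair) in forbidden
               for pair in zip(perm, perm[1:]))

def filter_permutations(items, pairs):
    """Build all permutations first, then filter out the bad ones."""
    forbidden = {frozenset(p) for p in pairs}
    return [p for p in permutations(items)
            if not has_forbidden_neighbours(p, forbidden)]
```

For 4-5 objects this is hard to beat for simplicity, and the early-exit inside the check already captures most of the per-permutation saving discussed above.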
Um. Sorry, I got into lecture mode; bear with me.