Perhaps you should take a step back and ask a few questions first.
- What is the most important part to have tested?
- How difficult is it to set up that test?
- Is the cost of setting up the test worth the results it will give?
- Can I cover most of what I want tested with a simpler test?
Whichever way you go, I would use a fixture based on captured live data and an expectation of what that data becomes. This keeps the test deterministic and therefore automatable.
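As a minimal sketch of that idea (the fixture paths and the `transform` function are placeholders for whatever your code actually does):

```python
import json

import pytest

from myapp import transform  # hypothetical function under test


@pytest.fixture
def live_payload():
    # Data captured once from the real service and checked in with the
    # tests, so every run sees identical input.
    with open("tests/fixtures/live_payload.json") as f:
        return json.load(f)


def test_transform_matches_expected(live_payload):
    # The expectation is also a checked-in snapshot of what that data
    # becomes, which keeps the assertion fully deterministic.
    with open("tests/fixtures/expected_output.json") as f:
        expected = json.load(f)
    assert transform(live_payload) == expected
```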
If the most important piece is a portion of the logic, it can be tested via a unit test with known input/output and mocks for its collaborators.
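A sketch using the standard library's `unittest.mock`; `apply_discount` and the `price_service` collaborator are made-up names for illustration:

```python
from unittest.mock import Mock

from myapp.billing import apply_discount  # hypothetical unit under test


def test_apply_discount_uses_current_price():
    # Mock the collaborator so only the discount logic itself runs,
    # with a known input and a known expected output.
    price_service = Mock()
    price_service.current_price.return_value = 100.0

    total = apply_discount(price_service, item_id="sku-1", percent=10)

    assert total == 90.0
    price_service.current_price.assert_called_once_with("sku-1")
```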
If testing the integration itself is really the most important part, then I would strike a balance, mocking out as many moving pieces as I felt comfortable with in order to keep the test manageable while still exercising the seams I care about.
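For example, you might let your real parsing and persistence code run while patching only the outermost network call; `fetch_report` and the module path are assumptions about your code's layout:

```python
from unittest.mock import patch

from myapp.reports import fetch_report  # hypothetical integration entry point


def test_fetch_report_without_real_network():
    # Everything inside fetch_report (parsing, validation, persistence)
    # runs for real; only the HTTP boundary is replaced.
    with patch("myapp.reports.requests.get") as mock_get:
        mock_get.return_value.status_code = 200
        mock_get.return_value.json.return_value = {
            "rows": [{"id": 1, "total": 42}]
        }
        report = fetch_report("2024-01")

    assert report["rows"][0]["total"] == 42
```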
The more networked resources a system uses, the more complex it is and the more tests it should have. You have to think about timing issues, service uptime, timeouts, error states, and so on. You can also fall into the trap of creating a nondeterministic test: if your assertions look for differences in timings, rely on particular timings, or rely on an unreliable service that breaks a lot, you may end up with a test that is worthless because of the amount of "noise" from false-positive failures.
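One way out of that trap is to make time an injectable dependency instead of asserting on real timings. A self-contained sketch (the `poll_until_ready` helper is invented here for illustration):

```python
import itertools


def poll_until_ready(check, timeout, clock, sleep):
    # Hypothetical polling helper: retries check() until it succeeds
    # or `timeout` seconds elapse according to the injected clock.
    start = clock()
    while not check():
        if clock() - start >= timeout:
            return False
        sleep(0.5)
    return True


def test_poll_gives_up_after_deadline():
    # The fake clock advances one second per call, so the test never
    # sleeps for real and never depends on how fast the machine is.
    ticks = itertools.count(start=0.0, step=1.0)
    assert not poll_until_ready(
        check=lambda: False,
        timeout=2.5,
        clock=lambda: next(ticks),
        sleep=lambda seconds: None,
    )
```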
If you want to drive towards a continuous integration model, you'll also need to consider the complexity of having to manage (start up and shut down) multiple processes with each test run. In general, the test is easier to manage if it is the single running process and the other "processes" are function calls into the appropriate starting points in the code.
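For instance, with a Flask-style application factory (an assumption; substitute your framework's equivalent) the "server" under test becomes just a function call:

```python
from myapp import create_app  # hypothetical application factory


def test_health_endpoint_in_process():
    # No subprocess to start, poll for readiness, or tear down: the
    # test drives the same code the real process runs, in-process.
    app = create_app(config_name="testing")
    client = app.test_client()

    response = client.get("/health")

    assert response.status_code == 200
```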