I've been in both QA and Dev. The process isn't really that much different in either world because it boils down to a simple thing: All estimates are guesses. They're based on experience, hunches, and an assessment of the complexity and risk of a particular set of problems, but they are, at best, good guesses.
You can make them more useful by analyzing the set of known tasks around particular feature areas. In QA, that means looking at the problem from the angles you have available: analyze the variations in any possible user story, analyze the possible inputs if you have a mockup of the screen, and so on. Do some basic arithmetic based on reasonable guesses about how long it takes to run those variations manually or with automation. Make a little two-dimensional matrix of the key scenarios based on rough equivalence classes, and figure out how much time it would take to a) write automated tests for each item, based on previous experience, and b) run manual smoke tests, if needed.
Figure out how often you'd need to run those tests during the scheduled timeline. Then apply a multiplier (1.5x, 2.0x, sometimes 3.0x) based on your judgment of the probability of error and the relative importance of getting it right. If it's really important that one feature be well tested and less important that another be, adjust your estimates accordingly, but call out that assumption in your estimate.
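To make the arithmetic concrete, here's a minimal back-of-envelope sketch of that kind of estimate. The feature names, hours, and multipliers are all made up for illustration; the point is just the shape of the calculation: scenarios per equivalence class, automation cost versus repeated manual runs, and a risk multiplier layered on top.

```python
# Back-of-envelope QA estimate: scenarios per feature, automation vs. manual
# cost, run frequency over the schedule, and a judgment-based risk multiplier.
# All numbers and feature names below are hypothetical.

# (name, hours to automate one scenario, minutes per manual smoke run,
#  number of scenarios, expected runs during the schedule, risk multiplier)
features = [
    ("login",     2.0, 10, 6, 8, 2.0),  # high importance: pad heavily
    ("reporting", 1.5, 15, 4, 4, 1.5),  # lower priority: smaller multiplier
]

for name, automate_hrs, manual_min, scenarios, runs, risk in features:
    automation = automate_hrs * scenarios              # one-time cost to write the tests
    manual = (manual_min / 60.0) * scenarios * runs    # repeated cost of manual smoke passes
    estimate = (automation + manual) * risk            # pad by judged probability of error
    print(f"{name:10s} automation={automation:5.1f}h  manual={manual:5.1f}h  "
          f"padded total={estimate:5.1f}h (x{risk})")
```

The output is still a guess, but it's a guess whose assumptions (scenario counts, run frequency, multipliers) are written down where someone can challenge them.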
Scheduling is about managing risk, not eliminating it. It's meant to give you a big picture look at what needs to be done. The details are never quite right, and that's ok. I can't think of one time in a project that I've worked on that everything went according to plan, especially on the dev side.
Agile doesn't change the equation that much, though it does change the timeline a bit. It's a good idea to leave a little headroom for testing toward the end of a cycle, in spite of the dogma against it, because you also need dev time to fix the issues that will inevitably come up. That doesn't have to turn into "mini-waterfall": in the unlikely event that all the features are working and there's nothing to fix, the devs can keep busy by picking off tasks that were lower priority in the iteration.
I wonder if, in your case, the development team was making QA time estimates on your behalf? It's usually not a good idea to let someone else make that call; the people with the most skin in the game should have the most heavily weighted opinion. That said, a lot of developers can make pretty good risk assessments, so it's certainly worth listening to them. In Agile development cycles, roles should ideally be less exclusive than on Waterfall teams, but I'm fairly convinced that some people are simply better at QA tasks, and they will naturally pick off most of that work, even on a team that tries hard to live the Agile ideal. If your problem was that you weren't willing to make estimates without knowing the implementation details, that's something you'll need to get over; even in old-school methodologies, I rarely had the luxury of complete knowledge.
One thing I would add: the people with QA talent should be on the same teams as their development counterparts. It's fine if their professional development is managed by a different manager, but not if they are part of different sprint teams. If you have a "test team sprint" and a "dev team sprint", in my humble opinion, you're crippling the potential for collaboration and communication between the Dev- and QA-focused people.