Hey.
First, an answer about velocity, then my personal insight about testers in a non cross-functional Scrum team and the early days of every sprint.
I see an inconsistency there. If the team is not cross-functional, you distinguish testers from developers. In that case you must also distinguish them in the velocity calculation. If the team is not cross-functional, testers don't really increase your velocity. Your velocity will be at most what the developers can implement, but no more than what the testers can test (if everything must be tested).
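To make that concrete, here is a minimal sketch (all numbers are invented) of how I would think about effective velocity in such a team:

```python
# Minimal sketch (invented numbers): in a non cross-functional team the
# effective velocity is capped by the bottleneck group, because a story
# only counts as done once it is both implemented and tested.

def effective_velocity(dev_capacity: int, test_capacity: int) -> int:
    """Story points the team can actually finish in a sprint."""
    return min(dev_capacity, test_capacity)

# Devs can implement 30 points, but testers can only verify 20:
print(effective_velocity(30, 20))  # -> 20, testers are the bottleneck
```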
Talk to your Scrum Master; otherwise there will always be problems with velocity and estimation.
Now, as for testers and the early days of a sprint. I work as a tester in a non cross-functional team with 5 devs, so this answer may be a bit personal.
You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way testers work.
In a) you create a separate testing sprint. It can run in parallel to the dev sprint (just shifted by those few days), or it can happen once every two or three dev sprints.
I have heard about these solutions, but I have never worked this way.
In b) you must ask the testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in these early days? As I mentioned before, I work as a tester with 5 developers in a non cross-functional team. If I waited to start my work until a developer finished his task, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. There are some activities that can be done (or must be done) before the tester gets the feature/code into his hands. The following is what I do before features are released to the test environment:
- go through the requirements for the features to be implemented
- design test scripts (high-level design)
- prepare draft test cases (see the sketch after this list)
- go through possible test data (if the change being implemented manipulates data in the system, you need to take a snapshot of that data so you can later compare it with what the feature does to it)
- wrap everything up in test suites
- communicate with the developer as the feature is being developed - this way you get a better understanding of the implemented solution (instead of asking later, when his mind is already on another feature)
- make any necessary changes to the test cases as the feature evolves
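To give a feel for what such preparation can look like, here is a minimal sketch of a draft test case with a data snapshot, written as a pytest stub. Everything in it (the discount feature, the file and field names, the hypothetical fetch_current_orders helper) is invented for illustration; in practice I prepare these in a test management tool rather than in code.

```python
# Hypothetical example: a draft test case prepared before the feature is
# released to the test environment. The "discount" feature and all names
# here are invented; only the workflow is real.
import json
import pytest

SNAPSHOT_PATH = "orders_before.json"

def take_snapshot(records, path=SNAPSHOT_PATH):
    # Capture the data the feature will manipulate, BEFORE the release,
    # so we can compare it with the state afterwards.
    with open(path, "w") as f:
        json.dump(records, f, indent=2, sort_keys=True)

def load_snapshot(path=SNAPSHOT_PATH):
    with open(path) as f:
        return json.load(f)

@pytest.mark.skip(reason="draft - feature not yet in the test environment")
def test_discount_applied_to_large_orders():
    # High-level steps drafted from the requirements; the exact details
    # (field names, button labels) get fleshed out once the feature lands.
    before = load_snapshot()
    # 1. place an order above the discount threshold
    # 2. check that a 10% discount appears on the order summary
    # 3. check that orders below the threshold are unchanged vs. `before`
    assert before is not None  # placeholder until real checks are written

# Before the feature is released, you would run something like:
# take_snapshot(fetch_current_orders())  # fetch_current_orders is hypothetical
```

The skip marker keeps the draft visible in the suite until the feature arrives; then the placeholder steps are replaced with real assertions.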
Then, when the feature is complete, you:
- flesh out the test cases with any details not known to you earlier (it sounds trivial, but a button name can change, or an additional step can appear in a wizard)
- perform the tests
- raise issues
Actually, I find myself spending more time on the first part (designing tests and preparing test scripts in the appropriate tool) than on actually performing those tests.
If they do all they can right away instead of waiting for code to be released to the test environment, it should help with this initial gap and minimize the risk of testers not finishing their activities before the end of the sprint.
Of course, there will always be less for testers to do at the beginning and more at the end, but you can try to minimize this difference. And even if the above still leaves them with lots of spare time at the beginning, you can give them tasks that don't involve coding: some configuration, some maintenance, documentation updates, and so on.