A set of related checklists may be more powerful than a set of detailed scripts. Assume that unit testing is done as part of the dev process and don't create a checklist for that test level. For the higher test levels, create each checklist by hosting a workshop with dev, test, and customers, or at least a customer proxy if you can't include actual customers.
The advantage of checklists is that they support many different types of tests, from manual to automated. They are easy to revise and improve over time. They also work well as a record that tests were run: each item can be checked off along with a link back to the folder holding the evidence for that item.
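As a minimal sketch of how that evidence trail could be modeled, assuming a home-grown tracker rather than any particular tool (the `ChecklistItem` and `TestChecklist` names and fields are illustrative):

```python
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    """One requirement/feature to verify at a given test level."""
    description: str
    done: bool = False
    evidence_link: str = ""  # e.g. a URL to the folder holding the test output


@dataclass
class TestChecklist:
    level: str  # e.g. "system testing"
    items: list[ChecklistItem] = field(default_factory=list)

    def check_off(self, description: str, evidence_link: str) -> None:
        """Mark an item complete and record where its evidence lives."""
        for item in self.items:
            if item.description == description:
                item.done = True
                item.evidence_link = evidence_link
                return
        raise ValueError(f"no checklist item named {description!r}")

    def progress(self) -> float:
        """Fraction of items checked off (0.0 to 1.0)."""
        if not self.items:
            return 0.0
        return sum(item.done for item in self.items) / len(self.items)
```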
Checklists also credit your testers with knowledge of how to use the system; in fact they encourage investigative-style testing that can go beyond the bounds of a detailed script. The problem with detailed scripts is that they become the only thing the testers go through, more like a form to fill out than a guide for thinking about testing.
Step 1. Choose your test levels. Define what each 'level' means to the group. For example, you might decide that you have functional testing, system testing, and acceptance testing as the three test levels for the product. You can make these levels thinner for minor releases and thicker for major releases, but the idea is to stick to the same test levels for every release.
Step 2. Host a workshop to create checklists (or testing backlogs). Each test level should have pass/fail criteria and an accompanying checklist of things that have to be tested for the level to be considered complete. Don't make the checklist items 'test cases' - make them the requirements/features instead. Each test level will have a different type of checklist item. For example, you could choose to define 'system testing' as tests run at the use case level. So you have one set of tests to run for each use case, and you measure progress by how many use cases have thus far been tested adequately.
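Continuing the hypothetical TestChecklist sketch from earlier, a system-testing checklist whose items are use cases rather than test cases might look like this (the use case names and evidence URL are made up):

```python
system_tests = TestChecklist(
    level="system testing",
    items=[
        ChecklistItem("Register a new account"),
        ChecklistItem("Place an order"),
        ChecklistItem("Cancel an order"),
        ChecklistItem("Generate monthly invoice"),
    ],
)

# Checking off an item records the pointer back to the evidence folder.
system_tests.check_off(
    "Place an order",
    evidence_link="https://example.org/test-evidence/release-2.3/place-an-order/",
)

# Progress is reported in the terms the checklist defines: use cases
# adequately tested, not test cases executed.
print(f"System testing: {system_tests.progress():.0%} of use cases covered")
```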
Step 3. In that same workshop, define a testing workflow for each test level so that it is clear who does what, and when. This is where you can discuss how much of each checklist needs to be run for each release, if not all of it (defining thin and thick versions for minor and major releases). You can also discuss how much test automation you can afford at each level (which is an economic consideration far more than a technical one). The testing workflow should include who publishes test results and where, and who evaluates the test process, including the checklists. Ideally each workflow has some element of feedback so that the organization has a chance to continually improve.
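One way to capture the outcome of that discussion is a small, version-controlled workflow definition per level. The roles, result locations, automation shares, and item names below are illustrative assumptions, not part of any standard:

```python
# Illustrative workflow definitions agreed in the workshop. "thick" is the
# full checklist run for major releases; "thin" is the subset for minor ones.
test_workflows = {
    "system testing": {
        "runs_tests": "test team",
        "publishes_results": ("test lead", "https://example.org/test-evidence/"),
        "evaluates_process": "dev and test leads, retrospective each release",
        "automated_share": 0.6,  # economic choice: how much automation we can afford
        "checklist": {
            "thick": ["Register a new account", "Place an order",
                      "Cancel an order", "Generate monthly invoice"],
            "thin": ["Place an order", "Cancel an order"],
        },
    },
    "acceptance testing": {
        "runs_tests": "customer proxy",
        "publishes_results": ("product owner", "https://example.org/uat-results/"),
        "evaluates_process": "product owner and test lead",
        "automated_share": 0.1,
        "checklist": {
            "thick": ["Place an order", "Generate monthly invoice"],
            "thin": ["Place an order"],
        },
    },
}


def items_to_run(level: str, release_size: str) -> list[str]:
    """Pick the thin or thick checklist for a 'minor' or 'major' release."""
    version = "thin" if release_size == "minor" else "thick"
    return test_workflows[level]["checklist"][version]
```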
Step 4. If you have discrete testing phases then you can use a burn-down chart to track test progress. For example, if user acceptance testing takes two weeks for a major release, you can create a burn-down chart showing actual progress against ideal progress, based on the number of checklist items completed versus how many have to be completed in that period of time. You may have to weight the checklist items to make this more accurate, since some items take longer to test than others.
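A minimal sketch of the burn-down arithmetic, assuming a ten-working-day acceptance phase and invented items, weights, and completion days:

```python
# Weighted checklist items: weight is a rough estimate of relative testing effort.
# "done_on_day" is the day of the phase the item was finished (None = not yet).
items = [
    {"name": "Register a new account",   "weight": 1, "done_on_day": 2},
    {"name": "Place an order",           "weight": 3, "done_on_day": 4},
    {"name": "Cancel an order",          "weight": 2, "done_on_day": None},
    {"name": "Generate monthly invoice", "weight": 4, "done_on_day": None},
]

PHASE_LENGTH_DAYS = 10  # two working weeks of user acceptance testing
total_work = sum(item["weight"] for item in items)

for day in range(PHASE_LENGTH_DAYS + 1):
    # Ideal line: the total weight burns down evenly across the phase.
    ideal_remaining = total_work * (1 - day / PHASE_LENGTH_DAYS)
    # Actual line: subtract the weight of every item finished by this day.
    done = sum(item["weight"] for item in items
               if item["done_on_day"] is not None and item["done_on_day"] <= day)
    print(f"day {day:2d}: ideal remaining {ideal_remaining:4.1f}, "
          f"actual remaining {total_work - done:4.1f}")
```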
Step 5. Prepare the team to give and receive feedback without knowing the exact details of the test cases and test scripts that will be run; working from checklists can be an adjustment if you are used to having detailed scripts.
Bottom line: use checklists. Define them as a group and continually work them as a group.