It depends on the environment, but I'd suggest all the following, although it's by no means a complete list:
- Retest the manual steps on the OLD build to make sure they can reliably reproduce the issue
- Retest the steps on the new build, and confirm that the issue no longer occurs
- While doing so, confirm not just that the issue is gone, but that the correct behaviour occurs in its place
- Consider the areas of the product affected by the change, and run some sanity tests on them to confirm nothing has been unduly broken
- Check related areas of the product - the areas the changed code interacts with - and sanity-test that they still work
- If you have regression tests, run them, or choose a relevant subset for execution
- Any automated tests covering the area should also be run (see the sketch just after this list).
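As a minimal sketch of what that last point might look like (assuming a Python/pytest setup; the `myapp.discount` module, the bug number, and the behaviour being tested are all hypothetical stand-ins, not details from the question):

```python
# test_bug_1234.py -- hypothetical regression test for the fixed issue.
# The "sanity" marker should be registered in pytest.ini to avoid warnings.
import pytest

from myapp.discount import apply_discount  # hypothetical fixed function


@pytest.mark.sanity
def test_bug_1234_regression():
    # Mirrors the manual repro steps: a 10% discount on 100.00 used to
    # return 89.99 because of a rounding error in the old build.
    assert apply_discount(100.0, percent=10) == 90.0


@pytest.mark.sanity
def test_no_discount_leaves_price_unchanged():
    # Sanity check on the surrounding area: the normal path still works.
    assert apply_discount(100.0, percent=0) == 100.0
```

Running `pytest -m sanity` then executes only the tagged subset, which is one convenient way to carve a quick sanity pass out of a larger suite.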
If you have SDETs (Software Developers in Test) and it's feasible, they can also peer-review the code for the fix; this depends entirely on your work environment, of course.
If documentation of the fix is required for release notes, the tester should also confirm either that these docs exist and are accurate (if the developer is meant to write them), or write them (if that responsibility falls to the tester).
Similarly, any automated tests built for the fix should first be run against the old build, to confirm they identify the issue correctly and reliably, and then against the new build to confirm they pass, as sketched below.
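One way to make that old-build/new-build comparison mechanical (again just a sketch; the `--build-url` option and the idea of targeting a deployed build by URL are assumptions, not anything from the question) is to let the same suite point at whichever build you're testing:

```python
# conftest.py -- lets the same test suite target either build.
# The option name and default URL are illustrative only.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--build-url",
        default="http://localhost:8080",
        help="Base URL of the build under test (old or new)",
    )


@pytest.fixture
def build_url(request):
    # Tests that exercise the deployed product can depend on this fixture.
    return request.config.getoption("--build-url")
```

Then `pytest --build-url=http://old-build:8080` should fail on the new regression test (proving the test really detects the bug), while the same run against the new build should pass.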
There are a number of other business-dependent things that can be checked - further quality processes such as peer-reviewing the automated test code, or confirming with scenario validation testers that the fix makes sense for the environment it's going into.