I don't think there's a simple cookbook answer for this, because so much depends on your environment. Whatever you come up with, I highly recommend a script-based approach, with the deployment scripts themselves kept in source control. Those scripts will also allow better integration with build solutions (see below).
The simplest such script to run in your production environment is just the command to get the latest version (or a specific version) from source control.
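As a minimal sketch in Python, assuming Git as the source control system (the repository path and ref below are hypothetical placeholders; substitute the equivalent commands for whatever VCS you actually use):

```python
#!/usr/bin/env python3
"""Minimal fetch-and-checkout deployment step -- a sketch assuming Git."""
import subprocess

REPO_DIR = "/srv/myapp"   # hypothetical deployment directory
DEPLOY_REF = "v1.4.2"     # a specific tagged version, or e.g. "origin/main"

def fetch_and_checkout():
    # Bring the local clone up to date, then pin it to the chosen ref.
    subprocess.run(["git", "-C", REPO_DIR, "fetch", "--all", "--tags"], check=True)
    subprocess.run(["git", "-C", REPO_DIR, "checkout", DEPLOY_REF], check=True)

if __name__ == "__main__":
    fetch_and_checkout()
```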
The next challenge is database deployment. The solution I have come to like most for small to medium-sized projects is to maintain a schema version table in each database and keep all DDL and data update scripts in source control (including any data files they need, as compressed archives). The scripts are numbered consecutively (000001..., 000002..., and so on), and the deployment script first backs up the existing database, then reads the number of the last script run from the schema version table, and finally runs any newer scripts found in source control in order, updating the schema version table as it goes.
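Here is a rough sketch of that runner in Python, using sqlite3 so the example stays self-contained; the schema_version table, the db_scripts/ directory, and the .bak backup are all illustrative conventions, not prescribed names:

```python
#!/usr/bin/env python3
"""Sketch of the numbered-script migration runner described above."""
import shutil
import sqlite3
from pathlib import Path

DB_PATH = Path("app.db")
SCRIPT_DIR = Path("db_scripts")  # holds 000001_create_users.sql, 000002_..., etc.

def current_version(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def migrate():
    # 1. Back up the existing database before touching it.
    if DB_PATH.exists():
        shutil.copy2(DB_PATH, str(DB_PATH) + ".bak")

    conn = sqlite3.connect(DB_PATH)
    last_run = current_version(conn)

    # 2. Run every script newer than the recorded version, in order,
    #    recording each one in the schema version table as we go.
    for script in sorted(SCRIPT_DIR.glob("*.sql")):
        number = int(script.name.split("_", 1)[0])  # leading 000001, 000002, ...
        if number <= last_run:
            continue
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (number,))
        conn.commit()
        print(f"applied {script.name}")

if __name__ == "__main__":
    migrate()
```

Zero-padding the script numbers keeps lexicographic and numeric order in agreement, which is why a plain sorted() is enough here.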
This approach also allows me to rebuild a database from scratch pretty quickly.
The two approaches taken together make it possible to quickly deploy your code base to several different staging machines, your QA environment, beta, etc.
For slightly more complex scenarios, you should run a continuous integration build server, as Kieveli et al. suggested, which essentially "rebuilds" your entire deployment regularly and therefore contains scripts to do exactly what you would run "manually" above.
Database deployment can also be made more sophisticated by pairing each database script with a rollback script. You should then write a little controller app to apply them. There are several OSS solutions for this kind of thing, and one of them may fit your needs.
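As a sketch of what such a controller might look like, again in Python against the hypothetical schema_version table from above, with rollback scripts kept in a sibling rollback/ directory (a pairing convention I am inventing here purely for illustration):

```python
#!/usr/bin/env python3
"""Sketch of a rollback controller.

Assumes each forward script 000042_add_orders.sql has a matching
rollback script db_scripts/rollback/000042_add_orders.sql; the
directory layout and table name are illustrative, not a standard.
"""
import sqlite3
from pathlib import Path

DB_PATH = Path("app.db")
ROLLBACK_DIR = Path("db_scripts") / "rollback"

def rollback_to(target_version):
    conn = sqlite3.connect(DB_PATH)
    # Walk the applied versions from newest to oldest until we reach the target.
    rows = conn.execute(
        "SELECT version FROM schema_version WHERE version > ? ORDER BY version DESC",
        (target_version,),
    ).fetchall()
    for (version,) in rows:
        matches = list(ROLLBACK_DIR.glob(f"{version:06d}_*.sql"))
        if not matches:
            raise RuntimeError(f"no rollback script for version {version}")
        conn.executescript(matches[0].read_text())
        conn.execute("DELETE FROM schema_version WHERE version = ?", (version,))
        conn.commit()
        print(f"rolled back {matches[0].name}")

if __name__ == "__main__":
    rollback_to(41)  # undo everything after script 000041
```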
BUT, make sure you never auto-deploy your database to a production environment ;-)