This is best handled as two separate subjects. On the one hand, you want a solid and consistent database schema (tables, indexes, views, procedures, functions, plus lookup values and any non-changing "static" data required by your system), and you want version control over that so you can track what changes over time (and by whom) and also control when the changes get applied to which database instances. Prior posts to this question have covered this subject well.
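Just to give a flavor of that side of things (the other answers cover real tooling; this is only a minimal sketch, and the folder name, table name, and use of SQLite are my own assumptions for brevity): the core idea is a set of numbered, source-controlled scripts plus a record in each database of which scripts it has already received.

```python
# Minimal sketch: apply numbered .sql migration scripts in order, recording
# each one in a schema_version table so every database instance knows
# exactly which changes it has already received.
# The migrations/ folder, table name, and SQLite connection are illustrative only.
import sqlite3
from pathlib import Path

def apply_migrations(conn, migrations_dir="migrations"):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(script TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        if script.name in applied:
            continue  # this instance already has this change
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (script) VALUES (?)", (script.name,))
        conn.commit()

if __name__ == "__main__":
    apply_migrations(sqlite3.connect("dev.db"))
```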
On the other hand, you will need the database populated with data against which you can test and develop new code. Defining and loading such data is not the same as defining and loading the structures that will hold it. While managing database definitions via source control can be a readily solved problem, over many years I have never heard of an equally simple (well, relatively simple) solution for the data problem. Aspects of the problem include:
Make sure there's enough data. Adding 10-20 rows per table is easy, but you can't possibly predict performance if your live databases will contain millions of rows or more.
A quick and easy solution is to get a copy of the latest Production database, update it with the recent changes, and off you go. This can be tricky if the development environment doesn't have a SAN on which to host a copy of the multiple TB of Production data you're supporting.
Similarly, the SOX and/or HIPAA auditors might not want extra copies of potentially confidential data sitting on not-so-secure development servers (in front of not-so-secure developers--we are a shifty bunch, after all). You might need to scramble or randomize sensitive data before making it available to developers... which implies an interim "scrambler" process to sanitize the data -- see the sketch a bit further down. (Perhaps another SAN for all those TB?)
In some situations, it'd be ideal for some department or other to provide you with a correct, coherent, and coordinated set of data to do development against -- something they make up to cover all likely situations, and that they could use for testing on their side (knowing what goes in, they know what should come out, and can check for it). Of course the effort to create such a data set is substantial, and convincing non-IT groups to provide one may be politically impossible. But it's a nice dream.
And of course the data changes. After you've worked the copy over in development for a week, a month, a quarter, eventually and inevitably you will discover that the Production data doesn't "look" like that any more -- usage patterns will have changed, averages of significant values will drift, all your dates will be old and irrelevant... whatever, you'll need to get fresh data all over again.
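On that scrambler step mentioned above, here's roughly what I mean (only a sketch -- the table, column names, hashing approach, and SQLite connection are all assumptions on my part, not a recommendation of any particular tool):

```python
# Rough sketch of a sanitizing "scrambler" pass: overwrite sensitive columns
# with deterministic but meaningless values before the copy ever reaches a
# development server. Table and column names are made up for illustration.
import hashlib
import sqlite3

def scramble_customers(conn):
    rows = conn.execute("SELECT customer_id FROM customers").fetchall()
    for (cid,) in rows:
        # Derive a stable token from the key so re-runs produce the same fakes.
        token = hashlib.sha256(str(cid).encode()).hexdigest()[:8]
        conn.execute(
            "UPDATE customers SET full_name = ?, email = ?, ssn = NULL "
            "WHERE customer_id = ?",
            (f"Customer {token}", f"user{token}@example.com", cid),
        )
    conn.commit()

if __name__ == "__main__":
    scramble_customers(sqlite3.connect("production_copy.db"))
```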
It's an ugly problem with no simple solution that I've ever heard of. One possibility that could help: I recall reading articles about products that can be used to "stuff" a database with made-up yet statistically relevant data. You specify things like "10,000 rows in this table, this column is an identity primary key, this tinyint ranges from 1-10 with equal distribution, this varchar ranges from 6 to 30 characters with maybe 2% duplicates", and so forth. Something like this might be invaluable, but it all depends upon the circumstances in which you find yourself.
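To make that concrete, here's a toy sketch of what such a "data stuffer" might look like for the spec above (the table name, spec shape, and use of SQLite are all my own assumptions; the real products in this space are far more capable):

```python
# Toy sketch of a "data stuffer": generate statistically plausible rows
# matching a simple column spec. Everything here is invented for illustration.
import random
import sqlite3
import string

def random_varchar(min_len, max_len, pool, dup_rate):
    # Reuse an earlier value about dup_rate of the time to simulate duplicates.
    if pool and random.random() < dup_rate:
        return random.choice(pool)
    value = "".join(
        random.choices(string.ascii_lowercase, k=random.randint(min_len, max_len))
    )
    pool.append(value)
    return value

def stuff_table(conn, row_count=10_000):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sample "
        "(id INTEGER PRIMARY KEY, rating INTEGER, label TEXT)"
    )
    seen = []
    rows = [
        (None,                               # identity-style primary key
         random.randint(1, 10),              # "tinyint", roughly uniform 1-10
         random_varchar(6, 30, seen, 0.02))  # varchar 6-30 chars, ~2% duplicates
        for _ in range(row_count)
    ]
    conn.executemany("INSERT INTO sample VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    stuff_table(sqlite3.connect("stuffed.db"))
```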