I have tried both RedGate and Visual Studio's database projects, and I prefer storing the database definition in a database project.
As soon as the database becomes part of the solution, you can use your preferred source control provider. Most have excellent Visual Studio integration.
With the SSDT tools you have the 'latest version' of the database definition, allowing you to easily make schema comparisons and generate schema upgrade scripts.
That said, the schema is usually only part of the equation. In real life, databases already contain a lot of data, and my users tend to get rather disappointed when they lose it.
So as soon as I rolled out v1.0, the need arose to maintain upgrade scripts. Sometimes these contain only schema changes, but often I need to create defaults based on the content of some other table, temporarily drop a constraint until the data is seeded, and so on. Simply upgrading the schema usually doesn't quite cut it. My preference is to keep these upgrade scripts in a separate folder in the database project too; they would typically be named along the lines of 'upgrade from v1.0 to v1.1'.
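To make this concrete, here is a rough sketch of what such an 'upgrade from v1.0 to v1.1' script might look like. All table, column and constraint names here are made up for illustration; the point is the pattern of dropping a constraint, seeding data from another table, and restoring the constraint:

```sql
-- Hypothetical v1.0 -> v1.1 upgrade (names are examples, not a real schema).

-- Temporarily drop the constraint so existing rows can be backfilled.
ALTER TABLE dbo.Orders DROP CONSTRAINT FK_Orders_Region;

-- Add the new column and seed its value based on another table.
ALTER TABLE dbo.Orders ADD RegionId INT NULL;

UPDATE o
SET    o.RegionId = r.RegionId
FROM   dbo.Orders o
JOIN   dbo.Regions r ON r.CountryCode = o.CountryCode;

-- Restore the constraint now that the data is in place.
ALTER TABLE dbo.Orders
ADD CONSTRAINT FK_Orders_Region
    FOREIGN KEY (RegionId) REFERENCES dbo.Regions (RegionId);
```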
My databases always have a reference table that tells me the current version number, so I can block incompatible upgrades. The first statement in my upgrade scripts checks the current version and bails out if it's different from what's expected.
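A minimal sketch of that guard, assuming a hypothetical single-row version table called dbo.DatabaseVersion:

```sql
-- Guard at the top of the v1.0 -> v1.1 upgrade script.
IF NOT EXISTS (SELECT 1 FROM dbo.DatabaseVersion WHERE VersionNumber = '1.0')
BEGIN
    RAISERROR ('Expected database version 1.0; aborting upgrade.', 16, 1);
    -- RETURN only exits the current batch; if the script has multiple
    -- GO-separated batches, SET NOEXEC ON is the usual way to skip the rest.
    SET NOEXEC ON;
END

-- ... upgrade statements ...

UPDATE dbo.DatabaseVersion SET VersionNumber = '1.1';
SET NOEXEC OFF;
```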
Another benefit of database projects is being able to deploy different sets of data based on the same schema. I have different datasets for development, the QA team, user acceptance testing and automated integration tests. Since a database project can have only one post-deployment script, the trick here is to create a new database project that references the 'master' project and make the custom dataset part of the post-deployment process of that project.
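The post-deployment script in each environment-specific project can then pull in the relevant seed scripts. SSDT post-deployment scripts run in SQLCMD mode, so they can include other files with the `:r` directive; the paths and file names below are examples:

```sql
-- Script.PostDeployment.sql in the QA-specific project.
-- :r includes another script file at deployment time (SQLCMD syntax).
:r .\SeedData\CommonReferenceData.sql
:r .\SeedData\QaTestAccounts.sql
```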
Those were my 2 cents. Whatever process you come up with, above all it must fit you and your team, and hopefully support you with most of the common tasks.