I have a 1.4TB SQL Server database that's struggling massively with disk I/O. We've installed a new SSD array into the server that will solve all of our problems; we're just debating the best way to move the database across. Ideally we'd do it without downtime, but where the choice is between two days of poor performance (e.g. while copying data) and two hours of downtime, the latter might be preferable.
So far, the solutions we've come up with are:
Simple copy. Take the DB offline, copy the files across, change the file locations in SQL Server and bring it back online. Rough estimates suggest this will take up to five hours, which is not really acceptable, but it is the easiest solution.
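For reference, the repointing part of that option would look something like this; MyDB, MyDB_Data, MyDB_Log and the E:\ paths are placeholders for our real names and locations:

```sql
-- Take the database offline (kicking out any open connections):
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Copy the physical files to the SSD array outside SQL Server, then
-- repoint each logical file at its new location:
ALTER DATABASE MyDB MODIFY FILE
    (NAME = MyDB_Data, FILENAME = 'E:\SQLData\MyDB.mdf');
ALTER DATABASE MyDB MODIFY FILE
    (NAME = MyDB_Log, FILENAME = 'E:\SQLData\MyDB_log.ldf');

ALTER DATABASE MyDB SET ONLINE;
```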
Block-level copy. Using an rsync-like utility, we copy the files across in the background while the DB is up. When we're ready to migrate, we take the DB offline, do a final differential copy with the same utility, then point SQL Server at the new files and bring it back online. Timing here is unknown; we don't know how long it will take to do a differential analysis of 1.4TB and copy the changes across. Our other concern is that the block-level copy will leave the files in a state unreadable by SQL Server and we'll have wasted our time.
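If we go this route, we'd presumably also want a full consistency check before trusting the copied files, to catch exactly that unreadable-files scenario; something like the following (MyDB again being a placeholder):

```sql
-- After the final differential copy, repoint the files exactly as in
-- the simple-copy option, bring the database online, then verify it:
ALTER DATABASE MyDB SET ONLINE;
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

Though a full CHECKDB over 1.4TB is itself a long-running job, so it may have to run after the database is back in service.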
SQL migration. Create a new 1.4TB SQL data file on the new disk and disable autogrowth on all of the other files. Then run DBCC SHRINKFILE(file_name, EMPTYFILE) on each of the other data files in turn. Once all of the data is across, I'll take a scheduled window at some point to move the MDF file over to the SSD and remove the now-unused files. I like this because it minimises downtime, but I have no idea how long it will take or whether it will degrade performance while it's running.
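Roughly, the T-SQL for that option would be along these lines; MyDB, Old_Data1, Old_Data2, New_Data and the paths stand in for our actual file layout:

```sql
-- 1. Add the new 1.4TB data file on the SSD, in the same filegroup as
--    the files being emptied (here the default PRIMARY):
ALTER DATABASE MyDB ADD FILE
    (NAME = New_Data, FILENAME = 'E:\SQLData\MyDB_new.ndf', SIZE = 1400GB);

-- 2. Disable autogrowth on the existing files so they can't expand:
ALTER DATABASE MyDB MODIFY FILE (NAME = Old_Data1, FILEGROWTH = 0);
ALTER DATABASE MyDB MODIFY FILE (NAME = Old_Data2, FILEGROWTH = 0);

-- 3. Empty each old file in turn; this migrates its pages to the other
--    files in the filegroup and blocks new allocations to it:
DBCC SHRINKFILE (Old_Data1, EMPTYFILE);
DBCC SHRINKFILE (Old_Data2, EMPTYFILE);

-- 4. Drop the emptied files. The primary MDF can't be removed, which is
--    why it still needs a short offline window to relocate:
ALTER DATABASE MyDB REMOVE FILE Old_Data1;
ALTER DATABASE MyDB REMOVE FILE Old_Data2;
```

One point in its favour: the shrink can be stopped at any time and the work already done is kept, so it could be run in off-peak chunks.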
We don't have any kind of load and performance environment to test this on. I can verify on our staging environment that the strategies will work, but not what their impact or performance will be.