
My company is moving our infrastructure to a new data center, and I am trying to figure out the best way to keep the database on the new server in sync with the current production database until the new environments are ready to go live. Not being a full-time DBA, I did some research, and from what I have read it looks like a transactional replication setup would best suit our needs.

Some details: the production DB is around 90 GB in size, and using Robocopy it took about 9 hours to move a copy of it to one of the new servers. The current production database will need to stay online and accessible throughout the entire migration process. It's in simple recovery, so database mirroring is unavailable.

Is transactional replication the best method to keep the databases in sync?

My plan:

  1. (Done) Transfer the current database and log to the new server and attach it to the new instance of SQL Server
  2. Set up a distributor on our development database machine and publish to it from the production database
  3. Create a subscriber on the new database machine which will accept updates pushed out from the distributor once each night

There are two things on my mind. First, transactional replication requires that each published table have a primary key, and a lot of the tables in the production database do not have primary keys defined. I don't think this will be too big of an issue since my main concern is just getting the databases in sync - we'll test the different applications that use the database at a later date - but I'd like to make sure it's not a serious problem. Secondly, do I also need to move any of the associated system DBs from the original instance, such as master? We're moving to an Active Directory setup in the new environment, so I don't care about the users and such, but I am unsure about the necessity of the system DBs.
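For reference, this is roughly the query I've been using to find the tables without primary keys (a sketch - run it in the production database and adjust as needed):

```sql
-- List user tables that have no primary key constraint defined
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0
  AND NOT EXISTS (
      SELECT 1
      FROM sys.key_constraints AS kc
      WHERE kc.parent_object_id = t.object_id
        AND kc.type = 'PK'   -- primary key constraints only
  )
ORDER BY s.name, t.name;
```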

And just in general, am I grasping these concepts correctly?

Paul White
Snake_Plissken

2 Answers


Your current situation:

  • 90 GB database that you want to move to a new data center.
  • Database is in simple recovery.
  • You are thinking of using T-Rep to keep the data in sync.
  • There are many tables in the database that do not have a primary key - this is a requirement for any table to be published.
  • You already have a backup copied to the destination data-center.

I see several drawbacks in your approach:

  • T-Rep is not easy to set up compared to other technologies, and any schema change requires a new snapshot.
  • If you go with T-Rep, you have to make schema changes - add primary keys to the tables that don't have them.
  • Any change to your existing database means your applications have to be fully tested to avoid unexpected behavior.
  • If your database has a high transaction volume, and depending on the network bandwidth between the two data centers, there will be replication latency as well.

Based on my experience, here is my recommendation:

  • Change your database to full recovery. This is a much smaller impact than creating primary keys.
  • Ship a full backup of the source database - take the backup with compression and enable instant file initialization on the destination. This will help cut down the restore time in the destination data center.
  • Restore the database on the destination server WITH NORECOVERY.
  • Implement log shipping and initialize it from that backup.
  • Have log shipping ship the logs every minute.
  • On the day of failover, take a final tail-log backup on the source and restore it on the destination WITH RECOVERY. This will bring the destination database online.
  • Once the destination database is online, you have to sync up the users.
  • You should then change your web.config to point to the new server.
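The backup, restore, and cutover steps above can be sketched in T-SQL; the database name and backup paths here are illustrative placeholders, not your actual values:

```sql
-- On the source server: switch to full recovery, then take a compressed full backup
ALTER DATABASE ProdDB SET RECOVERY FULL;
BACKUP DATABASE ProdDB
    TO DISK = N'\\NewDC\Backups\ProdDB_full.bak'
    WITH COMPRESSION, CHECKSUM;

-- On the destination server: restore, leaving the database ready for log restores
RESTORE DATABASE ProdDB
    FROM DISK = N'\\NewDC\Backups\ProdDB_full.bak'
    WITH NORECOVERY;

-- ... log shipping then applies transaction log backups every minute ...

-- Cutover day, on the source: tail-log backup
-- (NORECOVERY leaves the source in a restoring state, so no new transactions are lost)
BACKUP LOG ProdDB
    TO DISK = N'\\NewDC\Backups\ProdDB_tail.trn'
    WITH NORECOVERY;

-- On the destination: apply the tail and bring the database online
RESTORE LOG ProdDB
    FROM DISK = N'\\NewDC\Backups\ProdDB_tail.trn'
    WITH RECOVERY;
```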

You do not have to move any system databases. Make sure you do all the prep work beforehand - script out logins, jobs, SSIS packages, etc. and create them on the destination server.
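To fix up orphaned users on the destination after the restore, something along these lines works (the user/login name is a placeholder):

```sql
-- On the destination, after restore: report database users not mapped to a server login
EXEC sp_change_users_login 'Report';

-- Re-map an orphaned database user to the server login of the same name
ALTER USER [AppUser] WITH LOGIN = [AppUser];
```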

Refer to this answer for post-restore steps and other best practices.

Note: You can implement database mirroring (SYNC or ASYNC, depending on the edition of SQL Server you are using) as well, but log shipping is just simpler to implement, and if you test it, it won't disappoint you. I have successfully moved terabyte-sized databases from one data center to another using the above technique, and it works perfectly fine.

"The current production database will need to stay online and accessible throughout the entire migration process."

There will always be some downtime, and you have to schedule it. Even if you invest a large amount of money in solutions like clustering, there will still be some downtime when you fail over. You have to balance how much your company can invest in achieving near-zero downtime against what is achievable with some acceptable downtime.

Kin Shah

I haven't had a chance to use this myself, and I think it's still in preview, but SQL Data Sync on the Azure platform could be a potential option:

https://azure.microsoft.com/en-gb/documentation/articles/sql-database-get-started-sql-data-sync/

It works with on-premises as well as Azure SQL databases and seems like a useful tool for keeping databases in sync.

steoleary