
I need some advice before we engage in some costly development work.

We have two SQL Server database servers on the same network. One runs an in-house database and the other a proprietary database that we bought in.

We want a couple of tables between the two to be synchronised (customer details / contact details).

Now, if I were doing this, I would create a trigger on a table in database A that, when the table is updated, calls a stored procedure that updates a table in database B, and vice versa to keep the records in sync, with an appropriate flag controlling whether the trigger fires.
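
Something along these lines, perhaps (the server, table and column names here are all made up, and the flag is just a guard so the two triggers don't fire each other endlessly):

-- Sketch of the trigger on database A; a mirror-image trigger would sit on B
CREATE TRIGGER dbo.trCustomer_Sync
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
  SET NOCOUNT ON;

  -- Push the changed rows to the other server over a linked server.
  -- SyncFlag = 0 means "this change did not come from the sync itself",
  -- which stops the reciprocal trigger on the other side firing back.
  UPDATE b
     SET b.ContactName = i.ContactName,
         b.Phone       = i.Phone
    FROM ServerB.TheirDb.dbo.Customer AS b
    INNER JOIN inserted AS i
      ON b.CustomerID = i.CustomerID
   WHERE i.SyncFlag = 0;
END;
GO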

The creators of the proprietary database that we need to interface with have proposed creating a network service that picks up changes in our database and extracts the data back to their own. The transfer can only be one way via their service because of 'record locking'. Without making any assumptions, is this really the best way to do things?

Thanks

NB: both databases are running SQL Server 2005.

Thomas Stringer

1 Answer


I strongly recommend they look at transactional replication. I recommend against triggers simply because they rely on the network connection and on the other server being up. There are potentially some other solutions, depending on whether you care about replicating the whole database and whether you can afford a delay in synchronization.
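
For context, the rough shape of setting that up on the publisher looks like this. This is a sketch only: it assumes a distributor is already configured, and the publication, server and database names are made up.

-- Enable the source database for publication
EXEC sp_replicationdboption
     @dbname = N'OurDb', @optname = N'publish', @value = N'true';

-- Create a publication and add the table(s) to sync as articles
EXEC sp_addpublication
     @publication = N'CustomerSync', @status = N'active';

EXEC sp_addarticle
     @publication = N'CustomerSync', @article = N'Customer',
     @source_owner = N'dbo', @source_object = N'Customer';

-- Push the publication to the other server
EXEC sp_addsubscription
     @publication = N'CustomerSync', @subscriber = N'ServerB',
     @destination_db = N'TheirDb', @subscription_type = N'Push';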

If you only need to update the secondary system once a day, then you could set up something very simple: restore a copy of the full database over the top of the existing copy once a night. Note that this will kick out active users.
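
A minimal sketch of that nightly job (database names and paths are assumed; if the file paths differ between servers you would also need WITH MOVE on the restore):

-- On the source server: take a full backup to a shared location
BACKUP DATABASE OurDb
  TO DISK = N'\\FileShare\Backups\OurDb.bak'
  WITH INIT;

-- On the destination server: kick out active users, then restore over the copy
ALTER DATABASE OurDbCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE OurDbCopy
  FROM DISK = N'\\FileShare\Backups\OurDb.bak'
  WITH REPLACE;

ALTER DATABASE OurDbCopy SET MULTI_USER;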

For more flexibility (e.g. to include only certain tables, and to minimize disruption to users) you could write your own methodology, which I've used in a few solutions. First, create two additional schemas in the destination database:

CREATE SCHEMA hold AUTHORIZATION dbo;
GO
CREATE SCHEMA cache AUTHORIZATION dbo;
GO

Then create a copy of each dbo table in the cache schema:

CREATE TABLE cache.TableName
(
  Columns...
);
GO

Don't forget indexes, and be mindful of foreign keys if there are any in the database that point at this table.
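
For example, assuming a hypothetical dbo.Customer table with a handful of columns, the cache copy might look like this:

CREATE TABLE cache.Customer
(
  CustomerID   int           NOT NULL,
  ContactName  nvarchar(100) NOT NULL,
  Phone        varchar(20)   NULL,
  CONSTRAINT PK_cache_Customer PRIMARY KEY (CustomerID)
);
GO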

Now, however often you think this data needs to be refreshed, you can run a job that puts a fresh copy of the source table's data into the "hidden" copy of the table:

TRUNCATE TABLE cache.TableName;

INSERT cache.TableName(columns) EXEC LinkedServer.database.dbo.StoredProcedure;

Where the remote stored procedure just does a standard SELECT columns FROM dbo.TableName; on the source server. Doing it this way allows you to keep the query logic, isolation level and other factors on the source, and out of the distributed query scaffolding.
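
The source-side procedure can be as simple as this (the procedure, table and column names, and the choice of isolation level, are just placeholders):

-- On the source server
CREATE PROCEDURE dbo.Customer_Export
AS
BEGIN
  SET NOCOUNT ON;

  -- The isolation level is decided here, on the source,
  -- not buried in the distributed query on the destination
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

  SELECT CustomerID, ContactName, Phone
    FROM dbo.Customer;
END;
GO

-- Called from the destination, matching the INSERT ... EXEC above:
-- INSERT cache.Customer(CustomerID, ContactName, Phone)
--   EXEC ServerA.OurDb.dbo.Customer_Export;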

Now, the magic sauce: we can use a fast, metadata-only operation that should result in minimal blocking and disruption to end users. This is basically playing switcheroo with the active data.

BEGIN TRANSACTION;
  ALTER SCHEMA hold  TRANSFER dbo.TableName;
  ALTER SCHEMA dbo   TRANSFER cache.TableName;
  ALTER SCHEMA cache TRANSFER hold.TableName;
COMMIT TRANSACTION;

After this you can optionally truncate the cache copy; however, I always kept it until the next switcheroo for troubleshooting purposes (and easy rollback).

The nice thing about this approach is that it is very easy to run as frequently or infrequently as you like, and since 99% of the work is being done against tables the application and users aren't looking at, there is very little impact on end users.

Aaron Bertrand