I strongly recommend looking at transactional replication first. I recommend against triggers simply because they make every transaction on the source dependent on the network connection and on the other server being up. There are some other potential solutions, depending on whether you need to replicate the whole database and whether you can afford a delay in synchronization.
If you only need to update the secondary system once a day, you could set up something like very simple log shipping: just restore a full backup of the database over the top of the existing copy once a night. Be aware that this will kick out active users.
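As a minimal sketch of that nightly overwrite (the backup path and the SecondaryDB name are placeholders; run it from master so the restore can take exclusive access):
ALTER DATABASE SecondaryDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; -- kicks out active users
RESTORE DATABASE SecondaryDB
FROM DISK = N'\\backupshare\SecondaryDB_full.bak' -- hypothetical path
WITH REPLACE;
ALTER DATABASE SecondaryDB SET MULTI_USER;
GO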
For more flexibility (e.g. to include only certain tables, and to minimize disruption to users), you could write your own methodology, which I've done in a few solutions. First, create two additional schemas in the destination database:
CREATE SCHEMA hold AUTHORIZATION dbo;
GO
CREATE SCHEMA cache AUTHORIZATION dbo;
GO
Then create a copy of each dbo table in the cache schema:
CREATE TABLE cache.TableName
(
Columns...
);
GO
Don't forget indexes, but be mindful of foreign keys if any tables in the database point at this one (TRUNCATE, for example, will fail against a table that is referenced by a foreign key).
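Purely for illustration, suppose the source table were a hypothetical dbo.Customers; the cache copy just mirrors its columns and keys (a distinct constraint name keeps the two copies easy to tell apart after the swap):
CREATE TABLE cache.Customers
(
CustomerID int NOT NULL,
Name nvarchar(128) NOT NULL,
ModifiedDate datetime2 NOT NULL,
CONSTRAINT PK_cache_Customers PRIMARY KEY (CustomerID)
);
GO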
Now, however often you think the data needs to be refreshed, you can run a job that puts a fresh copy of the source table's data into the "hidden" copy of the table:
TRUNCATE TABLE cache.TableName;
INSERT cache.TableName(columns) EXEC LinkedServer.database.dbo.StoredProcedure;
Here the remote stored procedure just does a standard SELECT columns FROM dbo.TableName; on the source server. Doing it this way lets you keep the query logic, isolation level, and other factors on the source and out of the distributed query scaffolding.
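On the source server, that procedure might look something like this sketch, continuing the hypothetical Customers example (use whatever isolation level suits the source workload):
CREATE PROCEDURE dbo.Customers_Export -- hypothetical name
AS
BEGIN
SET NOCOUNT ON; -- commonly recommended for procedures consumed via INSERT ... EXEC
SELECT CustomerID, Name, ModifiedDate
FROM dbo.Customers;
END
GO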
Now for the magic sauce: a fast, metadata-only operation that should cause minimal blocking and disruption to end users. This is basically playing switcheroo with the active data.
BEGIN TRANSACTION;
-- move the live table out of the way
ALTER SCHEMA hold TRANSFER dbo.TableName;
-- promote the freshly loaded copy into its place
ALTER SCHEMA dbo TRANSFER cache.TableName;
-- park the old live copy until the next refresh
ALTER SCHEMA cache TRANSFER hold.TableName;
COMMIT TRANSACTION;
After this you can optionally truncate the cache copy; however, I always kept it until the next switcheroo for troubleshooting purposes (and easy rollback).
The nice thing about this approach is that it's very easy to run as frequently or infrequently as you like, and since 99% of the work is done against tables the application and its users aren't looking at, there is very little impact on end users.
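Putting the pieces together for the hypothetical Customers example, a scheduled refresh job step might run something like this (LinkedServer and SourceDB are placeholder names):
SET XACT_ABORT ON; -- any runtime error rolls the swap back as a unit
TRUNCATE TABLE cache.Customers;
INSERT cache.Customers (CustomerID, Name, ModifiedDate)
EXEC LinkedServer.SourceDB.dbo.Customers_Export;
BEGIN TRANSACTION;
ALTER SCHEMA hold TRANSFER dbo.Customers;
ALTER SCHEMA dbo TRANSFER cache.Customers;
ALTER SCHEMA cache TRANSFER hold.Customers;
COMMIT TRANSACTION;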