Caveat: how big is the data load? There is quite a difference between 20G of data with 200G of empty space (in which case I would suggest a shrink) and 2000G of data with 200G of empty space!
The drive is not in danger of filling up
Space is not needed else where on the SAN
In that case I would err on the side of leaving it as it is, especially if normal growth will consume that extra space in a reasonable amount of time. If a fair chunk of the unused space is at the end of the data file(s), then a truncate might be worth doing: it releases the empty space at the end of the file without moving any pages around.
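A truncate-only shrink can be done like this (the logical file name `MyDB_Data` is illustrative; substitute your own from `sys.database_files`):

```sql
-- Release only the unallocated space at the end of the file.
-- No pages are moved, so this does not fragment the indexes
-- the way a full DBCC SHRINKFILE / SHRINKDATABASE does.
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);
```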
Backup times have doubled in duration (both full and differential)
As the backup process is bright enough not to bother reading and writing pages that are currently unused, this suggests that your data consists of a lot of partly used pages rather than a lot of completely empty ones. Your backup file sizes have presumably grown by a similar amount too. Running an index reorganise might be a better way of dealing with this than a database shrink: after the reorganise you are likely to find you have more free space in the data files, because many of the partly used pages will have been merged.
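You can check how full your pages actually are before deciding. A sketch using `sys.dm_db_index_physical_stats` (note that `avg_page_space_used_in_percent` is only populated in `SAMPLED` or `DETAILED` mode):

```sql
-- Page density per index in the current database:
-- low avg_page_space_used_in_percent = lots of partly used pages.
SELECT
    OBJECT_NAME(ips.[object_id]) AS [TableName],
    i.[name] AS [IndexName],
    ips.[avg_page_space_used_in_percent],
    ips.[page_count]
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.[object_id] = ips.[object_id] AND i.[index_id] = ips.[index_id]
ORDER BY ips.[avg_page_space_used_in_percent] ASC;
```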
Having a lot of partly used pages is more than a disk space issue: it makes your use of RAM less efficient too, because data is held in memory in the same page structure rather than being repacked. If all your pages were at most 50% used then you are potentially using twice as much memory as you otherwise could (or, if RAM is a limitation in your environment, you could be hitting disk a lot more often because you can only hold half as much in memory). This means running a reorganise over your indexes could affect application performance significantly, so it is probably a pretty good idea in that case.
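A minimal sketch of the reorganise itself (the table name `dbo.MyLargeTable` is illustrative):

```sql
-- Reorganise (compact) every index on one table. REORGANIZE is an
-- online operation and can be stopped part-way without losing the
-- work already done; a REBUILD compacts more aggressively but is
-- a heavier, logged operation.
ALTER INDEX ALL ON dbo.MyLargeTable REORGANIZE;
```

In practice you would loop over the tables/indexes flagged as low-density rather than reorganising everything blindly.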
You can get an indication of how efficiently your memory is being used by running:
```sql
SELECT
    (CASE WHEN ([database_id] = 32767)
        THEN N'Resource Database'
        ELSE DB_NAME ([database_id]) END) AS [DatabaseName],
    COUNT (*) * 8 / 1024 AS [MBUsed],
    SUM (CAST ([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [MBEmpty]
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id]
HAVING COUNT (*) > 1024;
```
(taken from http://www.sqlskills.com/blogs/paul/performance-issues-from-wasted-buffer-pool-memory/, which is worth a read for more detail on the above)
Just to be clear: this is a case for index reorganisation, not a case for a database shrink.
Our DR SAN replication has slowed slightly.
Again this is likely due to having many partly used pages: updates will be more scattered than if your data were more compact, so more disk blocks get altered by the normal application load.
Remember that after a large operation like a shrink (or a significant set of index reorganisations/rebuilds) your next SAN replica will be considerably larger than normal because a lot of blocks on disk will have changed. The same for the next differential backup too.
Other than backup times increasing, are there ANY negative effects from a significant amount of unused space in a data file?
If the space is mainly in the form of completely unused pages, then no (and the backup times shouldn't be increased, though restore times may be, as a restore has to write out the full file size). If the space is mainly spread throughout partly used pages, then yes, as discussed above.
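To see how much of the free space is completely unallocated file space (the kind a truncate or shrink could release), you can check at the file level. Note that `sys.database_files` reports sizes in 8 KB pages, hence the division by 128 to get MB:

```sql
-- Allocated vs unallocated space per data file in the current database.
SELECT
    [name] AS [FileName],
    [size] / 128 AS [FileSizeMB],
    FILEPROPERTY ([name], 'SpaceUsed') / 128 AS [SpaceUsedMB],
    ([size] - FILEPROPERTY ([name], 'SpaceUsed')) / 128 AS [FreeMB]
FROM sys.database_files
WHERE [type_desc] = 'ROWS';
```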