You may be running into a scenario where the (perhaps default) sampling percentage isn't high enough for the stats to remain useful between updates. You can quickly check what sample rates were used during the last UPDATE STATISTICS run via the following query:
SELECT OBJECT_SCHEMA_NAME(st.object_id) + '.' + OBJECT_NAME(st.object_id) AS TableName
, col.name AS ColumnName
, st.name AS StatsName
, sp.last_updated
, sp.rows_sampled
, sp.rows
, (1.0*sp.rows_sampled)/NULLIF(1.0*sp.rows, 0) AS sample_pct -- NULLIF guards against divide-by-zero on empty tables
FROM sys.stats st
INNER JOIN sys.stats_columns st_col
ON st.object_id = st_col.object_id
AND st.stats_id = st_col.stats_id
INNER JOIN sys.columns col
ON st_col.object_id = col.object_id
AND st_col.column_id = col.column_id
CROSS APPLY sys.dm_db_stats_properties (st.object_id, st.stats_id) sp
--WHERE OBJECT_SCHEMA_NAME(st.object_id) + '.' + OBJECT_NAME(st.object_id) = 'dbo.Mytable' -- <-- uncomment to filter for a specific table
ORDER BY 1, 2
If you filter for the tables causing your issues and see low sample rates, you may want to experiment with increasing the sample rate to see whether the stats stay relevant for longer. You may even find a FULLSCAN is necessary. Here are a couple of examples of increasing the sample rate:
-- sample 20 percent of the table's rows when building the stat
UPDATE STATISTICS dbo.MyTable myTableStatName WITH SAMPLE 20 PERCENT
-- or --
-- read all of the rows when building the stat
UPDATE STATISTICS dbo.MyTable myTableStatName WITH FULLSCAN
Of note, if you do find that increasing the sample rate improves the lifespan of your stats AND you're on SQL Server 2016 SP1 CU4 (or SQL Server 2017 CU1) or later, a new PERSIST_SAMPLE_PERCENT option was added to the UPDATE STATISTICS command. This option forces any auto-update statistics runs against that statistic to reuse whatever sample percentage you manually specified. Without it, the default sampling rate is used during auto-update statistics runs, which can become a major headache if you require a higher sampling percentage.
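For example, reusing the placeholder table and statistic names from above, you can persist a manual sample rate like so:

-- keep using this 20 percent sample rate on subsequent auto-update statistics runs
UPDATE STATISTICS dbo.MyTable myTableStatName WITH SAMPLE 20 PERCENT, PERSIST_SAMPLE_PERCENT = ON

After running this, the sample_pct column from the query above should stay roughly stable across auto-updates instead of dropping back to the default rate.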
The Tiger Team released a nice blog post on the subject which I recommend you check out if you want some more in-depth information on this new keyword.
Thanks to @RandiVertongen's comment: as outlined in Erin Stellato's blog post, it will likely make more sense to try a FULLSCAN first in your initial testing, before settling on a larger SAMPLE percentage.