I have a production box with 5 instances on it.

Version: SQL Server 2014 SP3 Enterprise.

I discovered that even though the machine has ~400GB of memory assigned, the combined max server memory settings across all 5 instances add up to less than 200GB.

The instances don't suffer from memory pressure, but since we already have that memory assigned, it's a waste not to use it.

I would like to increase it to a higher value, leaving 10% for the OS (some instances will be assigned more than others).
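For reference, this is roughly how I'm checking the current setting on each instance (value_in_use is the running value, value is the configured value):

    -- Check the configured and running max server memory on this instance.
    SELECT name, value, value_in_use
    FROM sys.configurations
    WHERE name = N'max server memory (MB)';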

However, I've never made such a large increase to this setting before.

I know it's a dynamic setting which doesn't require a restart; however, I'm wondering about the following:

  • Should I make the increase in 2 phases (i.e. increase by half the amount, wait a week, and then increase it again)?
  • Can something suddenly break (or performance degrade) if SQL Server suddenly has so much more memory to play with?

We don't have the lock pages in memory setting enabled, and we don't have trace flag 834 on.

Roni Vered
2 Answers


I have a slightly different opinion.

If you are not experiencing memory pressure and your application is working just fine, why introduce an unknown variable (in your case, increasing SQL Server max memory)? You have not tested this change, so it's a risk.

If you have spare capacity on your server, you can look into consolidating more instances onto it.

Can something suddenly break (or performance degrade) if SQL Server suddenly has so much more memory to play with?

Yes. I faced a performance degradation issue in production after the server got more memory, and using TF 2335 fixed the issue.

You can read Paul's answer for more details, but the gist is:

The potential size of the buffer pool also affects the optimizer's cost model for data access.
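For completeness, this is roughly how TF 2335 could be enabled if you do raise max memory and hit a plan regression like the one Paul describes, either globally at runtime or scoped to a single query. Test it first; dbo.Orders below is just a placeholder table, and the per-query QUERYTRACEON hint requires sysadmin unless wrapped in a plan guide:

    -- Enable TF 2335 globally at runtime (lost on restart; add -T2335
    -- as a startup parameter to make it persistent).
    DBCC TRACEON (2335, -1);

    -- Or scope it to one problem query (dbo.Orders is a placeholder):
    SELECT OrderID
    FROM dbo.Orders
    OPTION (QUERYTRACEON 2335);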

Kin Shah

You can change SQL Server max server memory at any time, but when you do so, SQL Server's plan, data, and other caches will be flushed. This means new queries hitting SQL Server will go through the process of query plan creation again, which can sometimes be a deciding factor. So if you really want to do it, inform stakeholders that you want to change max server memory and make the change when the load is relatively low. Also let people know that, due to the cold cache, a few queries might take longer to execute on their first run.
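For reference, the change itself is a single sp_configure call per instance; the 163840 MB (160GB) value below is only an example, not a recommendation:

    -- 'max server memory (MB)' is an advanced option.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Example value only: set max server memory to 160GB (163840 MB).
    EXEC sys.sp_configure 'max server memory (MB)', 163840;
    RECONFIGURE;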

Should I make the increase in 2 phases (i.e. increase by half the amount, wait a week, and then increase it again)?

Just do it when you get approval and when the load is relatively low; there is no need to do it in phases.

Can something suddenly break (or performance degrade) if SQL Server suddenly has so much more memory to play with?

Things will not break, but since the cache is flushed, a few queries may take longer to execute on their first run.

We don't have the lock pages in memory setting enabled,

I suggest you do that, especially since you have 5 instances running. No instance should be allowed, due to some rogue query/process, to hog all the memory, so it is better to set a max limit on all SQL Server instances. Also note that in some cases SQL Server can consume more than the max limit. Enabling LPIM requires a SQL Server restart.
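LPIM is granted through Local Security Policy (the "Lock pages in memory" user right for the service account) and only takes effect after the restart. One way to confirm it is active afterwards:

    -- If LPIM is in effect, locked page allocations will be non-zero.
    SELECT locked_page_allocations_kb
    FROM sys.dm_os_process_memory;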

Ideal situation: when you plan to add LPIM, change max server memory at the same time. Since you are not facing any issue, you have time to plan this.

Shanky