There are two purposes for posting this question:

  1. Finding a process to determine the best possible values for setting the MAX and MIN memory of SQL Server on bare metal.

Note: I know the standard formula we use for setting MIN and MAX memory for SQL Server hosted on a VM or physical server, so please don't close this question thinking I am asking how to set MIN and MAX memory. I am asking about the intricacies that come with hosting multiple instances on a bare-metal host.

  2. Understanding exactly how SQL Server memory will behave in normal and memory-pressure situations at the bare-metal host level.

Note: We are enabling Lock Pages in Memory for each SQL Server instance.

Any links to whitepapers or articles with more information about dynamic memory allocation on bare metal, and how SQL Server is impacted by it, would really help.

NOTE: Hyper-V is used on top of the bare-metal server. So basically we have one beefy bare-metal host, and with the help of Hyper-V we are going to manage it and deploy 10 VMs on top of it. We will over-allocate resources (as told by the infra team). For example, the D: drive of the host will be mapped to the D: drive of all 10 VMs, and they will all share the IOPS of the host's D: drive. The same goes for memory: all the VMs will be sharing the host's memory.

I hope this info helps.

1 Answer


the intricacies that comes with hosting multiple instances on a bare metal host

The short answer is don't do it. In the long run there's nothing to be gained from hosting multiple instances on a bare-metal server. Just use a hypervisor (Hyper-V is free) and decouple your SQL Server instances from any particular host.

There are edge cases where running on bare metal makes sense for scale-up scenarios, but if you're hosting multiple instances, you should use virtualization.

The slightly-longer answer is to affinitize each instance to a subset of the NUMA nodes (preferably one), and set min server memory for each instance to the tested minimum for production operation. A sketch of both settings follows.
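A minimal T-SQL sketch of both settings, run per instance. The node number (0) and the 8192 MB minimum are placeholders; use whatever your hardware and testing dictate.

```sql
-- Pin this instance to a single NUMA node (node 0 here).
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0;
GO

-- min/max server memory are advanced options, so expose them first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Set min server memory to the tested production minimum (8 GB here).
EXEC sys.sp_configure N'min server memory (MB)', 8192;
RECONFIGURE;
GO
```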

Then, either:

1) Don't set max server memory and allow the instances to contend for memory. Be aware that this will not end up with an optimal allocation across the instances, so you should periodically trim the memory use of each instance by setting max server memory, waiting for the instance to trim its caches, and then restoring the setting (sketched after these options). After observing this for some time, you may want to switch to setting max server memory for some or all of the instances.

or

2) Set max server memory for all instances explicitly, monitor the Page Life Expectancy (PLE) of all instances, and move memory from instances with high PLE to instances with low PLE (a PLE query is sketched below). Also keep in mind that it's optimal to keep the combined memory allocation of the instances running on a NUMA node under the amount of memory available on that node, so you may want to move instances between NUMA nodes as you change the memory allocations.
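For option 1, the periodic trim might look like the following sketch. The 4096 MB cap and the five-minute wait are placeholder values, and 2147483647 MB is SQL Server's default (effectively unlimited) max server memory:

```sql
-- Temporarily cap the instance so it releases cache memory.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 4096;
RECONFIGURE;

-- Give the instance time to trim its caches down toward the cap.
WAITFOR DELAY '00:05:00';

-- Restore the default, effectively unlimited, setting.
EXEC sys.sp_configure N'max server memory (MB)', 2147483647;
RECONFIGURE;
```

For option 2, PLE is exposed through sys.dm_os_performance_counters. A query like this, run on each instance, returns the overall Buffer Manager value plus the per-NUMA-node Buffer Node values; an instance with persistently low PLE relative to the others is a candidate for a larger max server memory:

```sql
-- Page Life Expectancy, overall and per NUMA node.
SELECT [object_name],
       counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';
```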

David Browne - Microsoft