
Assuming constant memory (32 GB) and CPU (4 cores), and 2 disk arrays, I have the following disks:

  • 2 x 150 GB (10k RPM)
  • 6 x 150 GB (15k RPM)

They are all local disks.

My requirements

  • My DB is 350 GB and set to the default 10% growth
  • My OS and SQL Server run on Windows Server 2008 R2 (C: drive with OS + page file + applications = 55 GB)
  • Log requirements are about 70 GB, set to the default 10% growth; the log is routinely truncated
  • My TempDB is currently about 12 GB and set to the default 10% growth (settings confirmed with the query below)
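
For context, a rough sketch of how I'm confirming the current sizes and growth settings (the user database name is a placeholder):

    -- Sketch: current file sizes and growth settings.
    -- sys.master_files reports size/growth in 8 KB pages;
    -- growth is a percentage when is_percent_growth = 1.
    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_file_name,
           type_desc,           -- ROWS or LOG
           size * 8 / 1024      AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END                  AS growth_setting
    FROM sys.master_files
    WHERE database_id IN (DB_ID('tempdb'), DB_ID('MyDatabase')); -- MyDatabase = placeholder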

My problem is that I'm trying to understand where best to place the TempDB, the OS, and the log. My experience with optimal configuration of these is limited.

This is not an online transactional system. It has heavy data writes (new data + index rebuild/reorg) followed by heavy data reads (I'm estimating about 50/50), processing for about 13 hours, and then it goes quiet.
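
The index maintenance itself is the usual rebuild/reorganize cycle, roughly like this (table and index names are made up for illustration); SORT_IN_TEMPDB pushes the rebuild's sort work onto the TempDB drives:

    -- Illustrative nightly maintenance (object names are placeholders).
    -- REBUILD recreates the index; SORT_IN_TEMPDB moves its sort
    -- work into TempDB. REORGANIZE defragments in place instead.
    ALTER INDEX ALL ON dbo.FactLoad
        REBUILD WITH (SORT_IN_TEMPDB = ON);

    ALTER INDEX IX_FactLoad_Date ON dbo.FactLoad
        REORGANIZE;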

My understanding is that TempDB is heavily used during normal processing, compared to the log.
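
That impression comes from snapshotting TempDB allocations during the run, along these lines:

    -- Snapshot of TempDB space usage during processing.
    -- Internal objects = sorts/hashes/spools; user objects = temp tables.
    SELECT SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_mb,
           SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;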

My idea is the following:

  • 2 x 150 GB (15k) RAID 1 = 150 GB for OS + TempDB
  • 2 x 150 GB (10k) RAID 1 = 150 GB for log (note the slower disks here)
  • 4 x 150 GB (15k) RAID 5 = 450 GB for data

Does this sound like a good idea? I could then swap the Log + TempDB if needed.
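
For reference, my understanding is that swapping TempDB later is just a file relocation plus a service restart, roughly like this (paths are placeholders):

    -- Sketch: relocating TempDB (logical names are the defaults,
    -- paths are placeholders). TempDB is recreated at startup, so
    -- restart SQL Server afterwards and delete the old files.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');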

Am I breaking any cardinal rules, like never putting TempDB on the OS disk due to paging concerns, or never putting the log on slower disks than the data?

Edit:

We also have SSAS (SQL Server Analysis Services) on the system, and end users access only the cube. The 50% read figure above is based on the time it takes to process the SSAS database.

Preet Sangha

2 Answers


2 x 10k RAID 1 for the OS, 6 x 15k RAID 10 for everything else. Honestly, with this many disks, one array is the safest and usually the fastest bet.

If you've got time to test and have a real-world, repeatable, measurable workload, then by all means do a test run with your TempDB on the OS drive (caveat: limit TempDB file growth to make sure you don't splat the OS). On the flip side, you may see moderate improvements to your data load and maintenance with the log there instead, so it's worth a test run or two if time permits.
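
A minimal way to cap it for that test (tempdev is the default logical name; the sizes are illustrative, not recommendations):

    -- Illustrative cap so a TempDB test run can't fill the OS volume.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, MAXSIZE = 40GB);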

Mark Storey-Smith

I agree with Mark: the ideal approach is to put the two slower drives in RAID 1 for the OS only, and the rest of the drives in RAID 10 for everything else.

Based on the sizes you gave, though, this pretty much maxes you out from the start, with very little margin for error and no room for growth. And by the way, if someone told you the drives are 150 GB, they may actually be smaller than that (146 GB?) unformatted, which likely means not everything will fit right off the bat.

Unfortunately, the workload involves heavy writes, and for that, RAID 5... is not your friend.

If you can manage to wrangle a bit of extra budget, there are a couple approaches:

  • Two more 15k drives of the same size. Depending on what "2 x disk arrays" means, more than 8 total drives might mean a new RAID controller is needed (and possibly a larger chassis).

  • Two ~120 GB SSDs (either PCI Express or SATA) in software RAID 1 for the TempDB data/log and the database log file. This may actually be the fastest solution, period, and could cost considerably less than two enterprise-class 15k drives (let alone a comparable RAID controller). This assumes there are available slots/ports on the motherboard, which there probably are. Moving the existing log over afterwards is straightforward, as sketched below.
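
If you go the SSD route, a rough sketch of moving the log file over (database name, logical file name, and paths are placeholders):

    -- Sketch: moving the user database log onto the SSD mirror.
    -- Database name, logical name, and path are placeholders.
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log, FILENAME = N'S:\SQLLogs\MyDatabase_log.ldf');
    ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- ...physically move the existing .ldf to S:\SQLLogs\ while offline...
    ALTER DATABASE MyDatabase SET ONLINE;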

If management won't move on budget (this situation smells like old hardware being repurposed for a new project... which means the budget is zero), you're going to have to use RAID 5 for space reasons; there's no way to avoid it without more disks. At that point, the best move is probably putting the 2 slower disks in RAID 1 for the OS only, and the rest in RAID 5 to maximize space. If you're forced to use RAID 5 for any part of this, management must sign off (in writing) on what that means for performance.

Jon Seigel