
The question says it all. But is it true? Can someone explain what the differentiating factor is here (how one type of disk is better than another for a specific purpose)?

(FYI, I've tried searching, and could find just one article, which was written about three years ago.)

2 Answers


Not anymore. You can run databases perfectly well on SATA discs: WD VelociRaptors are quite comparable to many SAS drives unless you need REALLY high performance (so, database != database).

The larger step is 3.5" to 2.5"; you save a lot of money (per GB) when you use large, slow 3.5" drives.

The differentiating factors are:

  • SAS discs are normally faster and support longer command queues than SATA (a limit of 32 outstanding commands versus a LOT more).
  • SSDs are another story. We are talking 60,000 IOPS versus 450.

In general, databases generate totally random I/O as their load gets heavier, so you don't count gigabytes or RPM; you count IOPS (I/O operations per second).
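As a back-of-the-envelope illustration of why spinning disks top out at a few hundred random IOPS (a sketch of my own, using rough, assumed seek times rather than any specific drive's datasheet): each random I/O costs roughly one average seek plus half a rotation.

```python
# Rough random-IOPS ceiling for a spinning disk: each random IO costs
# roughly one average seek plus half a rotation of the platter.
def estimate_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 0.5 * 60_000 / rpm   # half a rotation, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms                # IOs per second

# Assumed, typical figures -- check a real drive's datasheet.
print(round(estimate_iops(7200, 8.5)))    # 7.2k RPM SATA: ~79 IOPS
print(round(estimate_iops(15000, 3.5)))   # 15k RPM SAS:  ~182 IOPS
```

Either way, both numbers are orders of magnitude below what an SSD delivers, which is the point of the comparison above.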

TomTom

Whilst I can't say I have ever heard of the rule, static content does have characteristics that let it get by on cheaper disks more easily than a database can.

SATA drives usually have lower mean times to failure, are slower, and perform worse at random I/O. The upside is that their cost per GB is lower.

When you store media such as static content, the system will typically cache it in memory, so it is not read from disk very often; this largely removes the need for a high-speed disk.

This is especially true for websites where 90% of the data presented makes up 99% of the requests.
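To put a number on that (a toy model with invented figures, purely illustrative): once the OS page cache absorbs the vast majority of reads, even a busy static site puts very little random read load on the disk itself.

```python
# Toy model: how much read traffic actually reaches the disk once the
# OS page cache absorbs most requests. All figures are invented.
def disk_reads_per_sec(request_rate, cache_hit_ratio):
    return request_rate * (1 - cache_hit_ratio)

# 1000 requests/s with a 99% cache hit ratio -> only ~10 disk reads/s,
# well within the random-read budget of even a slow SATA drive.
print(round(disk_reads_per_sec(1000, 0.99)))
```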

Databases, on the other hand, typically have a much more random I/O profile. They also usually don't rely on the kernel's caching subsystem to manage their content, so they can benefit from faster disks. They tend to perform many more writes than static content does as well, which is where SAS drives can really help.

Bear in mind that there are so many shades of grey that it's not as simple or straightforward as that.

  • Maybe your database is not used much?
  • Perhaps you know exactly which tables will be read and the probability of the data being used?
  • Maybe database writes are hardly ever done?
  • Perhaps the response times you're looking for and the concurrency numbers you have can be achieved effectively with SATA?

I wouldn't say there is a 'rule of thumb'. Instead, why not get some I/O measurements for your application now, along with what you anticipate your needs might be in two years. Do you think SATA will be suitable then? What about SAS, would that be? Maybe SSD is better still? If you're talking about 30 ms faster retrieval times with SAS, is it really worth spending more? Do your customers demand that extra 30 ms of speed?
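A simple way to frame that comparison (all prices, capacities, and IOPS figures below are hypothetical placeholders; substitute real quotes and your own measurements) is to reduce each candidate drive to cost per GB and IOPS per dollar:

```python
# Compare candidate drives on cost per GB and IOPS per dollar.
# Every number here is a hypothetical placeholder, not a real quote.
drives = {
    "SATA 7.2k": {"price": 100.0, "gb": 2000, "iops": 80},
    "SAS 15k":   {"price": 250.0, "gb": 600,  "iops": 180},
    "SSD":       {"price": 400.0, "gb": 480,  "iops": 40000},
}

metrics = {
    name: {
        "cost_per_gb": d["price"] / d["gb"],
        "iops_per_dollar": d["iops"] / d["price"],
    }
    for name, d in drives.items()
}

for name, m in metrics.items():
    print(f"{name}: ${m['cost_per_gb']:.2f}/GB, {m['iops_per_dollar']:.1f} IOPS/$")
```

Whether you then optimise for cost per GB (static content) or IOPS per dollar (a busy database) is exactly the trade-off the answer above describes.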

For most of the work I do, the numbers generated by our operation don't really warrant SAS. And in any case, the margin of performance you get versus the cost per GB isn't that attractive to me. Now, with SSDs on the market, I am even less impressed with the SAS offerings.

That doesn't mean I am saying SAS is not right for you. But figure out your cost per GB, work out what your performance needs are likely to be now and in a few years, then compare the results against your clients' specific needs.

Matthew Ife