56

Computers mainly need three voltages to work: +12V, +5V and +3.3V, all of which are DC.

Why can't we just have a few big power supplies (for redundancy) providing these three voltages to the entire datacenter, with the servers using them directly?

That would seem more efficient: since every power conversion has losses, it should be better to convert once rather than once per server in each server's PSU. It would also be better for UPSes, since they could use 12V batteries to power the datacenter's 12V grid directly instead of inverting 12V DC into 120/240V AC, which is quite inefficient.

6 Answers

61

What'cha talking 'bout Willis? You can get 48V PSUs for most servers today.

Running 12V DC over medium/long distance suffers from Voltage Drop, whereas 120V AC doesn't have this problem¹. Big losses there. Run high voltage AC to the rack, convert it there.

The problem with 12V over long distances is that you need a higher amperage to transmit the same amount of power, and higher amperage means bigger resistive losses and larger conductors.
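To put rough numbers on that, here is a back-of-the-envelope sketch in Python; the 5 kW rack load and the 0.05 ohm cable resistance are illustrative assumptions, not figures from the question:

    # Resistive losses when delivering the same power at 12 V DC versus
    # 240 V AC over the same feed cable. All figures are assumptions.
    CABLE_R_OHM = 0.05   # assumed round-trip resistance of the feed cable
    LOAD_W = 5_000       # assumed power drawn by one rack

    for volts in (12, 240):
        amps = LOAD_W / volts             # I = P / U
        loss_w = amps ** 2 * CABLE_R_OHM  # heat in the cable, I^2 * R
        drop_v = amps * CABLE_R_OHM       # voltage drop along the cable, I * R
        print(f"{volts:>3} V: {amps:6.1f} A, {loss_w:7.1f} W lost, {drop_v:5.1f} V dropped")

Under these assumptions the 12V feed needs roughly 417 A, dissipates about 8.7 kW in the cable and drops more than the supply voltage, so it simply can't work without enormously thicker conductors; at 240V the same cable loses about 22 W.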

The Open Compute Open Rack design uses 12V rails inside a rack to distribute power to components.

Also large UPSes don't turn 12V DC into 120V AC - they typically use 10 or 20 batteries hooked in series (and then parallel banks of those) to provide 120V or 240V DC and then invert that into AC.

So yes, we're there already for custom installations but there's a fair bit of an overhead to get going and commodity hardware generally doesn't support that.

Non sequitur: measuring is difficult.

¹: I lie, it does, but less than DC.

MikeyB
  • 40,079
18

It's not necessarily more efficient, because you increase the I²R losses. Reduce the voltage and you have to increase the current in proportion, but the resistive loss (not to mention the voltage drop) in power cables increases with the square of the current. Thus you also need massive, thick cables, using more copper.
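As a rough sketch of how much extra copper that implies (the 5 kW load, 30 m run and 2% loss budget are assumptions for illustration), the required conductor cross-section grows with the square of the current, i.e. with 1/U²:

    # Copper cross-section needed to keep cable losses to 2% of the delivered
    # power. Area scales as 1/U^2: 20x lower voltage -> 400x more copper.
    RHO_CU = 1.68e-8     # resistivity of copper, ohm*metre
    RUN_M = 30           # assumed one-way run length, metres
    LOAD_W = 5_000       # assumed load
    LOSS_FRAC = 0.02     # allow 2% of the power to be lost as heat

    for volts in (240, 48, 12):
        amps = LOAD_W / volts
        max_r = LOSS_FRAC * LOAD_W / amps ** 2          # R <= f * P / I^2
        area_mm2 = RHO_CU * (2 * RUN_M) / max_r * 1e6   # A = rho * L / R, out and back
        print(f"{volts:>3} V: conductor of at least {area_mm2:7.1f} mm^2")

With these numbers, 240V needs a few mm² of copper, 48V roughly a hundred, and 12V well over a thousand - exactly the "massive, thick cables" problem.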

Telcos typically use -48V, so they still need power supplies in the servers - DC-DC converters - to do the voltage conversion, which internally is a conversion to AC and back again. The cables are much thicker.

So it's not necessarily a great idea to run everything on DC for efficiency.

xcxc
  • 403
11

Historically, telcos have used DC power nearly exclusively in their central offices. In what seems to be a recurring pattern in computing, I'd argue that the IT industry moving to DC, effectively re-inventing the "wheel" that telcos built years ago, is just par for the course.

The last few years have seen various articles talking about using DC power to make datacenters more efficient. I know that Facebook and Google (as referenced in that last link) are both big DC power users. I think it's just a matter of time before commodity hosting moves that direction, too.

Given the entrenched nature of AC power, though, it's going to take time.

Evan Anderson
  • 142,957
6

As pointed out above, high current = high losses and thick cables.

Another prohibitive factor is that high current creates a fire risk; remember that 100A is sufficient to perform arc welding.

3

Basically, the reason for using higher-voltage AC is that we want to minimize power losses and save money.

  1. P = UI means that power (W) is voltage (V) multiplied by current (A). The hardware needs a certain amount of power; you can choose the voltage, but the current then varies accordingly. This is true for both DC and AC. This leads to a first problem and its solution.

  2. Losses depend on the resistance and the current: the voltage drop along a cable is U = RI, so the power dissipated as heat is RI², which grows with the square of the current. So you want a higher voltage to keep the current, and therefore the losses, low. But if the hardware needs 3 V and you choose 100 V for distribution, you must transform 100 V down to 3 V at a point close to the hardware input (see the worked example after this list). This leads to a second problem and its solution.

  3. It is (or rather, it was) difficult to convert DC voltages, especially without significant losses; it requires active and relatively expensive switched-mode converters. In contrast, it is easy to change AC voltages with a transformer (two simple static coils coupled by a magnetic field).

  4. Conclusion from the previous points: it is better to use a higher voltage, which then must be AC to allow easy voltage conversion.
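As a worked example of the points above (the 60 W load and the 0.1 ohm cable are my own illustrative numbers), compare feeding the hardware directly at 3 V with distributing 100 V and converting next to it:

    # The same 60 W load fed either at its native 3 V or at 100 V with a
    # converter placed close to the load. Assumed figures for illustration.
    LOAD_W = 60
    CABLE_R = 0.1   # assumed cable resistance, ohms

    for volts in (3, 100):
        amps = LOAD_W / volts         # point 1: I = P / U
        loss_w = amps ** 2 * CABLE_R  # point 2: heat dissipated in the cable, R * I^2
        drop_v = amps * CABLE_R       # voltage drop along the cable, R * I
        print(f"{volts:>3} V feed: {amps:5.1f} A, {loss_w:6.2f} W lost, {drop_v:5.2f} V dropped")

At 3 V the cable carries 20 A, wastes 40 W and drops 2 of the 3 volts, so distributing at the hardware's own voltage is hopeless; at 100 V the same cable loses about 0.04 W, but you then need the conversion of point 3 near the load, which is trivial for AC (a transformer) and historically hard for DC.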

For a specific installation, engineers will compare the cost of electrical losses against the cost of voltage conversion and see which is cheaper, adding to this the impact of failures, etc.

Today we are starting to see DC voltage converters that are efficient and less expensive, so the best solution may change in the future.

mins
  • 131
2

It likely boils down to money. 120V AC power supplies are readily available by the truckload, while the market for high-capacity, smooth 12/5/3.3V DC supplies is rather small: there are far more single computers out there than datacenters. As mentioned in other answers, it's unlikely that any datacenter will put 12V in the wall plugs and the converter in the basement - more likely the opposite: plenty of commercial buildings use 480V for primary lighting because they can run many more fixtures on one circuit. Running 240V AC to the racks makes more sense than 12V DC, but I expect the future will bring two large PSUs at the top of each rack and 4-pin power plugs for each server within that rack.

paul
  • 49