From: andrew@cucumber.demon.co.uk   
      
   In article ,   
    "nish" writes:   
   > If I have a server that uses 700 watts at 120VAC, and it can be equally (or   
   > more) efficient being fed at 240VAC single phase, then aren't I better off   
   > feeding it at 240VAC? If the building is being fed at 240VAC single   
   > phase, don't I avoid a substantial energy loss by dropping to 120VAC from   
   > 240VAC? Or do you get the 120VAC for free without having to employ a step   
   > down transformer?   
      
   We did some tests a few years ago, and found auto-ranging switched-mode
   computer power supplies were more efficient when fed near the top of their
   range than at the bottom. For 120-240V auto-ranging supplies, power
   consumption was typically around 8% higher at 120V, and in some cases it
   was 15% higher. (This made me think some "80+" PSUs were unlikely to be
   "80+" when run on 120V, but we didn't measure the absolute efficiency,
   only the change in power draw between running on 120V or 240V.)
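   As a rough sanity check (illustrative numbers, not from our measurements):
   if a PSU delivers the same DC output but draws X% more input power, its
   efficiency scales down by 1/(1+X). The function and figures below are
   hypothetical, just to show the arithmetic:

```python
def efficiency_at_low_range(eff_high, extra_draw_fraction):
    """Efficiency at the low end of the input range, given the
    efficiency at the high end and the fractional increase in
    input power draw for the same output power."""
    return eff_high / (1.0 + extra_draw_fraction)

# A supply at a nominal 87% on 240V, drawing 8% more on 120V:
print(round(efficiency_at_low_range(0.87, 0.08), 3))  # 0.806

# With the worst-case 15% extra draw it drops below the 80% mark:
print(round(efficiency_at_low_range(0.87, 0.15), 3))  # 0.757
```

   So a supply that comfortably clears 80% on 240V can fall under it on
   120V with the worst-case extra draw we saw.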
      
   This wasn't really a surprise - at half the supply voltage the input
   current doubles, so the I2R heating losses in the rectifier, power FETs,
   and transformer primary winding are going to be 4x higher.
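   The arithmetic is simple: for a given output power, current is P/V, and
   resistive loss is I²R, so halving V quadruples the loss. A minimal sketch,
   using a made-up 700W load and a made-up 0.5 ohm series resistance:

```python
def i2r_loss(power_out, voltage, resistance):
    """Resistive heating loss in a series element carrying
    the input current drawn for a given output power."""
    current = power_out / voltage
    return current ** 2 * resistance

loss_240 = i2r_loss(700, 240, 0.5)  # ~4.3 W
loss_120 = i2r_loss(700, 120, 0.5)  # ~17 W
print(loss_120 / loss_240)  # 4.0
```

   The ratio is exactly (240/120)² = 4, whatever the resistance actually is.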
      
   Some of the server power supplies had to be derated when run at 120V,
   and in some cases this meant you lost the redundancy of dual power
   supplies, because both were needed to power the server.
      
   This work was triggered by stats which showed we were getting 10x more   
   mains wiring accessory burnouts in the US data centres than anywhere   
   else in the world. Some of this was put down to servers using 120V   
   outlets, but the incidents were still significantly more common even   
   on US 240V circuits than on 220-240V circuits in other countries.   
      
   It was difficult to compare data, but we also suspected we were getting   
   more PSU failures in systems running at the lower end of the input voltage   
   range.   
      
   We never did collect data on hold-up times over short brown-outs, but I   
   suspect that as the storage capacitors store 4x more energy at twice the   
   supply voltage, systems would survive at least 4x longer power interruption   
   without going down.   
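   The reasoning follows from E = ½CV²: double the voltage on the same bulk
   capacitor and the stored energy quadruples. A quick check with a
   hypothetical 470uF capacitor charged to the rectified mains peak:

```python
import math

def cap_energy(capacitance, voltage):
    """Energy stored in a capacitor: E = 1/2 * C * V^2 (joules)."""
    return 0.5 * capacitance * voltage ** 2

# Hypothetical 470uF bulk capacitor charged to the rectified peak:
e_120 = cap_energy(470e-6, 120 * math.sqrt(2))
e_240 = cap_energy(470e-6, 240 * math.sqrt(2))
print(e_240 / e_120)  # 4.0
```

   In practice the usable hold-up energy is only the part above the PSU's
   minimum input voltage, which is why "at least 4x" is the safe claim.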
      
   > I am trying to identify the sources of energy losses in the system so I can   
   > avoid steps that are subject to those losses.   
   >   
   > In a related question, I was asking what is the most *energy efficient* way   
   > to go from 240VAC single phase to a 12VDC or 5VDC device? The AC-to-DC   
   > adapters you typically see for small devices like a network switch are only   
   > about 65% efficient, so you are losing a lot of energy in the conversion and   
   > step down. Modern PC server power supplies - to contrast - are about 85%   
   > to 95% efficient, so those avoid most of those losses. Aren't there   
   > rectifiers with similar efficiency that I could use to power various DC   
   > devices?   
      
   I steal a 12V supply from my PC to power the ethernet switch and the WiFi
   access point. These items came with wall warts which got particularly hot,
   implying they were very inefficient. This was a while back, and such
   inefficient PSUs are no longer permitted in the EU. Nowadays, wall-wart
   PSUs have to be efficient at a level only achievable by good switched-mode
   PSUs, and have to have very low consumption when there's no load (i.e.
   they stay stone cold, and usually draw < 0.1W).
      
   One place you do want a separate isolated supply is powering anything
   which interfaces to your phone line, such as a modem (even though most
   are well isolated from the line anyway).
      
   --   
   Andrew Gabriel   
   [email address is not usable -- followup in the newsgroup]   
      