Back to Basics: Understanding and Mitigating the Growing Problem of Distribution Losses

December 27, 2012

There are many articles available describing how to overcome power distribution losses with suitable power conversion devices and architectures. This post goes back to basics: explaining the underlying issues of distribution losses and the principles of their solution.

If DC-DC converters and power conversion devices are any part of your working life, you will be aware of their key role in conducting electrical power from the edge of a printed circuit board (PCB) to its onboard components efficiently. You may also be aware of Vicor’s Intermediate Bus Architecture (IBA), Factorized Power Architecture (FPA), and other technologies designed to maximize this efficiency. The objective is to minimize distribution losses as the power flows from the PCB connector to its onboard point of load.

However you may not be aware of the fundamentals – what causes distribution losses, why does advancing technology exacerbate their impact, and how do power devices and architectures work to mitigate this? We provide answers to these questions below as we explain the underlying problems that FPA and other power architectures are designed to solve.

Even with mitigating technologies, distribution losses – or, more precisely, I²R losses, as we shall see – are an inevitable part of transmitting power from one place to another. This is equally true of mains power transmitted across the National Grid and of low-voltage DC power applied to a PCB. The reason is that the conducting medium, whether it is an overhead cable or a PCB track, always has a finite resistance, and power is lost, or dissipated, in overcoming this resistance. The remaining power is dissipated by the device being supplied; a microprocessor chip, for example.

To understand more about the nature of electrical power dissipation, let’s start with Fig. 1:


Fig. 1: Voltage applied across a load

In Fig. 1, a DC voltage V volts is being applied across a resistive load R ohms. The load is drawing a current I amps. At this point, we need to introduce a couple of formulae; however they’re the only two we have to understand. We can derive everything else from them.

Firstly, there’s Ohm’s Law, which states

V = I × R

where V is the voltage applied across (say) a device, R is its resistance, and I is the current it draws. For a fixed supply voltage, we can see that the lower the device’s resistance, the more current it draws.

The other formula is Watt’s Law, which states

P = V × I

This says that in Fig. 1, the power (P) dissipated by the device is the product of the voltage (V) across it and the current (I) that it draws. Note that P depends equally on V and I. Note also that if a device takes, say, 100 W, this could be delivered either as 100 V at 1 A, or 1 V at 100 A, or any other V–I combination that multiplies out to 100. However, in considering these variations we must not forget the device’s maximum input voltage rating.
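These two laws can be checked numerically. A minimal sketch in Python, using the 100 W example above (the function names are ours, purely for illustration):

```python
def current(v, r):
    """Current drawn by a resistance r at voltage v (Ohm's law: I = V / R)."""
    return v / r

def power(v, i):
    """Power dissipated at voltage v and current i (Watt's law: P = V * I)."""
    return v * i

# The same 100 W can be delivered as 100 V at 1 A or as 1 V at 100 A.
assert power(100, 1) == power(1, 100) == 100
```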

At this point, let’s imagine that the device in Fig. 1 is a microprocessor, and that it draws 100 A at 1 V. These are not real figures, but they are representative of what could be expected from a desktop PC chip. From Ohm’s law we can see that it has an internal resistance of V/I = 1/100 = 0.01 Ohm. As we move from Fig. 1 towards reality – but without yet giving ourselves the benefit of modern power-train technology – we recognize that the chip’s 1 V supply has to travel along copper tracks connecting the board’s power connector to the chip’s power pins. These copper tracks have a finite resistance. If they are 5 mm wide and 5 cm long, this could be approximately 0.005 Ohm. Note that the track resistance is directly proportional to the track length. Fig. 2 represents the electrical view of this scenario.
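The 0.005 Ohm figure can be sanity-checked from the resistivity of copper. A rough sketch, assuming a standard 1 oz (35 µm) copper layer – the thickness is our assumption, not stated above:

```python
RHO_CU = 1.68e-8  # resistivity of copper, ohm-metres (room temperature)

def track_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """DC resistance of a rectangular PCB track: R = rho * L / (w * t)."""
    return rho * length_m / (width_m * thickness_m)

# 5 cm long, 5 mm wide, 35 um (1 oz) copper:
r_track = track_resistance(0.05, 0.005, 35e-6)
print(round(r_track, 4))  # → 0.0048, close to the 0.005 ohm figure above
```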


Fig. 2: Microprocessor load on a PCB

In Fig. 2, R1 represents the PCB track resistance of 0.005 Ohm while R2 represents the microprocessor internal resistance of 0.01 Ohm. Accordingly, if we now apply 1 V to the board edge connector, Ohm’s law shows that the current would be

I = V / (R1 + R2) = 1 / 0.015 = 66.7 A

not the 100 A the chip may require. It also shows that there would be a voltage drop across the track of 66.7 × 0.005 = 0.33 V, leaving only 0.67 V across the chip. Track resistance has deprived the chip of both voltage and current.

Consider also the power lost through dissipation in the track. If P = IV and V = IR, then by substituting for V we can see that P = I²R. So, the power dissipated in track resistance R1 is 66.7² × 0.005 ≈ 22 W. In this case the distribution loss, or I²R loss, is 22 W.
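The arithmetic for this 1 V scenario can be reproduced directly. A short sketch using the figures above:

```python
R1, R2 = 0.005, 0.01   # track and chip resistances, ohms
V_IN = 1.0             # voltage applied at the board edge connector

i = V_IN / (R1 + R2)   # series current through track and chip (Ohm's law)
v_drop = i * R1        # voltage lost across the track
p_loss = i**2 * R1     # I^2 R distribution loss dissipated in the track

print(round(i, 1), round(v_drop, 2), round(p_loss, 1))  # → 66.7 0.33 22.2
```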

These losses and inefficiency are clearly unacceptable, so what can we do? To see the answer, let’s suppose that our microprocessor is a very flexible or even magical device that can work on a 50 V input supply just as well as on 1 V. We know that it draws 100 A at 1 V, so according to Watt’s Law it consumes 100 W. Therefore it will require 2 A at 50 V. Importantly, in this new scenario, the device’s internal resistance would have increased from 0.01 Ohm to 50/2 = 25 Ohm.

So if we apply 50 V across the connector in Fig. 2, the actual current will be

I = V / (R1 + R2) = 50 / 25.005 = 1.9996 A

a very small loss in current to the device. The voltage drop across the track is just 1.9996 × 0.005, or about 0.01 V. The chip is enjoying nearly all of the available voltage and current. The dissipation in the track is 1.9996² × 0.005 ≈ 0.02 W. By transmitting power at a higher voltage and lower current, we have delivered it where it is needed, with minimal transmission losses.
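Putting the two scenarios side by side shows how steeply the I²R loss falls with supply voltage. A small comparison sketch:

```python
R1 = 0.005  # track resistance, ohms

def track_loss(v_supply, r_load):
    """I^2 R loss in the track for a given supply voltage and load resistance."""
    i = v_supply / (R1 + r_load)
    return i**2 * R1

loss_1v = track_loss(1.0, 0.01)    # the 100 W chip fed at 1 V (R = 0.01 ohm)
loss_50v = track_loss(50.0, 25.0)  # the same chip fed at 50 V (R = 25 ohm)
print(round(loss_1v, 1), round(loss_50v, 3))  # → 22.2 0.02
```

Raising the supply voltage by a factor of 50 here cuts the track dissipation by a factor of roughly a thousand, since the loss scales with the square of the current.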

All of this prompts the question: If higher voltage levels are so beneficial for power transfer efficiency, why don’t chip makers produce devices with higher input voltages? The answer is that the manufacturers are driven by their own priorities, which are to deliver ever more processing performance while curbing the associated increase in power demand. Reducing the chip’s supply voltage contributes to both these objectives. So, over the years we have seen chips demanding steadily increasing current at steadily reducing voltages; a formula for increasing distribution losses! Pressure on power system designers to find a way of supplying the chip efficiently has grown accordingly.

An obvious solution would be to place a DC-DC converter physically close to the microprocessor chip. This can receive power at a high voltage and low current and convert it into low voltage and high current suitable for the chip. I2R losses are minimized because the track length and resistance between the converter and chip are minimized. In fact one of the earliest responses was to use exactly this approach. However continued development in electronic device and PCB design rendered it increasingly unsatisfactory. In reality, DC-DC converters had to handle isolation and voltage regulation as well as conversion, and their own size and electrical efficiency became increasingly important issues. This was especially true as the number of devices on a PCB, and the supply voltages they demanded, proliferated steadily.

For these reasons, more advanced approaches, such as the Intermediate Bus Architecture (IBA) and Factorized Power Architecture (FPA) mentioned earlier, have appeared. They are sophisticated tools that tackle these complex issues, yet the fundamental, underlying problem they solve remains the same: how to deliver increasing levels of power to increasing numbers of ever lower-voltage devices, while minimizing I²R distribution losses en route to the point of load.


