Capacitance at the input of a DC-DC converter plays a vital role in keeping the converter stable and in filtering input EMI. Large amounts of capacitance at the output of the converter, however, can pose significant challenges to the power system. Many loads downstream of the DC-DC converter need capacitance for proper operation; these loads may be pulsed power amplifiers or other converters that need capacitance at their inputs. If the capacitance at the load exceeds the value that the DC power system is designed to handle, the power system can exceed its maximum current rating at start-up and during normal operation. Excess capacitance can also cause power system stability issues, leading to improper system operation and premature power system failure.

A few simple techniques can be implemented within the power system to maintain an efficient and reliable design when powering highly capacitive loads: reducing the voltage rise time across the load capacitor at start-up keeps the power system within its current rating; controlling the charge current into the capacitor during operation keeps it within its power rating; and adjusting the control loop keeps the system stable and within its voltage ratings.

**Start-up Considerations**

At start-up, the typical DC-DC converter has a standard rise time set by the ramp of an internal error-amplifier reference. A discharged capacitor placed at the output of the converter appears as a low-impedance load, and with such a load a few switching cycles can force the converter to exceed its output current rating while raising the voltage across the capacitor. Instead, the capacitor can be pre-charged through a higher-impedance path at the converter output. This high-impedance element limits the charge current into the capacitor until the capacitor reaches a pre-defined voltage level. Once that level is reached, the high-impedance path can be removed or shorted by a low-impedance device such as an FET.
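As a rough sketch of why the high-impedance path helps, the snippet below (with illustrative component values, not taken from the article) computes the peak inrush current through a pre-charge resistor and the time for the capacitor to reach a threshold voltage, using the standard RC charging law:

```python
import math

# Hypothetical values for illustration only (not from the article).
V_OUT = 12.0        # converter output voltage (V)
C_LOAD = 0.010      # load capacitance, 10 mF (F)
R_PRECHARGE = 10.0  # high-impedance pre-charge path (ohm)
I_MAX = 5.0         # converter output current rating (A)

# Peak inrush through the pre-charge resistor occurs at t = 0,
# when the discharged capacitor looks like a short circuit.
i_peak = V_OUT / R_PRECHARGE
assert i_peak <= I_MAX, "pre-charge resistor too small for current rating"

# Time for the capacitor to reach a threshold voltage, from
# v(t) = V_OUT * (1 - exp(-t / (R * C))).
def time_to_voltage(v_threshold):
    return -R_PRECHARGE * C_LOAD * math.log(1.0 - v_threshold / V_OUT)

print(f"peak pre-charge current: {i_peak:.2f} A")
print(f"time to reach 90% of V_OUT: {time_to_voltage(0.9 * V_OUT) * 1e3:.1f} ms")
```

With these assumed values, the inrush is capped at 1.2 A (well under the 5 A rating) at the cost of a pre-charge time of a few hundred milliseconds.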

Once the FET shorts the impedance, the converter can deliver its full rated current through the resulting low-impedance path, and the full converter voltage is applied across the capacitor. The FET turn-on time and the voltage differential between the capacitor and the converter determine the charge current needed to bring the capacitor to full voltage, so it is important to set the pre-defined voltage level high enough that FET turn-on does not force the converter to exceed its current rating. The block diagram in Figure 1 shows a circuit that charges the capacitor to a preset minimum voltage: U2 controls the FET that shorts the impedance Z, and the U1 circuit works in conjunction with U2 to set the turn-on voltage and the load enable.
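A minimal sketch of how the pre-set level interacts with the converter's current rating (all values are assumptions for illustration): when the FET shorts Z, the instantaneous charge current is set by the remaining voltage differential divided by the residual path resistance, which bounds how low the pre-set voltage may be.

```python
# Illustrative values only, not from the article.
V_OUT = 12.0    # converter set-point voltage (V)
I_MAX = 5.0     # converter output current rating (A)
R_PATH = 0.05   # residual low-impedance path resistance (ohm), assumed

# After the FET shorts Z, the charge current is (V_OUT - V_CAP) / R_PATH.
# Keeping it below I_MAX requires the capacitor to already be within
# I_MAX * R_PATH of the converter voltage at turn-on.
v_preset_min = V_OUT - I_MAX * R_PATH
print(f"minimum pre-set bypass voltage: {v_preset_min:.2f} V")  # 11.75 V
```

Note how small a 50 mΩ path makes the allowed differential: the capacitor must be pre-charged to within 0.25 V of the converter output before the FET may engage.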

At start-up the converter sees both the capacitor and the system loads downstream of it. If the system load draws current from the capacitor during the high-impedance pre-charge, the capacitor may never reach the preset charge voltage. Many loads downstream of a DC-DC converter have an under-voltage lockout, below which they demand little current. If the load does not have an under-voltage lockout above the preset charge voltage, an external enable should be used. If the load is resistive in nature, a series switch can be used to apply voltage to the load after the capacitor is charged. Figure 2 shows the voltage and current of a system charging a 10 mF capacitor.
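The stall condition described above can be checked numerically: with a load drawing current through the pre-charge resistor, the capacitor settles at V_OUT − I_load × R rather than at V_OUT. A small sketch with assumed values:

```python
# Illustrative values only, not from the article.
V_OUT = 12.0           # converter output voltage (V)
R_PRECHARGE = 10.0     # pre-charge resistance (ohm)
V_PRESET = 11.0        # voltage at which the bypass FET engages (V)
I_LOAD_LEAKAGE = 0.05  # load current drawn below its under-voltage lockout (A)

# Steady-state capacitor voltage with the load drawing current through R.
v_final = V_OUT - I_LOAD_LEAKAGE * R_PRECHARGE  # 11.5 V
if v_final > V_PRESET:
    print("pre-charge completes; bypass FET will engage")
else:
    print("capacitor stalls below threshold; disable the load during pre-charge")
```

With these numbers the 50 mA lockout current only costs 0.5 V, so pre-charge still completes; a load without lockout drawing, say, 0.2 A would stall the capacitor at 10 V and the FET would never engage.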

Once the capacitor is charged, the load can begin to draw current from the capacitor and the DC-DC converter. Some loads demand current more rapidly than the converter's bandwidth allows it to respond, and that current is delivered by the capacitor. As the capacitor delivers the current, the voltage across it drops:

Vdrop = (I × dt) / C

Where Vdrop is the voltage drop across the capacitor, I is the current demand, C is the capacitance and dt is the duration of the current draw. The converter will recharge the capacitor to its original voltage, and in doing so it can exceed its current rating. The voltage differential between the converter and the depleted capacitor, divided by the resistance between the two, determines the recharge current. That resistance is typically kept very low to reduce system losses, so the recharge current can be higher than the converter's maximum. Because the capacitor voltage is near the converter's set-point voltage, exceeding the converter's maximum current can also exceed its maximum power.
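Plugging illustrative numbers into the droop equation and the recharge-current relation shows how quickly an uncontrolled recharge can exceed a converter's rating (all values here are assumptions for the example, not from the article):

```python
# Illustrative values only.
C = 0.010        # capacitance, 10 mF (F)
I_PULSE = 20.0   # pulsed load current supplied by the capacitor (A)
DT = 0.001       # pulse duration (s)
R_SERIES = 0.02  # resistance between converter and capacitor (ohm), assumed
I_MAX = 5.0      # converter output current rating (A)

# Vdrop = (I * dt) / C, from the droop equation above.
v_drop = I_PULSE * DT / C          # 2.0 V of droop
# Uncontrolled recharge current = voltage differential / series resistance.
i_recharge = v_drop / R_SERIES     # 100 A if nothing limits it

print(f"droop: {v_drop:.2f} V, uncontrolled recharge current: {i_recharge:.0f} A")
print(f"exceeds converter rating: {i_recharge > I_MAX}")
```

A 2 V droop across only 20 mΩ would demand 100 A, twenty times the assumed 5 A rating, which is why the recharge current must be actively controlled.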

Click here to read the second part of *Powering Highly Capacitive Loads with DC-DC Converters*.

*This article was originally published in EDN.*