Artificial Intelligence Requires a Fresh Look at Power Optimization

March 7, 2018

The huge gains in processing performance achieved in recent years have begun to unlock the vast potential of artificial intelligence (AI), propelling it from a nascent technology enabling image and speech recognition toward breakthrough advances in autonomous vehicles, industrial automation, medical diagnostics, scientific research and far beyond. Taking advantage of ongoing innovation in algorithm science and machine-learning techniques, inference models can be trained and run ever faster to extract deeper intelligence from huge data sets, leveraging simulated neural networks designed to mimic the behavior of the human brain.

The newest generation of CPUs, GPUs, and ASICs – “XPUs” for short – is helping to deliver the ultra-high-speed parallel processing needed for compute-intensive AI applications. Whether deployed in proprietary supercomputers or in cloud datacenters providing on-demand compute services, XPUs optimized for AI, deep learning and similarly demanding workloads must be architected into the host system in a manner that ensures the highest possible performance with minimal power loss. Given the massive investments of time and capital that XPU suppliers and their customers devote to processor-level and system-level optimizations, respectively, wasted performance and power equal wasted opportunity to realize the full potential of these advanced technologies.

Vicor’s award-winning Power-on-Package solution overcomes these challenges by delivering 48V directly to the XPU socket. Doing so enables a 10X reduction in the number of socket pins needed for power, freeing up additional pins for expanded I/O connectivity, while eliminating the power distribution losses associated with delivering high operating current from the motherboard to the XPU. Unlike conventional voltage regulators, Power-on-Package is the only module solution in the industry with the density required to fit in the available space on the XPU substrate, simplifying motherboard design while enabling previously unachievable XPU performance profiles.
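To put the power-delivery arithmetic in concrete terms, the short sketch below works through Ohm’s-law numbers for a hypothetical 300W XPU: the current that must cross the socket when a low core rail (assumed ~1V here) is delivered from a motherboard regulator, versus when 48V is brought onto the package, and how conduction (I²R) loss scales with that current. All figures (300W load, 1V core rail, 1mΩ path resistance, 1A per pin) are illustrative assumptions, not Vicor specifications.

```python
# Back-of-the-envelope comparison of socket current and conduction loss.
# All values are hypothetical, for illustration only (not Vicor data).

def socket_stats(load_w, socket_voltage_v, amps_per_pin=1.0, path_resistance_ohm=0.001):
    """Return (current_A, power_pins_needed, i2r_loss_W) for a given socket voltage."""
    current = load_w / socket_voltage_v            # I = P / V
    pins = current / amps_per_pin                  # pins required just to carry the current
    loss = current ** 2 * path_resistance_ohm      # conduction loss grows with the square of I
    return current, pins, loss

load_w = 300.0  # hypothetical XPU power draw

# Conventional: motherboard regulator delivers the low core rail (~1 V) across the socket.
# Power-on-Package: 48 V crosses the socket; conversion to the core rail happens on-package.
for label, v_socket in (("~1 V core rail", 1.0), ("48 V to socket", 48.0)):
    i, pins, loss = socket_stats(load_w, v_socket)
    print(f"{label:>15}: {i:6.1f} A, ~{pins:5.0f} power pins, {loss:6.2f} W I^2R loss")

# Raising the socket voltage cuts the current proportionally and the I^2R loss
# quadratically; fewer amps also means far fewer pins dedicated to power
# (the article cites roughly a 10X pin reduction in practice).
```

In a real design the achievable pin reduction depends on per-pin current ratings and layout constraints rather than the naive current ratio alone, which is why the practical figure is quoted as roughly 10X.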

Performance, power and space efficiency are of course critically important attributes for all computing applications. But the rapidly accelerating evolution of AI – and its untold promise for autonomous machine learning and problem solving – invites a fresh look at the underlying technology architectures we’ll employ to cultivate it. The sophisticated XPUs at the nerve center of advanced AI technologies demand a new power architecture, and Power-on-Package was conceived to address these requirements head on.

