
Can FPGAs or Reconfigurable Processors Go Mainstream?

June 30, 2015

One of the most interesting trends I've seen in server computing is the move away from standard CPUs and toward doing more processing on graphics chips (GPUs) and reconfigurable processors known as field programmable gate arrays (FPGAs). This phenomenon is often referred to as heterogeneous computing.

The concept here isn't new—GPUs and other accelerators have been increasingly common in high-performance computing (HPC) or supercomputers for years. But lately, we've been hearing more about how Intel has customized some server chip packages to include FPGAs in addition to the traditional CPU, aimed mainly at big hyperscale cloud computing providers that have specific algorithms they can run as hardware instructions on the FPGAs. This should be much faster than executing those algorithms in software on a general-purpose CPU.

This was a key driver of Intel's recent plan to acquire FPGA maker Altera. Intel CEO Brian Krzanich said he expects up to 30 percent of cloud workloads to have some sort of FPGA acceleration by the end of the decade. Microsoft is already using Altera FPGAs to power many of its cloud services such as Bing search.

There has been one big obstacle to most companies using FPGAs—or for that matter GPUs—in more typical corporate computing cases: making the software run concurrently on these chips alongside the CPU is just hard. (For corporate workloads and even HPC, you'll always need some CPUs; in other kinds of applications, such as networking, hardware companies may just use an FPGA on its own.) For GPU computing, we've seen things like Nvidia's CUDA and the Khronos Group's OpenCL standard, which make things easier, and we've certainly seen a lot of HPC and machine-learning algorithms use GPUs. Now FPGA makers such as Altera support OpenCL as well, but for more general corporate computing, the approach has so far proven too difficult.

Lately, I've talked to a couple of companies that hope to make this easier.

Bitfusion is a startup I first saw at TechCrunch Disrupt; its technology is aimed at letting you move an application from the CPU to a GPU or FPGA without rewriting it for each platform. As CEO Subbu Rama explained, the package now works by looking for common open-source libraries used by software developers and replacing the functions within them with versions that can take advantage of the GPU or FPGA. Big companies might be able to rewrite their code, he said, but mid-market companies cannot. Applications include scientific computing, financial applications such as risk analysis and high-frequency trading, and data analytics such as working with oil and gas sensor data.
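The general technique Rama describes—swapping a well-known library function for a drop-in version that dispatches to an accelerator when one is present—can be sketched in a few lines of Python. This is purely an illustration of library interposition under my own assumptions; the names (`cpu_dot`, `has_accelerator`, `accelerated_dot`) are hypothetical and are not Bitfusion's actual API.

```python
# Illustrative sketch of accelerator interposition: rebind a library's
# public function to a wrapper that offloads when hardware is available.
# All names here are hypothetical, not Bitfusion's real mechanism.
import types

def cpu_dot(a, b):
    """Reference implementation, as a common library might ship it."""
    return sum(x * y for x, y in zip(a, b))

def has_accelerator():
    """Stand-in for probing for a GPU/FPGA; always False in this sketch."""
    return False

def accelerated_dot(a, b):
    """Dispatch wrapper: offload if possible, else fall back to the CPU."""
    if has_accelerator():
        # A real system would enqueue a GPU kernel or call into an FPGA
        # bitstream here; this sketch always takes the CPU path below.
        pass
    return cpu_dot(a, b)

# Pretend library module, then transparent substitution of its function.
# Existing callers keep calling mathlib.dot and are unchanged.
mathlib = types.SimpleNamespace(dot=cpu_dot)
mathlib.dot = accelerated_dot

print(mathlib.dot([1, 2, 3], [4, 5, 6]))  # prints 32
```

The appeal of this design is that the application never changes: only the binding between the library's public name and its implementation does, which is why it suits companies that cannot afford a rewrite.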

Eventually, he said this could work with any language that calls such libraries. He noted that replacing the libraries may not be quite as efficient as writing custom code for FPGAs or GPUs, but it's much easier.

Bitfusion plans to offer its products in three different models—as pure software for companies that already have their own accelerators; pre-installed on appliances; or for applications deployed in the cloud, through a partnership with Rackspace. Initially, this will use Altera FPGAs, though the company says it could work with other processors as well. Rama says initial customers are using this now, with public availability planned in the next couple of months.

SRC Saturn 1 Server

SRC is taking a somewhat different approach. It has been creating "reconfigurable servers" for government agencies since 1999, and is now making a solution aimed at hyperscale data centers and Web operations. Called the Saturn 1 server, it is a cartridge that plugs into HP's Moonshot chassis, and SRC claims it can provide compute performance typically 100 times faster than that of traditional microprocessor designs. (The company also sells larger rack-mounted and full-size systems, typically for its larger customers.)

What makes this different is a special compiler, known as Carte, that converts code into a silicon design that can run on the FPGA architecture, according to CEO Jon Huppenthal. He told me SRC has spent years refining the compiler, initially for business customers, since the firm was founded by supercomputer pioneer Seymour Cray and Jim Guzy in the 1990s. One difference in SRC's approach, he said, is that Carte isn't meant for generic systems; it is tied specifically to SRC's architecture, which gives it more performance and consistency. The Saturn 1 uses two Altera FPGAs (one runs user code; the other keeps the system running quickly) along with one Intel processor. The company is now on its 12th generation of reconfigurable processors, he added.

This is a more expensive solution, aimed mostly at rather large computing centers, but it is still more accessible than earlier approaches.

The idea of using FPGAs or reconfigurable processors for more mainstream tasks is not new, but it has taken a long time to become even a possibility for customers outside of hardware design or military applications. These new approaches could mark the start of broader adoption, but only if the price/performance gains really match vendor claims and the technology becomes easier to use. Both companies' offerings are a step in that direction.

About Michael J. Miller

Former Editor in Chief

Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. From 1991 to 2005, Miller was editor-in-chief of PC Magazine, responsible for the editorial direction, quality, and presentation of the world's largest computer publication. No investment advice is offered in this column. All duties are disclaimed. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed, and no disclosure of securities transactions will be made.