Using Largely Off-the-Shelf Components, IBM Research Is Developing a Model for a Pizza-Box-Sized Data Center

IBM scientist Ronald Luijten, who is working on DOME, writes a formula on the window outside his office, part of an effort to analytically optimize processor chip power consumption and program parallelism for the project.

Imagine a data center in a box—one that uses a fraction of the energy a typical data center does, one that’s cooled by hot water, one that focuses on a free-computing model emphasizing data-centric computing.

That’s what Ronald Luijten, IBM Research data motion architect, has done. In fact, he’s proved it on a small scale, using a smartphone-sized computer based on an embedded chip from Freescale to host both Linux* and IBM DB2*.

“At the end of the day, we don’t care about the machines. We don’t care about how fast the systems run. What we care about is the answer and the cost and time to get to the answer.”
—Ronald Luijten, IBM Research data motion architect, holding the microserver

Although his long-term goal is in fact a pint-sized data center, Luijten and his team are now focused on helping researchers from ASTRON—the Netherlands Institute for Radio Astronomy—build an IT roadmap to process the petabytes of data that will be gathered by the Square Kilometre Array (SKA), a collaboration known as the DOME project. This work will help unlock some of the mysteries of the universe on a relatively modest budget. IBM Systems Magazine spoke with Luijten about the DOME project's advancements.

Q. Why were you interested in developing a microserver?
About five years ago, I recognized that we have five brick walls that are going to hit us in the data center. Energy is one of them. Everybody agrees that energy use is a major concern. Then we have von Neumann's bottleneck, or the memory wall. We also have the multicore programming wall. All of our computer science students learn how to program in Java*, which doesn't support multicores at all. Intel* might put 128 cores on a chip, but people coming out of universities are only going to use one of them.

Then we have Moore's Law. That actually stopped scaling a long time ago in terms of single-thread performance, so the only way we can improve system performance is by integrating more components more densely, but this has resilience issues, or a resilience wall. People involved in high-performance computing (HPC) are worried that future large-scale machines will run only three minutes before a service engineer has to replace a part. Today, the mean time between HPC failures is about two days, which also isn't great, but you can at least get a computation done in two days. You won't be able to do that in three minutes. So focusing on density and integration is what I am doing with the microserver.
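The resilience concern scales roughly inversely with component count: if failures are independent, a system's mean time between failures is the per-component MTBF divided by the number of components. A minimal back-of-envelope sketch (the node counts and per-node MTBF are illustrative assumptions, not figures from the interview):

```python
# Illustrative MTBF scaling: assuming independent, identically
# distributed failures, system MTBF = per-component MTBF / N.

HOURS_PER_YEAR = 8766  # average, including leap years

def system_mtbf_hours(component_mtbf_years: float, n_components: int) -> float:
    """MTBF of a system of n identical, independent components, in hours."""
    return component_mtbf_years * HOURS_PER_YEAR / n_components

# A 10,000-node machine built from nodes with an assumed 5-year MTBF:
print(f"{system_mtbf_hours(5, 10_000):.1f} hours")            # ~4.4 hours

# Scale to a million components and the MTBF drops to minutes:
print(f"{system_mtbf_hours(5, 1_000_000) * 60:.1f} minutes")  # ~2.6 minutes
```

Under these assumed numbers, a million-component machine fails every few minutes, which is the scenario behind the three-minute worry above.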

Then, finally, we have the algorithm wall. This is less well known, but given the fundamental challenge we have, we really need to rethink the algorithms. So five years ago I started a project, later funded as part of DOME, called "algorithms and machines," where we basically said we needed a holistic approach to go from the problem to the solution. For each problem, you have a choice of models. For each model, you have a choice of algorithms to solve it. For each algorithm, you have a choice of implementations. For each implementation, you have a choice of systems to run it on. Making each of those choices in isolation leads to an ineffective solution, so we need a holistic design approach that goes from the problem all the way to the solution, because at the end of the day, we don't care about the machines. We don't care about how fast the systems run. What we care about is the answer and the cost and time to get to the answer, and that is the basis for data-centric computing.
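The layered choices Luijten describes multiply, and picking the locally best option at each layer can still produce a poor end-to-end result, because cost and time-to-answer only emerge from the full combination. A minimal sketch of that idea, with invented option names and made-up time/cost numbers purely for illustration:

```python
from itertools import product

# Hypothetical design space: (name, time-to-answer in hours, cost units).
# All names and numbers are invented for illustration.
models          = [("coarse", 10, 1), ("fine", 4, 3)]
algorithms      = [("direct", 5, 1), ("iterative", 2, 2)]
implementations = [("portable", 3, 1), ("tuned", 1, 4)]
systems         = [("cluster", 2, 2), ("microserver", 3, 1)]

def score(combo):
    """Holistic metric: total time-to-answer times total cost (lower is better)."""
    time = sum(t for _, t, _ in combo)
    cost = sum(c for _, _, c in combo)
    return time * cost

# Layer-by-layer: pick the fastest option at each layer independently.
greedy = tuple(min(layer, key=lambda o: o[1])
               for layer in (models, algorithms, implementations, systems))

# Holistic: evaluate every end-to-end combination.
best = min(product(models, algorithms, implementations, systems), key=score)

print("layer-by-layer:", [o[0] for o in greedy], "score =", score(greedy))
print("holistic      :", [o[0] for o in best],   "score =", score(best))
```

With these made-up numbers, choosing the fastest option at every layer scores 99, while the best end-to-end combination scores 84: the holistic search accepts a slower layer where it buys a cheaper overall answer.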

Q. You’ve been quoted as saying that computing is free but data is not. What do you mean by that?
It became clear we were at the start of a revolution, and that is why I say the computer is free but data is not. We're now at the stage where anyone can build a high-quality, high-performance server at very low cost, almost free. Data, however, is not free, because the business value lies in the big data, not in the compute. The other insight comes from looking at energy. It turns out that 98 percent of the energy in any server—whether you look at the chip level, the box level or the data-center level—is used to move data from point A to point B. The other 2 percent goes to value-add computing functions, like calculating interest on a mortgage.
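The data-movement claim can be sanity-checked against published per-operation energy estimates: in roughly 45 nm CMOS, a double-precision floating-point operation costs on the order of 20 pJ, while fetching a 64-bit operand from off-chip DRAM costs on the order of nanojoules. A back-of-envelope sketch (the energy constants are rough, order-of-magnitude literature values, not DOME measurements):

```python
# Back-of-envelope: share of server energy spent moving data vs. computing.
# Constants are rough order-of-magnitude literature values for ~45 nm CMOS,
# not measurements from the DOME project.

FLOP_PJ = 20          # one double-precision floating-point op, ~pJ
DRAM_64BIT_PJ = 2000  # fetching one 64-bit operand from off-chip DRAM, ~pJ

def data_movement_share(flops_per_operand: float) -> float:
    """Fraction of energy spent on data movement for a kernel performing
    `flops_per_operand` FLOPs per 64-bit operand fetched from DRAM."""
    compute = flops_per_operand * FLOP_PJ
    return DRAM_64BIT_PJ / (DRAM_64BIT_PJ + compute)

# A streaming, memory-bound kernel (1 FLOP per operand fetched):
print(f"{data_movement_share(1):.0%}")   # ~99%

# Even at 10 FLOPs per operand, movement still dominates:
print(f"{data_movement_share(10):.0%}")  # ~91%
```

Under these assumed constants, a memory-bound workload spends roughly 99 percent of its energy on data movement, consistent in spirit with the 98 percent figure cited above.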

Jim Utsler, IBM Systems Magazine senior writer, has been covering the technology field for more than a decade.
