
Don’t Fall Into the Internal Throughput Rate Trap

Many people can explain that the z Systems platform has the power to do things that other machines can't. Others counter with benchmark results, or point to the power and scalability of aggregated clusters of smaller machines, as proof that such assertions aren't true. This debate has gone on for a long time, largely because the participants fall into the Internal Throughput Rate (ITR) trap.

To resolve the issue effectively, we need to understand what is meant by compute power. This article will explain compute power by defining the underlying performance and capacity fundamentals:

  • ITR
  • External Throughput Rate (ETR)
  • Response time

Compute Power

Mainframes, enterprise servers, cloud, converged and supercomputer infrastructures are described as possessing massive compute power. These IT solutions are massive when compared to cell phones, tablets, laptops or devices in the Internet of Things. However, to usefully describe this power, we need to quantify what these solutions can do beyond using descriptive language.

Inevitably, we ask, "How much work can this thing do?" This is the question that leads to the notion of system throughput. The simplest quantification of power is to count cores, the processing elements of an IT solution. Solutions with massive compute power contain dozens, hundreds or even thousands of cores, thus earning the description.

Defining Throughput

Counting cores rests on the faulty assumption that all cores are created equal. By this logic, we could hypothetically calculate work per unit time by simply multiplying the core count by clock frequency. However, there are three problems with this approach. It assumes that:

  1. All cores do the same amount of work per clock cycle
  2. All cores execute one thread of work at a time
  3. N cores can do N times the work of one core

None of these are true. Luckily, the following definition of throughput avoids these pitfalls while retaining a relatively simple model related to machine characteristics.

Throughput Rate (TR) = Thread Count (TC) x Thread Speed (TS)

Using thread count rather than core count allows us to include modern multi-threaded cores. Using thread speed rather than clock rate allows us to account for compiler, hardware design and scaling effects, all of which clock rate ignores.
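The difference between the two models can be sketched in a few lines of code. All numbers below are hypothetical and purely illustrative; the point is that a naive cores-times-clock figure and a thread-based throughput figure diverge once multi-threading and per-thread speed are accounted for.

```python
# Sketch of the throughput model TR = TC x TS, using hypothetical
# machine characteristics (all numbers are illustrative, not measured).

def throughput(thread_count, thread_speed):
    """Throughput Rate (TR) = Thread Count (TC) x Thread Speed (TS)."""
    return thread_count * thread_speed

# Naive model: cores x clock frequency, treating all cores as equal.
cores = 8
clock_ghz = 5.0
naive_estimate = cores * clock_ghz          # 40.0 "units" of work

# Thread-based model: 2 threads per core (SMT), with a per-thread
# speed that folds in compiler, hardware design and scaling effects.
# SMT raises total throughput even though each thread individually
# runs slower than a single dedicated thread would.
threads = cores * 2                          # TC = 16
thread_speed = 3.2                           # TS, hypothetical tx/sec per thread
tr = throughput(threads, thread_speed)       # TR = 16 x 3.2
```

The naive estimate and the thread-based estimate answer different questions: the first describes raw clock cycles available, the second describes useful work per unit time under the stated assumptions.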

This model also has the advantage of lumping all of the adjustments into thread speed, for which proxy measurements can be readily found and successively refined. For example, measuring a single thread of transactions on one core establishes a baseline speed. Other cases involving more threads and/or more cores establish the effect of scaling.
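The refinement process described above can be sketched numerically. The figures here are hypothetical: a single-thread run establishes a baseline speed, and a fully loaded run reveals how much per-thread speed degrades with scale.

```python
# Illustrative refinement of thread speed from proxy measurements
# (all numbers are hypothetical).

baseline_speed = 4.0        # tx/sec: one thread of transactions on one core
loaded_threads = 16         # all threads busy in a second measurement
loaded_throughput = 51.2    # tx/sec observed across the loaded machine

# Effective per-thread speed under load, and the scaling factor
# relative to the single-thread baseline.
effective_speed = loaded_throughput / loaded_threads   # 3.2 tx/sec
scaling_factor = effective_speed / baseline_speed      # 0.8
```

Successive measurements of this kind let the scaling factor be refined case by case, rather than assumed to be 1.0 as the naive core-count model implicitly does.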

Joe Temple is a retired IBM Distinguished Engineer and principal consultant of Low Country North Shore Consulting.



