
MAINFRAME > Administrator > Workload Management

What’s Really Driving Peak CPU Usage?

In many cases, batch processing accounts for as much as 40 percent of peak usage


Illustration by Daniel Bejar / theispot.com

Many installations using some form of subcapacity pricing believe their peak CPU usage is driven by their online workload. But is that really true?

During my tenure with MVS Solutions, I often asked customers what they thought drove their usage peaks. Typically, they’d say the peaks were entirely dependent on online processing. However, I’ve long suspected that batch processing also plays a significant role. So I decided to see if my suspicions were justified.

Some colleagues and I devised a way to analyze monthly usage and calculate both the overall four-hour rolling average and the four-hour rolling average for batch. The results among the participants in our study indicated that batch processing accounted for a surprising 30 to 40 percent of the peaks in many cases. That’s a significant contributor to usage and ultimately results in increased costs.
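The calculation described above can be sketched in a few lines. This is a minimal illustration, not the tooling the study actually used: the 15-minute sample interval, the MSU unit, and the synthetic data are all assumptions for the example.

```python
# Sketch: compute the four-hour rolling average (4HRA) for total and
# batch CPU usage from interval samples, then estimate batch's share
# of the monthly peak. Interval length and units are assumptions.

from collections import deque

INTERVAL_MIN = 15                      # assumed sample interval (minutes)
WINDOW = (4 * 60) // INTERVAL_MIN      # samples per four-hour window

def rolling_avg(samples, window=WINDOW):
    """Trailing rolling average; yields one value per sample
    once the window is full."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        buf.append(s)
        if len(buf) == window:
            out.append(sum(buf) / window)
    return out

def batch_share_of_peak(total_msu, batch_msu):
    """Return (peak total 4HRA, batch 4HRA at that point, batch share)."""
    total_4hra = rolling_avg(total_msu)
    batch_4hra = rolling_avg(batch_msu)
    i = max(range(len(total_4hra)), key=lambda k: total_4hra[k])
    return total_4hra[i], batch_4hra[i], batch_4hra[i] / total_4hra[i]
```

Run against a month of interval data, this yields the same kind of figure the study produced: the batch component of the peak four-hour rolling average, expressed as a fraction of the whole.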

 

At the Source

Instantaneous spikes in usage, however, aren’t really the issue, other than the fact they drive the four-hour rolling average higher. Peaks on a given LPAR aren’t usually the problem either, because most costs are based on the highest four-hour average usage of the month on each physical machine or central electronic complex (CEC). Of course, all LPARs contribute to this average, so any approach to controlling your peaks has to take this into account.

To help control four-hour rolling averages, IBM provides a mechanism, known as soft capping, in which you can specify a limit for the four-hour rolling average. All workloads running on the CEC are affected to some degree when capping causes usage to be compressed below the level the demand requires. So it’s important to plan your cap level carefully. If you set it too low, it might interfere with your ability to meet service objectives for online and high-priority batch work. But if you set it too high, you don’t get the savings you might be seeking.

Is there a way to reduce your peaks safely, without lessening the service needed for really important work? Recognizing that a business’s online systems are what they are and must get what they need, the focus falls on batch processing.

Ideally, low-importance batch workloads would be slowed down as the four-hour rolling average approaches its capped level. This prevents overloading the machine with work that can be run later. It also minimizes the impact of running into the cap, thus avoiding any negative effects on your online and critical batch workloads. Unfortunately, the right moment to intervene is difficult to detect, especially because instantaneous peaks and four-hour rolling average peaks often don't coincide.
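One way to picture the idea is a simple ramp-down policy: run all low-importance batch initiators while the four-hour rolling average has comfortable headroom, then scale them back as it approaches the cap. The 85 percent threshold, the linear ramp, and the initiator counts below are illustrative assumptions, not an IBM or vendor algorithm.

```python
# Sketch: throttle low-importance batch as the 4HRA nears the soft
# cap. Thresholds and the linear ramp are illustrative assumptions.

def allowed_initiators(current_4hra, cap_msu, max_inits,
                       start_throttle=0.85):
    """Return how many low-importance batch initiators to keep active.

    Below start_throttle * cap, run all of them; between that point
    and the cap, ramp down linearly; at or above the cap, run none.
    """
    if current_4hra >= cap_msu:
        return 0
    threshold = start_throttle * cap_msu
    if current_4hra <= threshold:
        return max_inits
    headroom = (cap_msu - current_4hra) / (cap_msu - threshold)
    return max(1, int(max_inits * headroom))
```

The point of the ramp is exactly the difficulty noted above: because instantaneous peaks and 4HRA peaks don't coincide, a policy keyed to the rolling average backs off gradually rather than reacting to momentary spikes.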

 

Setting Priorities

The first step is to define the batch workload in terms of importance. This can be done using existing service-level agreements. Another good source of de facto service-level information is your disaster recovery (DR) plan. A DR plan typically details the work that must be run in the event of a major problem. That workload is your highest priority.

A DR plan is likely to detail a second level of work that should be run, if possible. That level is your next highest priority. Those two levels of priority most likely include all or most of the batch workload critical to the business and to supporting the online processes.

Clearly, you want those high-importance workloads to get consistently good service, even when capping is in effect. The remaining workload can be subdivided, too, but any lower-importance work that doesn't directly impact the business can be grudgingly, but safely, sacrificed, if necessary. Typically, such lower-level batch workloads include testing, which is considered both less urgent and less important—except to application developers, of course.
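The tiering described above reduces to a small lookup: jobs in the DR plan's must-run list are critical, jobs in its run-if-possible list come next, and everything else is deferrable. The job names and tier labels here are made up for illustration; a real scheme would be driven from your own DR plan and SLAs.

```python
# Sketch: derive batch importance tiers from DR-plan membership,
# used here as de facto service-level information. Job names and
# tier numbering are hypothetical.

DR_MUST_RUN = {"PAYROLL1", "GLPOST"}   # must run in a disaster
DR_SHOULD_RUN = {"INVRPT"}             # run if possible

def importance_tier(jobname):
    """Map a job to a tier: 1 = critical, 2 = important,
    3 = deferrable (e.g. test work)."""
    if jobname in DR_MUST_RUN:
        return 1
    if jobname in DR_SHOULD_RUN:
        return 2
    return 3
```

A throttling policy would then exempt tiers 1 and 2 and slow only tier-3 work as the rolling average climbs toward the cap.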

 

Martin Wills is a product specialist at MVS Solutions Inc.


