

Running PerfPMR Scripts: Configuration and Network

AIX configuration information

This is the second installment in a four-part series. Read part one here.

In part one, I looked at different ways to run the PerfPMR diagnostic utility as a whole, focusing on some important flags that aid in the proper execution of PerfPMR. Some of these flags help capture important systemic events like program starts and stops, while others allow us to customize PerfPMR output. Now we’re taking the concept of PerfPMR customization to a deeper level. I’ll show you how to execute only the parts of PerfPMR you’ll need in any given situation. I'll also explain how to add your own scripts so they're executed along with the rest of PerfPMR.

In part one I noted that the majority of the executable files created when installing PerfPMR have a .sh suffix appended to their names. Once you install PerfPMR, do a long listing in your installation directory and you’ll see these *.sh files. Each of them can be run independently of the rest of PerfPMR. While PerfPMR addresses dozens of informational categories, in your own diagnostic efforts you often won’t need all of the voluminous data a full run of PerfPMR provides. In fact, it's very likely you'll focus on one issue at a time; I know that's typical in my own performance practice. Maybe you want only networking data or only storage data. Perhaps you're interested in AIX kernel traces, or you'd just like to look at locking activity.

Thankfully, the PerfPMR developers make it easy to drill down to perform a specific task. With this in mind, let's get into executing individual scripts with PerfPMR. Mind you, this is far from a complete description, but what follows should give you an idea of PerfPMR’s flexibility.


All of my customers – whether they have one AIX system or thousands – keep some sort of record of the configuration of those systems. Generally, these records are compiled into a concise operational “run book” that is easily referenced by everyone in an IT organization. In short, configuration is a big deal, and PerfPMR is a big help in this regard, because it allows you to get detailed configuration information with just a few easy commands.

One of the scripts that's part of the PerfPMR whole is called config.sh. When PerfPMR runs in its default mode, config.sh also runs, producing roughly 50 files, each of which zeroes in on a different configuration aspect of your system. One of these files is called config.sum (sum being short for summary). Along with the output of the prtconf and lparstat -i commands, the config.sum file provides comprehensive information about the way your system is put together. Add or subtract whatever other commands – or files generated by the script – you wish, but config.sum can serve as the basis for a thorough record of configuration.

Generating the config.sum file is easy. As root, preferably in the /perfdata directory you’ve already created, run this command: perfpmr.sh -x config.sh

Then let PerfPMR chug, and take note of the messages it leaves in your terminal session. Start by picking out the config.sum file and FTPing it to your central repository of site information. You’ll have a complete run book in no time.
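Put together, a minimal session looks something like this (a sketch: /perfdata is the working directory mentioned above, perfpmr.sh -x is PerfPMR's run-one-script flag, and config.sh is the configuration script):

```shell
# Run only PerfPMR's configuration script, then check for its summary file.
mkdir -p /perfdata
cd /perfdata
perfpmr.sh -x config.sh    # execute just config.sh, not the full suite
ls -l config.sum           # the summary, alongside the other config files
```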

But before we leave, let’s take a look at some of the other files config.sh produces:

tunables.sum – As a performance specialist, it’s my job to understand how CPU, memory, networking and storage tunables are set in each of my customers' systems. Were I to generate all of the files I needed to do this, I'd have to run separate commands to list each of the vmo, schedo, ioo, no and nfso settings, and then compile all of those lists into one file. Instead, PerfPMR does this automatically. The script generates the tunables.sum file in two parts. First it issues each tuning command (e.g., vmo) with the -Fa flag, which is useful for current tunable settings. Then comes the detailed stuff: a list of every tunable with its current, default and boot-time values, as well as the acceptable range of values allowed for each tunable. These lists are generated for each tuning category with the -FL flags. I always put the -FL lists in my run books so I have every tunable setting at my fingertips should a system run into performance difficulties.
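The two-part structure is easy to reproduce by hand if you ever want a quick tunables snapshot without a PerfPMR run. A minimal sketch (AIX-only commands, run as root; the output file name simply mirrors PerfPMR's):

```shell
# Build a tunables summary the way the script does: first -Fa (current
# values), then -FL (current, default and boot values plus the accepted
# range for each tunable) for every tuning category.
out=tunables.sum
: > "$out"                                 # start with an empty file
for cmd in vmo schedo ioo no nfso; do
    echo "### $cmd -Fa ###" >> "$out"
    $cmd -Fa                >> "$out"
    echo "### $cmd -FL ###" >> "$out"
    $cmd -FL                >> "$out"
done
```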

mem_details_dir – This directory contains a number of files that tell you how memory is used in your system. There are files written by the svmon command that provide detailed process, segment and user data. mem_details_dir also includes a file called memdetails.out, which tells you where every page of memory in your system is allocated, including kernel heap, file and text data. The contents of this directory are often all you’ll need to debug many memory issues, so be sure to refer to memdetails.out in any diagnostic undertaking.
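If you want a quick look at the same kind of data outside of PerfPMR, svmon is the tool behind many of these files. A small sketch (AIX-only commands; the flags shown are common global and per-process views, not necessarily the exact invocations the script uses):

```shell
# Global memory picture: total, in-use, free and pinned pages.
svmon -G
# The ten biggest memory consumers, one section per process.
svmon -P -t 10
```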

instfix.out – This file lists every APAR that's applied in your system. It's very handy for troubleshooting AIX upgrade difficulties.
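You can answer the same question interactively with instfix; a sketch (the APAR number below is a made-up placeholder):

```shell
# Check whether one specific APAR is installed (-i report, -k keyword).
instfix -ik IV12345        # IV12345 is a hypothetical APAR number
# Count every fix applied -- roughly what instfix.out captures.
instfix -i | wc -l
```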

lssrad.out – This file lists your system's Scheduler Resource Allocation Domains (SRADs); these are logical groupings of CPUs and memory. lssrad.out is an essential starting point when tracking down poor thread affinity and orphan memory.
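The same information is available live with one command; a sketch, with illustrative (not real) output:

```shell
# -a: show all SRADs; -v: verbose, including memory and CPUs per SRAD.
lssrad -av
# Illustrative output -- two SRADs, each with its memory (MB) and CPU range:
# REF1   SRAD        MEM      CPU
# 0
#          0   31806.31      0-15
# 1
#          1   31553.75     16-31
```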

lsrset.out – This file lists your system's resource sets. RSETs let you build an “LPAR-within-an-LPAR” and isolate different workloads in your system. They also come in handy when you have workloads that compete with one another for CPU time and memory. Speaking of resource sets, you’ll need the information in the mempools.out and vmpools.out files that the script generates to build them effectively. (I recently devoted an article to RSETs. Read it here.)
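Building an RSET from that information follows a simple pattern; a hedged sketch (the RSET name, CPU list and program path are examples only):

```shell
# Register a named RSET covering CPUs 0-3 (names take the form namespace/rsetname).
mkrset -c 0-3 test/four_cpus
# Start a workload attached to that RSET.
execrset test/four_cpus -e /usr/local/bin/myapp    # myapp is a placeholder
# List the RSETs registered on the system -- what lsrset.out captures.
lsrset -av
```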

Mark J. Ray has been working with AIX for 23 years, 18 of which have been spent in performance. His mission is to make the diagnosis and remediation of the most difficult and complex performance issues easy to understand and implement. Mark can be reached at



