



PowerVM Processor Virtualization 101 Part 2

June 25, 2014

Chris Francois completes his two-part blog series on PowerVM Processor Virtualization 101. Read PowerVM Processor Virtualization 101 part 1 if you missed it.

This is the second installment in a two-part blog series providing a basic introduction to PowerVM processor virtualization terminology and acronyms. The first part was heavy on terminology. This part ties the terminology together and provides a better understanding of the trade-offs involved in PowerVM LPAR configuration as it pertains to processor virtualization. The IBM Redbooks publication, “IBM PowerVM Virtualization Introduction and Configuration,” provides much greater detail. For an in-depth treatment of this material from an IBM i perspective, see “Under the Hood: POWER7 Logical Partitions.”

PowerVM implements serial sharing of physical processors, with entitled capacity commitments enforced through periodic intervals of time called “dispatch windows.” The sharing is serial because the physical processor is dispatched exclusively to one partition at a time, regardless of the processor's thread context (e.g., SMT4). Entitled capacity represents a claim on a fraction of physical processor dispatch time in the dispatch window. For example, assuming the dispatch window period is 10 milliseconds, 2.0 processor units entitled capacity is a claim on 20 milliseconds of physical processor dispatch time. These entitled capacity claims are “use it or lose it”; every dispatch window the LPAR's entitled capacity commitment is replenished without regard to history. For POWER8, PowerVM requires a minimum of 0.05 processor unit and a maximum of 1.0 processor unit per virtual processor. The total current entitled capacity of all shared processor LPARs cannot exceed the number of processors assigned to the physical shared processor pool, and the current number of dedicated processors cannot exceed the balance of licensed processors in the platform. This is a roundabout way of saying that while there can be more virtual processors than physical processors (up to 20 times), the entitled capacity can never be overcommitted.
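The entitled capacity arithmetic above can be sketched as follows. This is an illustrative model only, not a PowerVM interface; the function names and the 10-millisecond dispatch window period are assumptions taken from the example in the text.

```python
# Illustrative sketch of the entitled-capacity arithmetic described above.
# The names and the 10 ms dispatch window are assumptions, not a PowerVM API.

DISPATCH_WINDOW_MS = 10.0  # assumed dispatch window period


def dispatch_time_ms(entitled_capacity_units):
    """Physical processor dispatch time claimed per dispatch window."""
    return entitled_capacity_units * DISPATCH_WINDOW_MS


def validate_shared_lpar(entitled_capacity, virtual_processors):
    """POWER8 rule of thumb: 0.05 to 1.0 entitled units per virtual processor."""
    per_vp = entitled_capacity / virtual_processors
    return 0.05 <= per_vp <= 1.0


# 2.0 processor units of entitled capacity -> 20 ms of dispatch time per window
assert dispatch_time_ms(2.0) == 20.0
# 2.0 units spread over 4 virtual processors = 0.5 units/VP: valid
assert validate_shared_lpar(2.0, 4)
# 2.0 units spread over 50 virtual processors = 0.04 units/VP: below the 0.05 minimum
assert not validate_shared_lpar(2.0, 50)
```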

The differences between a shared and dedicated processor LPAR go beyond the fact that a dedicated processor LPAR has a fixed ratio (i.e., 1:1) of entitled capacity units to virtual processors. A shared processor LPAR can be configured for uncapped sharing mode, meaning that it is able to use excess shared processor pool capacity above and beyond its entitled capacity. For an uncapped LPAR, the uncapped weight offers some control over the relative distribution of excess shared processor pool capacity among competing uncapped LPARs. A dedicated processor LPAR can be configured for processor sharing, meaning that the operating system can choose to allow the LPAR's idle virtual processors to be temporarily donated to the physical shared processor pool. Oftentimes, this is an effective way to increase the excess shared pool capacity available to uncapped LPARs, and normally the performance impact on the donating LPAR is negligible, as the physical processor is returned to the donating LPAR upon demand.
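The weight-based distribution of excess pool capacity among competing uncapped LPARs can be sketched as a simple proportional split. This is a toy model under assumptions, not PowerVM's actual dispatcher; the function and LPAR names are hypothetical.

```python
# Toy model (not PowerVM internals): excess shared pool capacity is split
# among demanding uncapped LPARs in proportion to their uncapped weights.

def distribute_excess(excess_units, demanding_lpars):
    """demanding_lpars: dict mapping LPAR name -> uncapped weight.
    Returns each LPAR's proportional share of the excess capacity."""
    total_weight = sum(demanding_lpars.values())
    return {name: excess_units * weight / total_weight
            for name, weight in demanding_lpars.items()}


# Two uncapped LPARs compete for 3.0 excess processor units.
# lparA's weight (128) is twice lparB's (64), so it receives twice the share.
shares = distribute_excess(3.0, {"lparA": 128, "lparB": 64})
assert shares["lparA"] == 2.0
assert shares["lparB"] == 1.0
```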

The other major implementation differences mainly impact performance:

  • Resource Isolation – While POWER systems and PowerVM provide secure data isolation between LPARs, the physical nature of serially sharing physical processors can impact the effectiveness of processor caches and other hardware resources. For a dedicated processor LPAR, the operating system has greater control over processor sharing and the associated performance impacts.
  • Processor Affinity – The association between a virtual processor and its underlying physical processor is not set in stone, but for a dedicated processor LPAR, the associations are much more durable than for a shared processor LPAR. The VCPU of a shared processor LPAR may be dispatched to any physical processor of the shared processor pool, whereas the VCPU of a dedicated processor LPAR is generally dispatched to the same physical processor. Architecturally, virtual-to-physical processor associations can change at any time, but for a dedicated processor LPAR, they tend to remain constant during partition activation. Exceptions are processor DLPAR, Live Partition Mobility, and Dynamic Platform Optimizer operations. Software optimizations based on processor affinity are generally more effective for dedicated processor LPARs than for shared processor LPARs.
  • VCPU Latency – Shared processor LPARs can incur entitlement delays, which are the result of entitled capacity being exhausted during the dispatch window, and have a greater potential for VCPU dispatch delays, which are the result of oversubscription of the physical shared processor pool at any moment. Dedicated processor LPARs don't experience entitlement delays, and VCPU dispatch delays are generally negligible.
  • I/O Latency – Interrupts from I/O adapters assigned to shared processor LPARs are routed to any physical processor of the shared processor pool. Sometimes the interrupt can be handled directly, but sometimes it must be forwarded to a VCPU of its assigned LPAR. This forwarding can be a source of latency that does not occur for the interrupts of I/O adapters assigned to dedicated processor LPARs.
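The entitlement delay described for shared processor LPARs can be modeled in a few lines. This is a deliberately simplified sketch under stated assumptions (a single capped LPAR, a 10 ms window, no pool contention), not a description of PowerVM's dispatcher; `run_window` is a hypothetical helper.

```python
# Toy model (assumptions, not PowerVM internals) of an entitlement delay:
# a capped shared LPAR that exhausts its entitled time mid-window must
# wait for the next dispatch window before running again.

WINDOW_MS = 10.0  # assumed dispatch window period


def run_window(entitled_ms, demand_ms):
    """Return (time run, entitlement delay) within one dispatch window."""
    ran = min(demand_ms, entitled_ms)
    # Demand beyond entitlement waits out the remainder of the window.
    delay = (WINDOW_MS - ran) if demand_ms > entitled_ms else 0.0
    return ran, delay


# A 0.6-unit capped LPAR (6 ms per window) with 9 ms of demand runs 6 ms,
# then stalls 4 ms until its entitlement is replenished next window.
assert run_window(6.0, 9.0) == (6.0, 4.0)
# With only 4 ms of demand, the entitlement is never exhausted: no delay.
assert run_window(6.0, 4.0) == (4.0, 0.0)
```

An uncapped LPAR in the same situation could instead consume excess pool capacity (if any exists), which is why uncapped mode mitigates entitlement delays but not dispatch delays.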

There you have it... PowerVM processor virtualization in a nutshell. PowerVM's flexible, industrial-strength processor virtualization supports a range of options and features to maximize the utility of the Power Systems platform. For more in-depth coverage, the “Server virtualization with PowerVM” website is a comprehensive source for this and other PowerVM topics.

References

IBM i 7.2 and POWER8

PowerVM Processor Virtualization 101

Under the Hood: POWER7 Logical Partitions

IBM PowerVM Virtualization Introduction and Configuration

Live Partition Mobility 

Dynamic Platform Optimizer – Affinity and Beyond for IBM Power


