The SWAMI
of VSE/VSAM

 

General Questions and Answers
What about VM/VSE environments?

VSE -- "Native" and "Under VM"

When VSE runs as a guest operating system in a VM environment, there are certain special considerations -- particularly regarding performance and integrity.

VM/VSE shops often run multiple production VSE guests with batch and/or on-line workloads. Normally, the additional CPU load caused by running VSE as a guest of VM ranges from a few percent to as much as 40-50 percent.

VM Performance Measurement and Tuning Basics

Some of the factors that impact performance in VM/VSE environments can be minimized simply, while others require insightful tuning.

  • The amount of VM CPU overhead depends on the VM environment itself, and the amount of VM services required for the workloads running in the guest(s). Almost all VSE usage of VM services occurs during processing of I/O requests. If applications do as little I/O (in terms of the number of I/O requests, and the complexity of those I/O requests) as possible, then the VM overhead will be as low as possible for a given environment.

    Measuring VM overhead -- overall or for a single guest -- can be done readily with the CP INDICATE or CP INDICATE USER commands, which show both CP's measured TOTAL TIME (virtual machine CPU time plus CP's overhead time on behalf of that virtual machine) and VIRTUAL TIME (the virtual machine CPU time alone). From these two values, compute the T-V ratio (TOTAL TIME divided by VIRTUAL TIME). To see the overhead directly, subtract 1.0 from the result of the division.
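
    For example (the numbers here are illustrative only): suppose INDICATE USER for a VSE guest reports a total CPU time of 36.5 seconds and a virtual CPU time of 26.1 seconds. The T-V ratio is 36.5 / 26.1 = 1.40, and 1.40 - 1.0 = 0.40, so CP overhead for that guest is roughly 40 percent of the guest's own CPU consumption.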

  • VM environments (from low to high overhead) include:
    1. V=R (or V=F) guest with no minidisk I/O

      VM overhead (as measured by VM's INDICATE USER command) will be on the order of 5 to 10 percent (0.05 to 0.10, or a T-V ratio of 1.05 to 1.10) in this mode.

    2. V=R (or V=F) guest with minidisk I/O

      VM overhead (as measured by VM's INDICATE USER command) will be on the order of 25 to 30 percent (0.25 to 0.30, T-V ratio 1.25 to 1.30) in this mode.

    3. V=V guest

      VM overhead (as measured by VM's INDICATE USER command) will be on the order of 40 to 50 percent (0.40 to 0.50, or a T-V ratio of 1.4 to 1.5) in this mode.

    The above estimates of overhead assume a fairly typical VM/VSE batch and on-line workload mix with significant I/O activity.

    If there is little or no I/O activity, then there may be little difference among these environments, as most VM overhead experienced by VSE guests is due to I/O handling.
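
    As a rough illustration of what those ratios mean, using the estimates above: a batch job that consumes 10 minutes of virtual CPU time would cost roughly 10.5 to 11 minutes of total CPU time in case 1 (T-V ratio 1.05 to 1.10), but roughly 14 to 15 minutes in case 3 (T-V ratio 1.4 to 1.5).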

    Often, VM users choose a V=V guest with minidisk I/O because of its flexibility -- it is very simple to set up this environment to permit full sharing of all disk devices, so that any job can potentially run in any guest.

    This flexibility comes at a price, however. In addition to the VM overhead involved with minidisk I/O, there will be additional I/O (and overhead for it) associated with VSE's inter-system locking mechanism (the lock file). In the ideal case, this too can be minimized if separate minidisks are defined for each guest and only those files which MUST be shared are placed on minidisks which are shared. Then, lock I/O activity will occur only for those files which must be shared.

    Consider what happens when no careful distinction is made between files which MUST be shared and files which could be kept separate:

    • All (or almost all) disks are defined as shared.
    • All access to any file on those disks requires sharing services.
    • OPEN processing times are elongated.
    • Throughput is reduced due to higher CPU consumption and longer delays during critical processing, such as OPEN.

    Compare this to a different, much lower-overhead approach (a sample directory layout follows this comparison):

    • Only those disks which have files which MUST be shared are defined as shared.
    • Only access to those files requires sharing services, and all other file accesses are done without any sharing considerations.
    • OPEN times are not delayed because of VTOC or catalog enqueue activity.
    • Batch cycle times and job elapsed times are reduced, as more CPU power is available to be applied to the workload.
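
    As a rough sketch of how such a setup might look in the VM user directory (the guest names, device numbers, volume serials, extents, and link modes below are illustrative assumptions, not a prescription -- your installation's conventions will differ):

      * In VSE1's directory entry: work packs private to this guest
      MDISK 240 3390 0001 3338 VSEWK1 MR
      MDISK 241 3390 0001 3338 VSEWK2 MR
      * Only the pack holding files that MUST be shared (including
      * the VSE lock file) is made writable by more than one guest
      MDISK 250 3390 0001 3338 VSESHR MW

      * In VSE2's directory entry: link only to the shared pack
      LINK VSE1 250 250 MW

    With a layout like this, only I/O to the shared 250 pack involves sharing services; all other minidisk I/O proceeds without inter-system locking. Depending on how DASD sharing is set up, the installation may need the V form of the link mode (for example MWV) so that CP simulates reserve/release for the shared pack.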

    Exploiting VM features for VSE performance benefits

    In many cases, careful users can buy back more performance for their VM/VSE environments by using VM performance features:

    • VM Virtual Disk (VDISK) for the VSE Lock File (see the sample definition after this list)
    • Additional, smaller, unshared VM minidisks instead of fewer, larger and shared disks
    • DB2 for VM using VM shared data spaces for less CPU overhead and reduced I/O compared to DB2 for VSE
    • VM V=R or V=F environments instead of V=V environments for production workloads
    • VM dedicated disks instead of minidisks for high volume I/O activity
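
    For the first item, a virtual disk in storage (VDISK) can be defined in the owning guest's directory entry (VSE1 here, as in the earlier sketch) and linked by the other sharing guests, so that lock file I/O is satisfied in VM storage rather than on real DASD. A minimal sketch, with an illustrative device number and block count (adjust both to your installation):

      * FBA virtual disk in storage to hold the VSE lock file
      MDISK 0FD FB-512 V-DISK 64000 MW
      * The other sharing guests link to it
      LINK VSE1 0FD 0FD MW

    Because a virtual disk exists only in VM storage, it should hold only data that does not need to survive a VM outage; the VSE lock file is a good candidate because it is rebuilt when the sharing VSE systems are re-IPLed.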

   
 

This entire site -- including all its pages and content --
is the intellectual property of and copyright © 2002-2003 by
Dan Janda, theswami@epix.net